
Swinging Blindly at Fortuna's Wheel

May 15, 2024

          Allow me to set a scene for you: it's January of 2024, and huddled together around a record player are Bill Gates, the seventh-richest person in the world as of this writing, and Sam Altman, the CEO of OpenAI, creator of some of the most advanced artificial intelligence tools in the world, like ChatGPT. Gates has just finished interviewing Altman about his work at OpenAI, and the ways in which his company has influenced and will continue to influence the industry, Silicon Valley, and the world. But now he asks him what kind of music he listens to! And Sam introduces the Max Richter recomposition of Vivaldi's "Four Seasons" - one of the most famous works of classical music in the world. As the first sounds of Spring fade in, the two men sit quietly around the turntable: Altman hunched forward, ready to adjust the needle if necessary, his expression reverent; Gates leaning back a bit in his chair, with a politely bemused look on his face. A little after the music gets fully underway, and the silence has gone on a bit too long, Gates interrupts: "There's parts of 'Four Seasons' that are really famous, right?"


          "Yeah," Altman says.

 

          "Cause it's been used as a theme song for a lot of different things," Gates continues, jogging his own memory.

 

          "Yeah. Yeah." Altman, again.

 

          "And do you wear headphones?" Gates probes further.

 

          "I do." Classic Altman.

 

          "And do your colleagues give you a hard time about listening to classical music?" I urge you all to find this clip and watch it, because I love the way Gates giggles at the end of this question, invoking all of the nervous and nerdy snobbery that Silicon Valley is built upon.

 

          "Uhh... I don't think they know what I listen to because I do wear headphones." Good point there, Altman.

 

          It should be said, since I'm being perhaps a little too tongue-in-cheek, that I did appreciate the other, more serious sections of Gates's interview with Altman. And though my attitude may seem ironic, I genuinely thought it kind of beautiful that Vivaldi's "Four Seasons", a piece written closer to the fall of Constantinople than to the release of OpenAI's ChatGPT, can still sharpen the focus of Sam Altman, one of the most important figures working today. But the exchange between Gates and Altman also highlights the greatest weakness that Silicon Valley has always had, and the danger that that weakness poses moving forward. How many people in the tech industry - a catchall that has grown from a strictly computer-manufacturing industry into an entity that is responsible, essentially, for the creation, maintenance, and regulation of the infrastructure of the modern world - have listened to Antonio Vivaldi's "Four Seasons"? Or know anything about the works of Chaucer? Or have seen more than one woodblock print by Katsushika Hokusai? These are random examples, but the point I'm driving at is that the people who now find themselves responsible for the cultural legacy, social stability, and perhaps even the future survival of our entire species cannot and should not bear that responsibility alone.

 

          During the dawn of the then-called "Information Age", none of this was a problem. The foundational structure of what is now broadly called "The Internet" - computers, servers, satellites, and undersea cables - has very few cultural implications in and of itself. Rather, it posed an engineering problem (how do we design a system that will work the way we want it to?) and a marketing problem (how do we convince people this is how the world will work in the future?). But even if it wasn't a concern at the time, the "culture" of Silicon Valley in its infancy is still very important to this story. And though there have been many notable contributions to the tech industry by women and minorities, tech culture is one of predominantly white, predominantly male nerds. It's a dangerous and far too often held supposition in Silicon Valley, especially amongst those at the top of the corporate ladder, that they know what's best for everybody - which carries the even more sinister implication that their culture also knows what's best for everybody.

 

          But it would be ridiculous to claim that if Silicon Valley were simply made more culturally diverse, it would automatically become better suited to managing the modern world in the way that it does. Another key failing of the tech industry is its lack of diversity in perspective. There is a critically important and oft-overlooked difference in perspective that comes from one's field of study. In simple terms, if you're an engineer or a computer scientist, you see problems in black and white: either the system you built works, or it doesn't. And with such an efficient and easy-to-understand worldview, it's tempting to apply it to every area that Silicon Valley touches. But when dealing with systems as complex and nuanced as cutting-edge artificial intelligence models have become (GPT-4 reportedly has over 1.7 trillion parameters), this kind of thinking becomes dangerous. One of the most prominent AI doomsday theories comes from the idea that a general intelligence AI might take a straightforward instruction, like "optimize commercial air traffic to run more efficiently," and respond by having planes take off without boarding any passengers, skipping necessary safety checks, or landing at velocities that would damage passengers and cargo - fulfilling its straightforward instruction, but causing a lot of damage and chaos in doing so. You can see how such an intelligence could get out of hand. But could these kinds of scenarios, which are imagined by the very sort of black-and-white straightforward thinking that would produce them, be avoided if the people training AI models could themselves think in a more nuanced way? It has already been demonstrated that artificial intelligence models trained to, say, select the best candidates for a job will inherit the bias of the people whose decisions the AI is trained on, oftentimes excluding qualified women or people of color from opportunities in the same way that humans do. Could the same be true for the way that AI thinks about problems generally speaking? I don't think it's a stretch to say that if an AI were trained by a bunch of comparative literature and sociology students, it would emerge with a very different character than one trained by data analysis and computer science students.
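          The bias-inheritance point above can be made concrete with a toy experiment. Everything here is synthetic and hypothetical (the groups, the rates, and the "model" are invented for illustration): a model that simply learns historical hire rates from biased past decisions will faithfully reproduce that bias when scoring new candidates.

```python
import random

random.seed(0)

# Hypothetical biased labeler: qualified group-A candidates were hired
# 90% of the time historically, equally qualified group-B candidates
# only 50% of the time. These rates are invented for illustration.
def historical_decision(qualified, group):
    if not qualified:
        return random.random() < 0.1
    return random.random() < (0.9 if group == "A" else 0.5)

# Generate synthetic candidates and label them with the biased decisions.
candidates = [(random.random() < 0.5, random.choice("AB")) for _ in range(10000)]
labeled = [(q, g, historical_decision(q, g)) for q, g in candidates]

# "Model": estimate P(hired | qualified, group) by frequency counting,
# the simplest possible fit to the historical data.
def hire_rate(qualified, group):
    outcomes = [h for q, g, h in labeled if q == qualified and g == group]
    return sum(outcomes) / len(outcomes)

rate_a = hire_rate(True, "A")
rate_b = hire_rate(True, "B")
print(f"Learned score, qualified group A: {rate_a:.2f}")
print(f"Learned score, qualified group B: {rate_b:.2f}")
# The model scores equally qualified group-B candidates substantially
# lower: it has inherited the bias baked into its training labels.
```

Nothing about the fitting procedure is "wrong" in an engineering sense: the system works exactly as built, which is precisely why a purely black-and-white view of correctness misses the problem.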

 

          It is a tragedy of Shakespearean proportions that many of those in Silicon Valley whose work will have the gravest consequences for us all in the coming years and decades should be operating so independently in every sense of the word: independent of oversight and regulation, sure, but also isolated from fields of study, philosophies, cultures, languages, and worldviews that they know nothing about but that are still vital to take into consideration when the fate of the human race may be hanging in the balance.

 

          Silicon Valley's high-stakes AI arms race is scary to think about, especially when it seems like the people working on it have very little accountability and few guardrails preventing them from sinking us all into dystopia. But I would be remiss if I finished off this article imploring the tech industry to change, allowing myself and all of you reading to pat ourselves on the back for being oh-so informed and oh-so reasonable, without talking about the responsibility we all have in helping to alleviate these issues.

 

          In a world where inter-connectivity is easier than ever before, it is alarming how little meaningful communication takes place between professionals so entrenched in their own fields of expertise. This problem is well documented across a wide range of fields. When generalists are in short supply, and specialists have no common ground to stand on, solutions to complex interdisciplinary problems go unnoticed, and the full scale of the problems we face cannot be collectively conceptualized. Could it be that, as some suggest, the days of the generalist are over? That the world has grown so complicated as to make it impossible for one person to conceive of an entire world of problems and fields of study? Or are the failures of the tech industry to incorporate a more diverse range of perspectives into its work a symptom of a much more systemic problem with the academic institutions that train its workers? These are all important questions, but I'm in no position here to offer answers. What I will say, though, is that I believe it is the responsibility of every working adult in the United States to inform themselves on a wider range of issues. I can't very well expect every Google employee to pick up Plato's Republic or go see a Rembrandt exhibition, but if everyone, especially those working in tech, could put a little more consideration into cultivating a broad, dare I say "generalist", understanding of the world, the problems we face might converge into something a good deal more manageable.


- ALGC

