I don’t have any problem with optimists. Some of my best friends are optimists! But on the question of artificial intelligence, the noise of their exuberance distracts from problems for which there are no easy solutions.
The optimists note that being a Luddite hasn’t traditionally paid. Indeed, those who voiced concerns about the introduction of the electric streetlight (on account of what it would mean for the careers of the lamplighters, then employed in cities across the world to ignite the gas at dusk) are not attractive role models. But for some nagging concerns, the optimists are on their firmest ground when they argue that the AI boom is likely to contribute to long-run economic growth, even allowing for some bubbles and contractions and political difficulties along the way.
The lamplighters were able to find other things to do amid the explosion of dynamic new industries made possible by the mastery of electricity, a story that continues into today’s digital revolution, of which AI is now the most visible element. Their great-great-grandchildren can go to good schools and don’t have to work blue-collar jobs at all—which is a lucky thing, because (proportionally) there are far fewer muscle-workers today than there are programmers and customer service reps and managers of all kinds. Of course, just what the brain-workers are meant to do when technology now comes for their jobs is not yet clear.
But the Luddites were wrong in the past and economists are confident they will be wrong this time, too. Let’s hope so. Because if the economists are off—if the labor market disruptions last a little too long before achieving new equilibria—then one safe prediction is that the era of right-wing populism is going to be supplanted by an era of left-wing populism. In 1994, with communism in disrepute, Edward Luttwak called the coming right-wing reaction to peak liberalism like Babe Ruth calling his shot. Three decades seems just about right for the pendulum to swing back, as all of the no-longer-upwardly-mobile-but-well-credentialled-and-quite-resentful-(former)-Uber-drivers (because now the cars will drive themselves) propel the Mamdanis and Ocasio-Cortezes to generational political competitiveness. Your ex-DoorDash man with a degree from CUNY isn’t going to vote for the party of unbridled enthusiasm for AI!
Another area in which the optimists are likely to prevail: The notion that a godlike AI is going to destroy mankind in some straightforward adversarial fashion—like Westworld, but without the fun bits—seems premature. World-wrecking AI would seem to require first achieving the semi-sentience implied by the terms “general” or “super” artificial intelligence—and the confidence with which AI’s boosters have been predicting the imminence of such a development in recent years would make a medieval alchemist blush. Such a development is unlikely to come soon.
Indeed, in recent months I have detected a new restraint in such predictions, as progress in the capacity of large language models seems to be slowing—or, one might surmise, approaching a limit. Pinocchio can do many amazing things, some of which exceed human capacity. (AI’s most powerful applications involve digesting very large amounts of data, with already impressive results in medical research, battlefield targeting, and intelligence or law enforcement work. Research assistants: It’s definitely coming for your jobs!) But the one thing Pinocchio just can’t be is a real boy. Perhaps some future revolution in biotechnology will render this prediction wrong, but it’s not yet on the horizon, and just one more data center isn’t going to get us to “artificial superintelligence.” It’s easy to see a near- or mid-term future where the failure of LLMs to deliver on the most over-the-top promises of their makers leads to a collapse in market value even while, in the long run, the technology contributes to growth and alters the very nature of human existence.
And it is in this area—the consequences of even non-“super” AI for being human, as regular people make habitual use of LLMs—that the major risks lie, and where the ice beneath the optimists is thinnest.
One can be skeptical of the most robust claims of AI’s boosters and still acknowledge the significance of the moment. The digital revolution as a whole is, as my former classmate Antón Barba-Kay puts it in his brilliant book A Web of Our Own Making, “a second domestication of fire. … Where by Prometheus’s gift we once began by taming heat to hearth, we have now made light smart with meaning – cool fire of the disembodied mind.” The comparison is worth considering. As I have pointed out in another commentary on this subject, scholars believe the mastery of fire had profound effects on human nature itself—making our nutritional processes more efficient, our brains bigger, and (as a knock-on effect) our bodies weaker. This tight, consistent interplay between technology and human nature is effectively unique to our species. It separates man from the animals and has made him master.
The notion that we are in a new Promethean moment, once lodged in the mind, nags. And the greatest threat in such a moment will not be to our species’ wealth or power, but to its freedom.
More than 17 years ago, the journalist Nicholas Carr asked, in the spirit of contrarianism, “Is Google making us stupid?” The answer today is an unambiguous “yes,” and the evidence is everywhere—so ubiquitous and totalizing as almost to escape notice. Consider your own lived experience, and how the various digital claims on your attention conspire to interrupt any kind of intellectual focus. (Given the business strategies that drive these distractions, “conspire” is exactly the right word.) It is true that the immediate access our devices give us to any information anywhere in the world has saved us all (and especially students) a lot of time that was once spent looking things up, wandering stacks, asking questions, committing things to memory, taking copious notes, etc. But the mental discipline, the opportunities for serendipity, the sheer need to mentally organize things and to focus patiently in order to get what we need—all of that is diminished as well. It is into this existing revolutionary situation that AI has emerged, now saving us the time not only of digesting information and considering the thoughts of other humans at length, but of producing our own thoughts as well.
Full marks for efficiency. And what are we doing with all our new spare whitespace on the calendar? The economist Tyler Cowen reports that he is spending his “liberated time and energy” staying “busy reading some Trollope and also thinking about my next Free Press column.” Good for him! I am here to report that this is not how most people, especially young people, are using their time. Look around you on public transportation and see how many people are consuming and consumed by algorithmically driven, addictive, short-format video slop. Even more concerning, it seems that undergraduates at elite colleges can no longer read at all, in the traditional grown-up sense of the term. Academic rigor at our most elite schools is, quite literally, a joke. Indeed, consider the entrance exam for a far-less-competitive Harvard University from the middle of the 19th century, and ask yourself how today’s Harvard graduates would do on it.
One cannot pin the whole of the decline of our educational system, including elite higher education, on the digital revolution, but it would take some nerve to deny that it is playing a significant role. The new norm is for everyone, all but literally, to cheat on their assignments. I am friends with several college teachers who are no longer assigning papers at all. The LLMs are good enough that professors can’t detect the fraud anymore, so what would be the point?
The single most concerning aspect of all of this is the apparent demise of student writing—even worse than the decline of reading, itself a dire phenomenon. If you let students have a machine do their writing for them, they won’t really remember anything that they have “written” about by proxy. (There’s a recent study that shows just this, if you are the kind of person who needs quantitative research to demonstrate the blitheringly obvious to you.) But that’s not the real problem—by not actually writing, students are not exercising the intellectual muscles that writing depends upon. If you can’t write something in plain and clear language, you have not thought it, not really. The unforgiving accountability of the sentence on the page, which either makes sense or doesn’t, requires one to think in order to compose it. In other words, writing is effectively thinking. That’s why we have had students do so much of it—not because we think that their work contributes to the store of human knowledge, but for their own sake. Having a machine “write” for a student is like sending a robot to the gym to lift an athlete’s weights for him—and the results will be comparable.
The nature of a society composed of and led by people so “educated” is difficult to picture, though we are careening toward it. This society may very well be wealthy and powerful, but—absent the urgent imposition of disruptive countermeasures—it is hard to see how it remains free. Political freedom presupposes free men, and a free man is someone educated for liberty: capable of thinking independently, assessing received wisdom with due skepticism, interpreting complexity, and arguing cogently. Again, one can’t pin the entire distance between this ideal and our visible reality on the digital revolution—but it sure ain’t helping.
Are these developments “bad”? Well, given the possibility that the outsourcing of our intellectual labors to machines could change our fundamental nature over time, it’s hard to identify an objective measure by which we could offer such an indictment. What’s good for the ape is not what’s good for the Roman, or whatever follows him. Who is to say what will be good for the techno-fused transhuman future things we might become? Maybe they will think slavery beats whatever came before! But it is hard to escape the nagging thought that we are passing the moment in the story Flowers for Algernon where Charlie is at peak intelligence, tipping over onto a great downward slope—but as a species.
How could a free politics survive in a world where fewer and fewer people deserve or even desire their freedom? What is to be done?
A world in which LLMs are increasingly relied on to interpret the world and to formulate our own thoughts in return is a world in which the LLMs and those who manage them are going to be enormously powerful. AI may never become a true superbeing, but we may choose to make it our master anyway. Thus, for all the misgivings detailed here, the Trump administration’s agenda for AI dominance is not misguided. The only thing worse than increasing de facto subservience to various LLMs would be subservience to such models controlled by China—TikTok, but doing what’s left of your thinking for you. And this is not to speak of the various other applications of AI in military or research domains, which seem far less problematic than the potential impacts of LLMs on their human users. Such tools—the technology that controls a drone swarm, for example—are neither good nor bad in themselves; they simply must be mastered by Americans.
But is that it? Must we be rushed into what could be a fundamental change in the way we are human without more reserve, more caution, more consideration? There is one more needed measure, the most needful of all: There should be spaces, especially for the young, for the cultivation of free minds. The effort to ban phones in schools signals the coming backlash against the digital revolution broadly, as does the apparent surge in sales for the company that produces those old exam blue books you may remember.
The small liberal arts college, as bleak as things have seemed for such institutions recently, could be in the early moments of a renaissance. It is the kind of place where high walls against the digital revolution could be built: a space free (at least for classroom purposes) of LLM-dependence, where in-class writing and oral examinations and bans on screens, among other common-sense measures, prevail at the institutional level and attract the kind of student who is excited by the romance of thinking for himself. Perhaps the proliferation of such places will be enough to overcome the problems sketched above, and to provide a fundamentally healthy reaction to the revolutionary trend (as opposed to less healthy reactions, like radical asceticism and Unabomber-style terrorism, the markets for which seem quite undervalued at the moment!). And if not … at least the students in such rare and happy places will give evidence that they are prepared for the only job that remains: master.