
A debate about the role that artificial intelligence should and will play in society, and how it will affect humanity both for good and for ill, is currently underway. At the same time, a larger, potentially more consequential debate looms: whether humanity should seek to prevent the ever-advancing capabilities of AI from evolving into artificial general intelligence (AGI), and eventually into some form of superintelligence. Some experts believe this step is impossible; others think it imminent.
In reality, no one knows how to define AGI, whether it is possible, or, if it comes to exist, whether humans can control it or survive its advent. Yet this uncertainty does not absolve policymakers of the responsibility to consider the risks AGI might pose and to take precautions against worst-case outcomes before it is too late.
Debates over AGI quickly veer into the philosophical, but practical decisions about how to approach the pursuit of the technology are essential if humanity is to prepare for its possible development. Artificial general intelligence typically refers to an AI system capable of performing a wide range of cognitive tasks at or above human expert levels. Such systems could in theory modify their own algorithms and architectures, potentially accelerating their capabilities through what is known as recursive self-improvement. This, in turn, could lead to a rapid transition, possibly before human beings even recognize it is happening, into a “superintelligence” more capable than any human at, well, everything. Such a system could grant its designer enormous power, but it could also escape human control entirely.
The concern that this technology might be possible, and that whoever achieves it first could gain a strategic and permanent advantage, has sparked a new AGI race, prompting calls for everything from an AGI “Manhattan Project” to a nuclear deterrent-like policy of military preemption to ensure no one can build such a capability. This split is often cast as a debate between optimists and doomers. But that simplistic framing sidesteps the more thoughtful and nuanced discussion needed to weigh the risks and benefits of AGI, and in turn to consider what prudent steps we could take now to guard against negative outcomes while preserving the promise of technical advancement. The starting position should be one of humility, because no one knows whether AGI is possible or what its impact on humanity will ultimately be.
If one comparison dominates the current period of AI technical advancement, it is the dawn of the nuclear age. It’s easy to see why: The temptation to be first and fear of being last create strong incentives for countries to outpace their competitors in the sprint to develop AGI. Given these parallels, it’s prudent to consider the warnings of early nuclear pioneers, including J. Robert Oppenheimer, who feared that his invention could undermine global security yet nevertheless understood the allure of being the first to develop the atomic bomb. In a November 1945 speech before the Association of Los Alamos Scientists, Oppenheimer recalled the motives driving major players in the nuclear race:
[I]t may be helpful to think a little of what people said and felt of their motives in coming into this job. … There was in the first place the great concern that our enemy might develop these weapons before we did, and the feeling—at least, in the early days, the very strong feeling—that without atomic weapons it might be very difficult, it might be an impossible, it might be an incredibly long thing to win the war. … Some people, I think, were motivated by curiosity, and rightly so; and some by a sense of adventure, and rightly so. … And there was finally, and I think rightly, the feeling that there was probably no place in the world where the development of atomic weapons would have a better chance of leading to a reasonable solution, and a smaller chance of leading to disaster, than within the United States. I believe all these things that people said are true, and I think I said them all myself at one time or another.
We can hear the echoes of his words in the current discourse surrounding artificial intelligence. Of course, analogies are imperfect: there are always countless variables, and no two situations are truly identical. But there are lessons from the early nuclear age that can be applied when assessing how to manage the development of AGI.
Unlike early debates over nuclear weapons, the fight over AGI is playing out publicly, and the technology is being developed largely by private entities rather than the government. As a result, it is harder to know the creators’ end goals, and the means to manage or control the technology domestically, let alone internationally, are less certain.
The debate over whether and how fast the U.S. and humanity collectively should pursue artificial general intelligence is vital, not because AGI will be our doom but because AGI might be our doom. No one—not the early pioneers of AI, the heads of the world’s frontier labs, academics, policy experts, or even philosophers—can predict when AGI might be created or whether it will lead to human salvation or the end of our existence.
Any technology smarter than humans may be capable of breaking free of the control systems humans have designed, allowing it to pursue actions that are not fully aligned with the best interests of humans individually or humanity collectively. Eliezer Yudkowsky and Nate Soares lay this point out clearly in their new book, If Anyone Builds It, Everyone Dies: To assume we can know how AGI will act is to ignore the lessons of our own history and evolution. Humans have repeatedly overestimated their ability to create control systems and to predict how new entities will behave in new environments. Having gone to school in Atlanta, I often cite kudzu as an example of this phenomenon: The plant, introduced into a new environment to prevent soil erosion, swiftly outmaneuvered all attempts to control it and is now widely referred to as “the plant that ate the South.”
Some AI developers and experts remain very concerned about how AI will evolve and whether the end result will be catastrophic harm. This paradox, in which the creators of a technology worry about its risks but pursue it anyway, is not new. Oppenheimer is just one of many examples. When I worked in the White House, the inventors of the technology used in AI-powered autonomous weapons appealed to President Barack Obama to ban these systems lest they become battlefield realities. A decade later, their concerns have come to pass. Likewise, just a few years ago, leading minds in the development of AI sought a full moratorium on AGI. But it is clear now, as it was then, that such calls are neither politically nor practically viable.
The potential benefits—and yes, profits—from artificial intelligence are simply too great to resist. The promise of what AI can provide in medicine and health care, education, agriculture, and science is nothing short of astounding. That promise has drawn hundreds of billions of dollars in investment in AI, sparking concerns about a bubble but also creating a new generation of multimillionaires. Meanwhile, the tech industry continues to fight those who seek to regulate this fast-emerging technology.
Yet America does not allow biologists to develop new products unless they are contained in high-security labs. Auto companies cannot sell cars unless they are tested for safety. Airplanes do not fly unless the Federal Aviation Administration has determined they are safe. So why should a private industry—even one as promising and wealthy as AI—be empowered to develop a far more consequential technology without adopting reasonable precautions under government guidance?
There is no silver bullet to ensure that if AGI comes into existence, or evolves into a superintelligence, it will be safe for humanity. But right now, even as some pursue AGI, there is no clear or comprehensive process of regulation, testing, and evaluation, nor even a set of normative controls, to ensure that frontier systems are contained and developed only in ways that prevent uncontrolled self-improvement. At a minimum, it would seem reasonable for governments (federal and state) to ensure that companies with certain high-end capabilities build constraints into the models they are developing, as well as physical and operational controls to short-circuit any AI evolution that does not conform to safe behavior. Both of these so-called intrinsic and external constraints will be needed to safeguard against a frontier model breaking the bounds of human control.
Of course, it is reasonable to worry that should the United States adopt controls or slow its AGI work, it could fall behind its adversaries in the artificial intelligence race. Ideally, such controls would be adopted globally, and especially on a bilateral basis with China. Yet this discussion has largely centered not on adopting mutual safeguards but on competition. Even here, there seems to be no detailed cost-benefit analysis of the potential risks of humanity creating, and then losing control of, AGI. Is that risk more or less dire than the risk of letting another country win the race? Should the U.S. government spend at least as much time working to engage China on the risks of AGI as it does cheering on the companies that seek to achieve it?
No one knows whether efforts to build a global consensus on AGI controls would succeed. But if they do not, U.S. leaders will face a choice: Is America going to pursue AGI out of both fear and a sense that it is better for the technology to be developed by the U.S., even if doing so leads to global and irreversible consequences? If that is the choice, leaders should at the very least ensure that attempts to get off the race track and seek de-escalation with their competitors have been tried and have failed.
There is some evidence that Chinese officials and scientists are aware of the risks AGI poses. They may, in fact, fear us and fear falling behind as much as we fear them. This is one of the lessons of the nuclear Cold War with the Soviet Union: Washington was convinced its adversary sought nuclear superiority, only to discover after the Soviet Union collapsed that Moscow had been equally convinced the U.S. was seeking the same. In the end, neither side achieved superiority, but both took on massive security risks that, in hindsight, brought us closer to catastrophe than even our own leaders realized.
America may eventually create AGI and even superintelligence. The temptation to be first, and the economic incentives to race ahead, may in the end be too great to resist. But doing so without examining the risks and taking every prudent safeguard ignores the lessons of history and puts all of humanity at risk.