In a wide-ranging interview on the future and global existential risks, billionaire technology investor Peter Thiel raised alarms not only about familiar threats like nuclear war, climate change, and artificial intelligence but also about what he sees as a more insidious danger: the rise of a one-world totalitarian state. Speaking to the New York Times’ Ross Douthat, Thiel argued that the default political response to global crises—centralized, supranational governance—could plunge humanity into authoritarianism.
Thiel, co-founder of PayPal and Palantir, framed his worries with references to dystopian science fiction. “There’s a risk of nuclear war, environmental disaster, bioweapons, and certain types of risks with AI,” Thiel explained to Douthat, suggesting that the push for global governance as a solution to these threats could culminate in a “bad singularity”: a one-world state that stifles freedom under the guise of safety.
Thiel critiqued what he described as a reflexive call for centralized control in times of peril.
“The default political solution people have for all these existential risks is one-world governance,” Thiel observed, pointing to proposals for a strengthened United Nations to control nuclear arsenals or global compute governance to regulate AI development, including measures to “log every single keystroke” to prevent dangerous programming. Such solutions, the investor warned, risk creating a surveillance state that sacrifices individual liberty for security.
Drawing on historical and philosophical analogies, Thiel referenced a 1940s Federation of American Scientists film, One World or None, which argued that only global governance could prevent nuclear annihilation. He juxtaposed this with a Christian theological framing: “Antichrist or Armageddon?” In both framings, the billionaire said, he sees a binary choice between centralized control and catastrophic collapse. Yet Thiel questioned the plausibility of a charismatic “Antichrist” figure seizing power through hypnotic rhetoric, as depicted in apocalyptic literature. Instead, he offered a modern twist: the path to global control lies in relentless fearmongering about existential risks.
“The way the Antichrist would take over the world is you talk about Armageddon nonstop,” Thiel explained. The billionaire contrasted this with earlier visions of scientific progress, like those of 17th- and 18th-century Baconian science, where the threat was an evil genius wielding technology. Today, Thiel argued, the greater political resonance lies in halting scientific advancement altogether. “In our world, it’s far more likely to be Greta Thunberg than Dr. Strangelove,” he quipped, invoking the radical Swedish climate activist as a symbol of anti-progress sentiment.
On AI specifically, Thiel struck a balanced tone, tempering both utopian and apocalyptic predictions.
“One question we can frame is: Just how big a thing do I think AI is?” he asked himself. “My stupid answer is: It’s more than a nothing burger, and it’s less than the total transformation of our society.”
Thiel compared AI’s potential impact to the internet in the late 1990s, suggesting it could create “some great companies” and add “a few percentage points” to GDP, perhaps boosting growth by 1% annually for a decade or more. However, the billionaire expressed skepticism that AI alone could end economic stagnation, viewing it as a significant but not revolutionary force.
While Thiel expressed nuanced views on artificial intelligence, his venture capital firm, Founders Fund, is aggressively backing the technology: it recently led a $600 million investment in Crusoe, a vertically integrated AI infrastructure provider.
“The biggest risk with AI is that we don’t go big enough. Crusoe is here to liberate us from the island of limited ambition,” Thiel said at the time.