Why would we default to the negative notion that the latest step forward in the digital revolution is surely a terrible misstep? To quote Thomas Babington Macaulay, the 19th-century English historian and Whig politician, “On what principle is it that with nothing but improvement behind us, we are to expect nothing but deterioration before us?” While past performance may not guarantee future success, it’s not a bad starting point for analysis.
Yes, some argue that society is worse off thanks to computers and the internet—always “on call” for work, privacy eroded by algorithmic tracking, and children glued to screens. These may be real downsides, but they are hardly proof that cyberprogress diminishes human experience. The massive upsides deserve more weight: Digital technologies have powered tremendous growth. America’s “digital economy” added $2.6 trillion to U.S. GDP in 2022, with e-commerce sales alone expected to hit nearly $1.5 trillion in the U.S. and $6.4 trillion globally in 2025, supporting millions of jobs. App developers, data scientists, and cloud engineers are just a few of the occupations that did not exist a generation ago, while real wages for typical U.S. workers are 40 percent higher than they were before the internet. We’re not living in a dystopia for workers.
The scientific impact is equally striking. For example, sequencing the human genome became feasible only with advanced computing. When COVID-19 emerged, researchers posted the virus’ genetic sequence online within days. Drawing on that information, Moderna used its mRNA platform to design a vaccine candidate in just 48 hours, and within a year shots were in arms, saving an estimated 14 million to 20 million lives in 2021 alone.
Digital tools also enabled globalization: Email, spreadsheets, and online payments made complex supply chains possible, helping lift more than a billion people out of extreme poverty since 1990. Even much-maligned social media reveals its value with a bit of digging. One study finds Americans would demand $40 to $50 a month to give up Facebook and $17,000 a year to abandon search engines—evidence that people value these services enormously despite their flaws.
AI now enters as the next digital general-purpose technology—a category of inventions that includes electricity, the internal combustion engine, and the internet itself. Such GPTs (not to be confused with the “GPT” in ChatGPT, which stands for generative pre-trained transformer) boost efficiency across many sectors and act as platforms for further innovation. Indeed, one of the chief legacies of the PC-internet stage of the digital revolution has been to lay the foundations for generative AI—systems that learn from data to create new content (text, images, code, or music) resembling human output. Ever-faster processors, abundant cloud computing power, and the vast troves of data generated by online activity have made today’s breakthroughs possible, as seen in ChatGPT and other large language models.
Companies are plowing hundreds of billions into AI chips, cloud infrastructure, and model training. The business bet is simple: AI will prove at least as significant as the PC-internet revolution—and perhaps more so if the technology equals a “country of geniuses in a datacenter,” in the words of Dario Amodei, CEO of Anthropic, creator of the Claude chatbot.
Forecasters from Wall Street to Silicon Valley suggest AI could revive America’s sluggish productivity growth—the output workers generate per hour, and the ultimate engine of higher living standards. Long-run U.S. growth forecasts hover near an anemic 1.8 percent a year, weighed down by slow productivity and workforce growth. If AI adoption by business is broad, and if firms invest in complementary skills (such as data-literate managers and workers trained to analyze AI output) and new business models (like subscription services built on AI personalization or AI-driven product design), growth could plausibly accelerate by 0.3 to 0.5 percentage points a year. That may sound modest, but over decades the compounding effects are enormous. A 2025 analysis of Congressional Budget Office projections finds that a 0.5-point productivity lift would shrink America’s projected debt-to-GDP ratio by about 40 percent by midcentury.
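To see the compounding at work, consider an illustrative back-of-the-envelope calculation (the 30-year horizon and the roughly $28 trillion starting GDP are assumptions chosen for the arithmetic, not figures from the studies cited above). Lifting annual growth from 1.8 percent to 2.3 percent leaves the economy about 16 percent larger after three decades:

$$\frac{(1.023)^{30}}{(1.018)^{30}} \approx \frac{1.98}{1.71} \approx 1.16.$$

On a $28 trillion base, that gap works out to more than $7 trillion in additional output in the final year alone.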
A high-adoption scenario could raise trend growth toward 3 percent, roughly the average rate of U.S. economic growth since World War II. Even more tantalizing is the prospect that AI might not just increase worker efficiency but also accelerate innovation itself: new drugs, new materials, new energy sources. If AI helps crack nuclear fusion, extend healthy lifespans, or discover new supermaterials, its contribution will dwarf even the most optimistic macroeconomic projections.
Economic growth is too easily caricatured as mere consumerism. Yet growth for growth’s sake is hardly a bad strategy for increasing human flourishing. Economists Charles Jones and Peter Klenow, in their paper “Beyond GDP? Welfare across Countries and Time,” construct a broader welfare index that incorporates not just consumption, but also life expectancy, leisure, and inequality. Across 134 countries, their welfare measure and GDP per person move in near-lockstep, with a correlation of 0.95. While GDP growth is not a perfect measure, it is closely tied to broader human flourishing.
Growth also has a vastly unappreciated moral dimension. In his book The Moral Consequences of Economic Growth, Harvard University’s Benjamin Friedman argued that prosperity fosters tolerance, openness, and democracy, while stagnation breeds intolerance and retrenchment. American booms in the late 19th and mid-20th centuries coincided with expansions of civil rights, immigration, and social programs. Downturns in the 1890s and 1970s were marked by backlashes against minorities and welfare.
More recently, the sluggish recovery from the global financial crisis provided fertile ground for populist nationalism. A 2022 study led by MIT’s Daron Acemoglu reinforced the point: Support for democracy is strongest where it delivers growth and stability. If AI succeeds in reigniting dynamism, the benefits will be not only material but civic.
Global poverty statistics tell an even more persuasive story. Economist Max Roser of Our World in Data calculates that 85 percent of humanity lives on less than $30 per day in purchasing-power terms. In Denmark, the figure is 14 percent. Bridging that gulf requires massive growth. Roser estimates that to reduce the global poverty rate to Denmark’s level, the world economy would need to expand fivefold. That demands decades of broad-based growth of the kind that digital technologies have repeatedly enabled. AI, if it proves even a modest productivity booster in advanced economies, will be indispensable as a poverty fighter. AI worriers in the rich world seem to miss that.
Perhaps the comparison of AI to the previous stages of the digital revolution falters once the conversation shifts to artificial general intelligence, machines capable of matching or surpassing human cognition. Unlike PCs or the internet, such superintelligent systems would not just augment human effort but could supplant it, potentially across the entire economy. This is the logic behind visions of a “technological singularity.” In a 2021 Open Philanthropy analysis, Tom Davidson argued that if supersmart machines can fully stand in for scientists, a virtuous cycle of warp-speed growth could begin, as AGI-generated ideas beget more growth and even smarter machines. Rinse and repeat. The result, Davidson argues, might be not incremental gains but explosive ones, lifting economic growth from today’s 2 to 3 percent to double digits.
In this scenario, rapid disruption might outpace society’s ability to adapt. But achieving GDP growth of, say, 30 percent annually isn’t just a matter of plugging some numbers into a theoretical economic model. The constraints of the real world have a vote. Benjamin Jones of Northwestern University injects some sobriety here: To sustain 30 percent growth year after year for a quarter-century, as Davidson suggests is possible, would mean AI advances that improve human welfare a thousand-fold—more than all of history’s combined inventions. The churn of creative destruction would be overwhelming, and political backlash all but inevitable.
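The thousand-fold figure is straightforward compounding (an illustrative calculation, not one drawn from Jones’s analysis): growth of 30 percent a year sustained for 25 years multiplies output by

$$(1.30)^{25} \approx 700,$$

an expansion on the order of a thousand-fold over a single quarter-century.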
Economist Dietrich Vollrath adds that if incomes grew so quickly, many people might simply work less, reducing measured output growth. Even the singularity, in other words, would not be simple to forecast, because it would interact with the needs, desires, and fears of its human creators.
Then again, civilization does not need warp-speed growth to achieve what might appear to us today like a sci-fi future. Even a sustained half-point boost to productivity growth would yield vast improvements in human welfare over time. Science-fiction writer Vernor Vinge, who coined the term “technological singularity,” also imagined “the long, good time”: an extended golden age powered by highly capable but not superintelligent machines. In this world, AI amplifies human effort, accelerates medicine and science, and extends lifespans—without rendering humanity obsolete. Population stabilizes, education flourishes, civic cooperation expands. Growth compounds steadily, delivering abundance without upheaval.
Concerns about the eventual arrival of superintelligence should not be dismissed. Such systems would represent a discontinuity unlike any seen before. Alignment with human needs and values, along with control and governance, would pose formidable challenges. But when? The current vibe seems to be later rather than sooner, but nobody really knows. To undermine AI progress today on the grounds that it might one day evolve into a harmful superintelligence is to confuse speculative risk with present opportunity. It would be akin to banning CRISPR gene-editing for comic-book fears of genetic superhumans, even as it promises near-term cures for cancer and rare diseases.
The likeliest path for the foreseeable future would make for a boring sci-fi film. AI will diffuse unevenly across the economy. Gains will come first in sectors rich in data and language: software, marketing, finance. Later, they will spread to manufacturing, logistics, health care, and education. It is a technological tale as old as time, or at least as old as the Industrial Revolution: Bottlenecks, rent-seeking, and regulation will slow adoption. Yet history suggests that as costs fall and capabilities improve, diffusion accelerates. Electrification took decades to transform factories; the internet needed 15 years to remake commerce. AI’s full impact, too, will take longer than enthusiasts expect.
The prudent bet, then, is not on apocalypse but on acceleration of the sort familiar to economists. AI is the latest chapter in a long story: of steam engines and railways, electricity and telephones, antibiotics and vaccines, computers and the internet. Each wave of technology has unsettled societies, produced moral panics, and generated problems both anticipated and unexpected. Each has also expanded human flourishing as measured by prosperity, longevity, and opportunity. What more can you ask of a tech tool? There is little reason to think AI will be different in kind anytime soon.
Progress never happens without a dollop of peril. But to refuse the risk, as happened with nuclear power, is to guarantee stagnation. Better, with eyes open, to seize the gains and manage the downsides as they emerge. Macaulay would surely approve.