The AI Revolution Will Democratize the Means of Economic Production

“The production of too many useful things results in too many useless people.” That was Karl Marx, but it could as easily have been Dario Amodei, CEO of Anthropic, one of the world’s leading AI companies. Amodei predicts that AI will displace half of all entry-level white-collar jobs, and he is not alone in thinking so. Geoffrey Hinton, the “godfather” of AI, warns of massive unemployment. Bill Gates believes that AI will make most jobs obsolete. Even Elon Musk thinks that work will eventually become optional.

Technological revolutions are always disruptive. But the AI revolution could herald unprecedented disruption. Unlike the personal computer, which enhanced human work, AI will almost certainly be able to replicate much of it. That could eliminate entire job categories—and not just at the lower end.

Ever since the Luddites took to smashing wool looms in Wellington’s day, every major technological revolution has been attended by dire warnings of mass unemployment. In the long run, the benefits of technology have always outweighed the losses—even for workers. Technology increases labor productivity, which increases investment and wages, which increases demand, which increases employment. It is a paradox as old as Homo habilis: Tools are the greatest job creators.

To be sure, advanced societies are marked by both leisure and idleness. Leisure, a luxury of the wealthy, is perhaps an inevitable result of technology. But idleness—especially the kind that so often afflicts those whom progressive policy has deprived of the need to work—is not. It is, rather, the inevitable result of socialism.

If history is any guide, the real danger of the AI revolution is that progressives will seize the surplus it creates to further entrench their program of government. Since the end of the Gilded Age, progressives have sought to replace gainful work with aimless dependency on their redistributed largesse. And sure enough, Amodei proposes to redistribute 3 percent of all AI income through a tax on AI. Nor would the tax be the end of it: Amodei thinks that a universal basic income will “only be a small part of a solution” to the disruptions of AI, implying that the government should do far more.

The Civil War’s cannons had barely gone silent when a massive expansion in America’s railways began. From 1870 to 1900, railway miles almost quadrupled, from 53,000 to more than 190,000. Out-of-work teamsters blamed the “monster machines.” But it was soon clear that railroads were creating a lot of jobs. Laying track and building locomotives, job categories that barely existed a few years earlier, saw stupendous growth.

Niall Ferguson has compared the data-center boom to the railway boom. Worried that AI companies, like many Gilded Age rail lines, lack a clear path to profitability, he sees a speculative bubble that could soon burst, as happened during the Panic of 1893.

But the AI revolution is much bigger than the data-center boom, just as the so-called Second Industrial Revolution was much bigger than the railway boom. Take steel production as the metric, and the Panic of 1893 was barely a hiccup. Carnegie Steel was formed in 1892. Less than a decade later, in 1901, Andrew Carnegie sold it to create the world’s first billion-dollar company: U.S. Steel. During those years, American steel production doubled to 12 million tons—surpassing that of Germany and Britain combined—and just kept rising, reaching 120 million tons in the 1970s.

In fact, the Gilded Age saw two major depressions, in 1873 and 1893. But between 1870 and 1900, labor productivity and wages soared, even as America’s labor force doubled in size, swelled by the massive immigration wave of those decades.

The scale of the transformation becomes clearer when you widen the lens to include electricity and telephones. Electrification during those years quickly led to automation on the factory floor and to the widespread adoption of new inventions such as the light bulb and the telephone. Often using the same wires as the railroads’ indispensable telegraphs, a vast array of telephone networks sprang up to permit instantaneous transmission of voice communications, further revolutionizing both commerce and daily life.

Indeed, railway workers soon discovered that they had paved the way for an even more transformative innovation then taking root among the carriage factories and shipyards of Detroit, one that would leave many of them out of work. In 1900, American cities were still filled with horses pulling carts and carriages. “Livery stables,” where you could park your horse for the day, were everywhere, along with farriers and feed suppliers. A decade or two later, they were all gone. By 1940, Chicago alone had more than 600,000 automobiles.

As agriculture mechanized, farm employment plummeted. Driven by automobiles and the new factories’ surging demand for labor, America became a society of bustling urban centers. The transformation from a rural society to a largely urban one happened in just a few decades. It was another painful transition, with communities uprooted and unemployment soaring to 25 percent during the Great Depression. Albert Einstein reportedly blamed “man-made machines.” John Steinbeck’s The Grapes of Wrath, perhaps the most consequential American novel of all time, seemed to inscribe Franklin D. Roosevelt’s enmity toward “damaging competition” into America’s DNA.

It was an enduring sentiment. At the dawn of the internet, many predicted that clerical and retail jobs would vanish. Of course, as readers of a certain age will readily recall, the 1990s delivered one of the strongest labor markets in U.S. history, with unemployment dropping to almost 4 percent—driven once again by the labor productivity gains made possible by innovation.

The Age of AI is poised to deliver the greatest democratization of the means of economic production in human history. It will put the resources of large corporations in everyone’s hands. If the PC made desktop publishing possible, the AI revolution will make desktop filmmaking possible—along with scientific and medical discoveries, app development, and supercharged research into a dizzying number of fields, all powered by an endless supply of virtual research assistants. “Even if an AI never exceeds the reasoning capability of a smart human graduate student,” writes AI futurist David Shapiro, “the ability to spin up 100 trillion instances of that student and run them at 100 X real time is a force multiplier that human intuition struggles to comprehend.”

Just as the railroad was prologue to the age of automobiles, the data-center boom is almost certainly prologue to the age of robots and the infinite possibilities of AI on “edge devices.” It’s already possible to run compact open-source models on your PC at home without even having to connect to the internet. Such models, on which China has taken an ominous lead, normally have fewer than 10 billion parameters—a tiny fraction of the trillion parameters in Google’s Gemini 3. They are free to download, use, and modify, promising to put powerful AI tools in the hands of every man, woman, and child with a device of even modest capabilities.
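For readers inclined to test the claim, a minimal sketch of offline local inference follows, assuming the Hugging Face transformers library is installed and a small open-weight checkpoint has already been downloaded; the particular model named below is just one example of a sub-10-billion-parameter open model.

```python
# A minimal sketch of running a compact open-weight model locally.
# Assumes the checkpoint is already cached on disk; with it cached,
# setting the environment variable HF_HUB_OFFLINE=1 keeps the whole
# exchange off the network.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # example: a ~1.5B-parameter open model
    device_map="auto",                   # use a GPU if present, else fall back to CPU
)

prompt = "Explain in one paragraph why railroads lowered the cost of shipping goods."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

Nothing here leaves the machine: the model weights, the prompt, and the generated answer all live on the local device, which is precisely what makes such models so hard to gatekeep.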

Similarly condensed AI models already provide the “end-to-end neural networks” that Tesla’s Full Self-Driving vehicles and Optimus robots use to respond to visual and other sensory data with appropriate and nearly instantaneous motor commands. Such vision-language-action (VLA) models are similar to large language models (LLMs) in that they are all based on Google’s Transformer architecture. First introduced by Google’s 2017 paper “Attention Is All You Need,” this world-transforming innovation replaced the sequential processing of previous neural networks with “self-attention,” unlocking the potential of massive parallel computation to train general-purpose models on vast datasets—whether text, images, or robotic movements. That in turn drove the explosive demand for the parallel-processing units used in AI data centers, making Nvidia, a pioneer of those units, the world’s most valuable company.
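To make “self-attention” concrete, here is a toy Python sketch of the scaled dot-product attention step described in that 2017 paper. The dimensions and random weights are purely illustrative; real models stack many such layers with learned parameters.

```python
# Toy illustration of self-attention: every token's representation is
# updated as a weighted mix of ALL tokens at once, computed in parallel,
# rather than one step at a time as in earlier sequential networks.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X has shape (sequence_length, model_dim); Wq/Wk/Wv are learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # each output mixes all inputs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                           # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # -> (5, 8)
```

Because every row of the score matrix can be computed independently, the whole operation maps naturally onto thousands of GPU cores at once, which is why the architecture and Nvidia’s fortunes rose together.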

Led again by Google, researchers are now actively working on systems that combine the reasoning capabilities of LLMs with the sensory-and-decision processing of VLAs to create fully functional robots that can talk and make smart decisions about what to do next. These compact LLM-VLA combinations will be able to run all sorts of agentic appliances: smart devices capable of reasoning, planning, and acting autonomously to carry out their owners’ commands. That will lead to a Cambrian explosion of new AI uses and tools. Business models and products that we can’t even imagine today will become globally dominant, just as the smartphone did two decades ago. The potential gains in labor productivity are astronomical.

Dario Amodei sees a big difference between AI’s potential to enhance human work and its potential to replicate it. But a machine is just a power tool, and a machine that can be trusted to make decisions is just a very useful power tool.

“Capitalist production,” writes Marx in Capital, “develops technology . . . only by sapping the original sources of all wealth—the soil and the laborer.” The Austrian economist Joseph Schumpeter had a much better explanation: The allocative efficiency of free markets is a powerful force of “creative destruction” that can produce endless gains in social wealth. Still, Schumpeter thought that socialism was inevitable—not because capitalism would fail, as Marx argued, but because it would succeed too well. In his seminal Capitalism, Socialism and Democracy, he predicted that the capitalist entrepreneur would give way to the bureaucratic monopolist, who would then give way to the leisurely intellectual insistent on advancing government control of the economy for redistributionist ends.

Amodei could perform all the parts in Schumpeter’s play, a one-man show for the AI revolution. In his essay “Machines of Loving Grace,” Amodei (who did his Ph.D. in biophysics) waxes futuristic on the potential of AI “to perform, direct, and improve upon nearly everything biologists do.” But his company, Anthropic, has quickly established a reputation for supporting virtually any mid-20th-century progressive policy that results in a government-created cartel for its benefit, such as restrictions on the ability of its main GPU suppliers to sell to anyone else. In his few idle moments, Amodei ponders the potentially devastating impact of AI on the labor force: In an interview earlier this year, he said that AI could push unemployment to 20 percent in the next five years.

Something is happening. Drawing on data from the payroll-processing firm ADP, one often-cited Stanford study showed that employment for workers in their early 20s in high-AI-exposure occupations (such as software development or customer service) declined by roughly 13 percent after the release of ChatGPT, while employment among older workers remained stable. That suggests that AI is already displacing the tasks normally performed by junior employees in white-collar sectors, thereby threatening traditional career progression even for the highly skilled. However, a more comprehensive study by the economists Andrew C. Johnston and Christos Makridis found that high-AI-exposed occupations experienced significant gains in both wages and overall employment, even among college graduates—suggesting that AI has, so far at least, tended to complement rather than substitute for human labor.

Brent Orrell of the American Enterprise Institute sees a wave of “de-skilling” where the automation of mid-level tasks like coding and data-processing creates an “up or out” dynamic, forcing workers to either manage AI systems or face downward mobility. But as technical and routine cognitive skills become commoditized by AI, says Orrell, “people skills,” emotional intelligence, and critical thinking are becoming the new premium, with high-value creative tasks taking precedence over routine ones that can be performed by AI. That suggests AI is also likely to increase the quality of employment, making it possible for more hours to be spent on creative and managerial work.

One study of a Swedish program that provides grants to firms for AI adoption found that those firms increased their job postings, particularly for white-collar jobs, without any net decrease in employment over five years. Another study of 200,000 conversations with Microsoft’s Copilot found that users almost invariably use AI to assist with complex workflows, with few instances of AI substituting for human labor.

That is no surprise to Joel Mokyr, the winner of this year’s Nobel Prize in Economics, who has spent decades studying why the Industrial Revolution of two centuries ago has resulted in self-sustaining growth to this day. The Northwestern University professor’s explanation centers on what he calls “useful knowledge,” which comprises both the understanding of processes in the physical universe and practical descriptions of what is necessary for a tool to work. This makes sense even in evolutionary terms: It took Homo habilis to create Homo sapiens.

The most useful knowledge in economic history has been knowledge that transforms some potential energy source into kinetic energy. It was by discovering how to translate the nearly infinite energy potential of coal into useful work that the scientific revolution led directly to the Industrial Revolution. The combination of useful knowledge and reliable energy has made the industrial era one of continuous transformation. Combining advanced knowledge with staggering amounts of electricity, the AI revolution is a supercharged version of those earlier industrial revolutions.

Consider for a moment that the operating systems and computing devices central to most of our lives, from smartphones to laptops, are all physical versions of the universal Turing machine, which was theorized almost 90 years ago and has now been reduced to billions of yes/no logic gates formed by impossibly small transistors on a microchip. Now ponder that the substrate for our entire digital ecosystem is no longer simple yes/no logic gates but rather self-training computer models that can interpret and execute virtually any command, and you get a sense of the great leap forward that is happening at this very moment.
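The point about yes/no gates is easy to demonstrate. In the Python sketch below, a single primitive gate (NAND) is composed into the other basic logic operations, the same universality that billions of transistors exploit on a microchip; the function names are illustrative.

```python
# Toy demonstration that all boolean logic reduces to one yes/no gate.
# NAND is "functionally complete": NOT, AND, and OR are built from it alone.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Truth table for OR, computed entirely from NAND gates:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} OR {b} = {or_(a, b)}")
```

Stack enough of those primitives and you get a universal computer; replace the substrate with trained models that interpret open-ended commands, and the leap described above comes into focus.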

It is true that industrial progress has relieved workers of the need to toil long hours just to survive. But the persistent drop in labor force participation over the past several decades—with entire social classes pushed out of the labor force—is only indirectly a result of industrial progress. One canary in the coal mine is the labor force participation rate among the least-skilled 20 percent of men, which peaked in the 1950s and has steadily declined since, driven down by the Great Society’s enormous expansion of welfare.

Drive through northeast Washington, D.C., or the South Side of Chicago and ponder: What force can cause such an incredible abundance of human potential to be left in such a state of idleness? The answer is not technology but welfare programs and minimum wage laws that deprive those who most need social mobility of the right to work and learn on the job—replacing opportunity with welfare dependency. Progressives drive through those neighborhoods and do not see the problem that they have created, because for them, dependency is not a problem.

A wondrous new age is dawning, straight out of science fiction, an age of talking automobiles and thoughtful robots. Just as mastery of stone tools transformed Homo habilis (“handy man”) into Homo erectus (“upright man”) roughly 2 million years ago, so mastery of robots could similarly transform our own species: Homo roboticus, perhaps?

But darker questions loom. As virtual assistants become indispensable helpers and confidants, we face a future in which AI displaces not only jobs but also human relationships. Adolescents already spend more time on their devices than engaging in face-to-face social interactions, with some of the most widely used apps, like TikTok, specifically designed to target young people with unlimited access to compulsive, instant gratification.

“The ability to access vast amounts of data and information should not be confused with the ability to derive meaning and value from it,” said Pope Leo XIV recently. “The latter requires a willingness to confront the mystery and core questions of our existence, even when these realities are often marginalized or ridiculed by the prevailing cultural and economic models.”

Just so, “work,” properly understood as intentional creation, is not the same as a “task” and cannot be automated. There will be painful disruptions along the way, but in the long run AI will make human work more valuable than ever. What Henry Ford wrote in the New York Times in 1939 is still true today: “A generation such as the present one which is technologically alert will always be employed.”
