
Editor’s Note: The United States and China are firmly locked in a cold war struggle whose outcome will be the most important determinant of peace, prosperity, and quality of life in international politics over the next decade. This article is the third in a series that will propose a strategy for the United States to conduct and win this conflict, based on lessons from the Cold War with the Soviet Union. Read the previous entries in the series here.
Vladimir Putin was ahead of most global leaders when, in September 2017, he declared his belief that “Artificial intelligence is the future … whoever becomes the leader in this sphere will become the ruler of the world.” Right or wrong, Xi Jinping (first) and Donald Trump (a bit later) now seem to have fully caught up to that general belief.
Asserting it is the easy part. To build a strategy to win a race you first have to identify the terms of competition: in Putin’s parlance, what it means to “lead” and what it means to be the “ruler.” Similar-sounding arguments were made about atomic weapons in 1945, but that race played out differently: Multiple countries built and deployed extensive arsenals, and doing so came with zero guarantee of ruling the world (as the USSR, ironically, discovered most poignantly).
So a reasonable starting position is to hold off on Putin’s view and question whether AI is likely to evolve into a more conventional geopolitical contest with concepts like great powers, spheres of influence, and flashpoints. If so, how many AI great powers can the world support? In the first decades of the 20th century, geopolitical observers studying the Industrial Revolution commonly concluded that the U.S. and Russia were nearly certain to become global great powers, and that either Germany or England had the potential to join that exclusive club—but not both, because their economies and external territorial assets bumped up against each other’s to create a zero-sum game between them. Determining which of the two candidates had a chance would take not one but two world wars; and in the end, neither actually made the cut.
Or perhaps AI will turn out to be more of a geoeconomics competition, where the prize is a degree of dominance over markets rather than governance of territory. If so, the terms of competition might sound more like an antitrust debate: How absolute is that dominance of market power likely to be, and can it be reversed by an upstart after market power is established? China certainly treated basic manufacturing as a geoeconomic game of this kind for decades and arguably “won” that competition—but not absolutely, not irreversibly it now seems, and not with the prize of ruling the world.
Just a little more than three years into the modern AI race, it’s too soon to be certain of how AI will reshape geopolitics. What we’ve witnessed over the last several years looks more like an amplified but still normal form of technological competition rooted in national objectives, with two steps forward and one step back, a short-term win for an American company or research lab followed by a short-term win for the Chinese, and back again.
More akin, in fact, to what we lived through with the World Wide Web. U.S. companies dominated web infrastructure and content early—and opened the door for the likes of Huawei, Alibaba, ByteDance, and others based in China to carve out competitive niches, in some cases extending to a massive scale with global reach. No one won the web decisively.
This “terms of competition” issue matters for national strategy because it determines what kinds of risks are worth taking, or are necessary to take. Some AI observers—particularly those who are partial to the “superintelligence” concept associated with artificial general intelligence (AGI)—believe in a radical first-to-the-post dynamic. That would mean we’re playing a zero-sum game in which falling behind even for a short time courts disaster. But if you bet on something more like normal dynamic competition with ebbs and flows, the risk calculation is less austere. That doesn’t mean that winning matters less—AI leadership may not be the only thing that matters in international competition, but it is very likely to be the single most important national power asset for decades or longer. Knowing that we do not yet know the answer to the terms-of-competition question with full certainty, we need to converge on some best-guess axioms that will guide strategy in a sensible way.
Axioms of competition.
Four baseline axioms should guide the debate around U.S. policy for AI.
The first is that China has no shortage of capital to finance its AI aspirations. And so working to restrict the flow of American capital into Chinese AI businesses, as the U.S. has been moving to do, is largely a feel-good measure. The Chinese economy does not depend on U.S. sources of capital to build these systems, and financing from abroad is not a rate-limiting step. There may be some limited impact from slowing down the transfer of intangible business knowledge and managerial experience that accompanies venture investments from the best American funds, but that is quite hard to assess and measure. Keep in mind that this is not the China of 1979 trying to learn manufacturing for a global market—it is the China of today with a domestic managerial class of its own that knows quite well how to run for-profit technology businesses.
The second axiom is that AI technology is inherently dual-use, with both commercial and military-security applications that are in practice impossible to segregate. The dual-use logic is indirect in that a significant productivity boost from a general-purpose technology that impacts sectors throughout the economy will create new wealth, some proportion of which can be siphoned into a military budget, and direct in the sense that AI technologies are immediate force multipliers for military power. It’s not only about drones, autonomous weapons, self-executing cyberattacks, and so on. It’s also about less flashy forms of analysis and decision support, as valuable to a general as to the CFO of a retail operation. Great power militaries will benefit as much or more from AI-enabled enterprise resource planning, business process automation, and all the other mundane but productivity-enhancing tools as banks, supermarkets, and factories. A 5 percent cost reduction for administering military procurement contracts is equivalent to a 5 percent budget increase, after all.
The third axiom is about energy to power data centers (really better described as AI factories). Americans often default to the idea that access to the highest capability compute—with the newest Nvidia GPUs as the icon—represents the winning advantage in AI factory expansion. In some technology trajectories that may turn out to be true, but not nearly in all. In fact, Chinese firms have shown it is possible to tie together a very large number of somewhat less capable processors to build factories that rival the performance of factories with a smaller number of the most advanced chips. The demands on energy are enormous in either case, and building electrical generation capacity to power AI factories may in practice be the most important ingredient. China has a distinct advantage in multiple dimensions here: its vast ongoing expansion of renewable energy, its aggressively modernizing grid, and its government’s ability to bend permitting and interconnection processes to a national strategic objective.
Fourth, this modern generation of AI is still in the very early days of what is likely to be a very long game. Here, good ideas are a rate-limiting factor. And if ideas live within AI scientists and engineers, China has the capacity to train and develop many more than does the U.S. For decades, American political economists and many technology scholars countered that stark numerical advantage with the belief that U.S.-based research and development would always be a step ahead of what was possible in China, because a relatively unfree society could not be truly competitive at the discovery horizon. Simple observation is proving that belief wrong. China is by any reasonable measure ahead of the U.S. in multiple areas of advanced technology deployment, starting with electric vehicles and batteries, quantum communications, and high-speed rail. Democracy is no guarantee of scientific and technological leadership.
What do these axioms add up to? A robust competition that’s likely to go on for quite a while. Not one Sputnik moment (or, in more contemporary terms from 2024, DeepSeek moment) but many. No predictable target date or market indicator where one side declares victory. No single global stack of technology and applications. And again, probably no threshold moment where the world one day wakes up and finds a single center of AI power that has consolidated everything around it. This is why sustained logical strategy is so important, and the focus on competitiveness is critical.
Terms of competition.
“Scale is all you need” is a great simplifying phrase for business, politics, and media, but it is not a national strategy. It may not even be scientifically valid: While the basic technology that underpins today’s most powerful language models has done remarkable things, there are increasingly prominent voices in the technical community who believe it is racing toward a plateau. There’s precedent for that in previous generations of AI technology, where the research world coalesced around a particular approach or architecture, then encountered rapidly diminishing marginal returns that led to more than one “AI winter.”
To bet that this time is different and that today’s architecture is the right and final answer is really an act of faith. Even within today’s dominant approach, variations (like finely tuned small language models bespoke to particular substantive domains) under some conditions outperform large scale foundation models. The truth is that we just don’t know how it will play out—and neither do the Chinese.
Recognizing that sets the stage for disruptive innovation of precisely the kind that I described in the previous article in this series. Disruptive innovators in their early days typically underperform on standard metrics. Their products seem more like toys than serious substitutes for the incumbent leaders’ offerings—cheaper but less capable and unable to do everything that the best products can. What makes them threatening is that the disruptive innovator has found a steeper trajectory of improvement. And so with time, what was earlier a toy becomes a serious competitor, gains customers and adherents, and becomes the new leader. The problem is that the old leader usually can’t see what’s coming, because the less capable competitor is so easy to dismiss.
This logic is visible in today’s AI debates. In the U.S., for the moment, we tend to ask, “Why would anyone want to use a second or third best model, when you can have access to the most sophisticated leading model?” But a bank, hospital, or paint factory doesn’t necessarily need or benefit from the latest foundation model release. What almost all users need (and will pay for) is an advanced model that is good enough, well understood and characterized, and ready for deployment in the enterprise. The AI needs to connect to all the other systems required to complete workflows, with predictable and understandable outputs, high accuracy, and good interoperability.
What is true for AI deployment in businesses and organizations is true for countries and most parts of the military as well. The (somewhat fictitious) leaderboard scores for the latest model release are a distraction from the impact of AI on national power and the Sino-American competition. That competition is shaping up to be consistent with the historical experience of how general-purpose technologies (like the steam engine and electricity) have in the past transformed economies and the distribution of geopolitical power.
Put aside Sputnik and DeepSeek moments—it all takes time. There will be an ongoing process of deployment and discovery efficiencies throughout the economy, using a variety of models and hardware-software-data stacks that become more practical and likely more interoperable over time. Today’s leader might not be tomorrow’s and there is no permanent moat. National strategy needs to be oriented toward winning that kind of race, with disruptive innovation “from the bottom” always present in the mix.
A winning strategy.
What does that portend in practice? Following the template that I laid out from Cold War statecraft of the 1980s, a winning American strategy looks like this.
First, the U.S. should, in a clear and persistent manner, adopt the ambition to win, not to “balance” or reach an equilibrium. The current administration has put forward rhetoric to that effect in a blustery way, but it would do better to define and express its concept of victory as an ongoing market-control objective. To win here is to control as many elements as possible in the set of technologies that make up the AI “stack,” in as many markets as possible around the world.
It seems that Beijing has grasped that insight more fully than has Washington right now. We need to reverse that quickly. U.S. policy should develop multiple levers to incentivize broader and faster deployment of AI technology inside American companies, starting with tax, regulatory, and reasonable liability-limitation incentives that accelerate experimentation. Xi’s AI Plus initiative calls for AI to be used in 70 percent of China’s economy by 2027, and 90 percent by 2030. The U.S. needs to take those numbers seriously and surpass them. At the same time, we should be assisting (and subsidizing where necessary) AI technology exports beyond U.S. borders—possibly through a joint initiative by the Commerce and State Departments. Consider, for example, a 21st-century equivalent of the Peace Corps, with forward-deployed engineers working in emerging economies.
The second and related principle is rolling back China, not containing it. That implies unwillingness to accept being second-place in markets where Chinese companies like Huawei currently lead. There’s abundant evidence that behind China’s AI successes lies a sustained nationwide investment plan for research and development alongside a governance model willing and able to take a decades-long view of general-purpose technologies. As evidenced by its Made in China 2025 initiative, Beijing acts more consistently than does the U.S. on the idea that interlocking industrial clusters of research, supply chain development, and manufacturing reinforce each other.
Not everything works, of course, and skeptics are right to point to the costs of overinvestment and what the Chinese call “involution” (a pathologically hypercompetitive struggle where too many firms pile into a market, compete destructively on price, and destroy margins). But the overall results speak for themselves—in sectors as diverse as photovoltaics, drones, e-bikes, high-speed rail, and now, most vividly, electric vehicles. Beijing is right to see these sectors as interconnected—developments in battery chemistry that power EVs will spread quickly and widely among energy storage, consumer devices, and ultimately to robotics, for example. Chinese companies have a head start and a first-mover advantage in much of the “electrification stack”—and that needs to be rolled back rather than accepted and contained. It would be common sense for the U.S. to adopt an “all of the above” energy policy that supports the expansion of wind, nuclear, solar, hydro, geothermal, and fossil fuels, alongside advanced initiatives in fusion and the like. The current administration, with its active hostility toward wind and solar in particular, is tying one arm behind its back and sacrificing the possibility of rolling back Chinese wins.
The third principle is to control downside risks associated with rollback, through concepts like escalation dominance (the ability to win at each level of competition) and offsets (using American strengths to counter Chinese weaknesses) that heighten the tensions and contradictions of the adversary’s trajectory. This is where the current debate over export controls for advanced GPUs in particular fits in. Trying to limit Chinese access to computing power and—upstream from that—advanced photolithography (the most critical component for state-of-the-art chip manufacturing) may have a short-term effect but does little or nothing in the medium term other than to enhance China’s incentives and determination to reduce dependence on the American ecosystem. On this point, Nvidia CEO Jensen Huang is absolutely right to say that getting a generation of Chinese AI developers “hooked” on the Nvidia hardware-software stack would strengthen the U.S. and weaken China. There’s no reason to sell the very top leading generation of chips to Chinese firms or anyone else outside the U.S. for that matter, but we absolutely should make chips one or two generations behind the leading edge too attractive to ignore.
The U.S. needs to double or triple down on cyberdefense as a critical component of escalation dominance. Chinese AI-driven attacks on U.S. digital and connected physical infrastructure are nearly certain to mount going forward. Cyberattack capabilities, from espionage to sabotage, weaken U.S. capacity across the landscape. There simply is no excuse for underinvestment in defense, which can be thought of as a force multiplier for offensive capabilities. Current Trump administration policies have not kept up, and that needs to change. At the same time, the U.S. should immediately drop H-1B visa restrictions and fees. The goal is simple: to make the United States the most attractive place by far for the world’s top AI talent. If some of that talent is Chinese, and some subset of that takes ideas back to China, the net benefit to U.S. dominance is still vastly positive.
Another need is to create and sustain an optimistic and globally oriented American leadership proposition that is based on ideas as much as coercion or market power. Put simply, AI soft power matters. It’s easy to gloss over just how much emotional tumult, uncertainty, and worry a technological transition of the kind we’re heading into is driving and will continue to drive for people around the world. The vast majority do not understand the technology; they have no basis for making sense of what it is doing to the world around them or to themselves—and, most immediately, what it means for their well-being and specifically their jobs.
The utopian vision from Silicon Valley of vast wealth creation alongside unspecified growth of “new and better jobs” is not the solution to this soft-power gap—it is for many people precisely the problem. That is because it can be seen as self-serving “happy talk” coming from owners of capital, who gain from the upside but are largely insulated from the downside risk to labor. For a generation that lived through what might have been a parallel transition—the adjustment to Chinese manufactured goods flooding the U.S. market after WTO accession in the early 2000s—the happy talk narrative feels eerily familiar. It’s not at all surprising, then, that Americans are among the most pessimistic populations in the world when asked about the impact of AI on their lives. That’s nearly the opposite of China, where 85 percent of respondents believe that AI-powered products and services will be more beneficial than harmful. Put simply, right now Americans fear AI and the Chinese do not.
This gap matters a lot for both economic and political reasons, on both the domestic and global stages. Resistance to technology deployment in companies, local restrictions and protests against data centers, political movements that focus resentment on job displacement and energy bills—all of these will place a significant drag on U.S. gains from AI. And they are hardly an attractive model for other countries to want to be a part of.
How to fix this? First we need a pragmatic AI safety regime that reduces catastrophic risk. AI makes mistakes, most of which are matters of annoyance and inconvenience. But there is also real and demonstrated potential for catastrophic harms—enabling bioweapons, advanced cyberattacks, all the way up to and including escape from human control. A single catastrophic event—imagine a pandemic generated by AI-engineered pathogens, or a global AI-enabled cyberattack shutting down financial institutions and markets for weeks—could set AI development and acceptance back by years inside the United States (possibly less so in China). In 2025 two states (California and New York) paved the way with SB 53 and the RAISE Act, demonstrating that it’s possible to pass light-touch regulation for transparent security regimes that create minimal burden on the industry and won’t slow innovation. Critics don’t like these laws because they view any regulation at all as a constraint. That’s narrow-minded in the extreme. The federal government would do better to copy and make national the core provisions of these laws rather than to punish the states that have adopted them, or try to pre-empt them.
The U.S. also needs a national-level plan for how AI tools will impact the dissemination and manipulation of ideas. AI speech is increasingly outperforming human speech in persuasion. Left alone, this dynamic undermines the battle of ideas that John Stuart Mill believed would move people closer to consensus on what is true and false. That’s poisonous for democratic societies and an increasing drag on American power. The inverse is also likely—better truth discovery is a benefit to the U.S. and a detriment to autocratic China.
Lastly, return to the question of labor market impact. Earlier I critiqued the happy talk mantra that currently dominates U.S. discourse—that AI will make people more productive rather than replace them; that some jobs may be lost but more, new and better ones will be created; that so much wealth will drive a rising tide that lifts all boats. Let’s be honest—much of this is wishful thinking. Jobs are right now being destroyed, and many more will be destroyed in the second half of the decade. Many of the lost jobs will be among the professional classes, what used to be called “symbol analysts”—writers, software engineers, etc.
AI is already better than humans at many of the tasks these jobs are made up of, and it is improving at a pace that all but a very few human beings cannot match. Anthropic CEO Dario Amodei was criticized for going out on a limb and projecting a 20 percent unemployment rate in the near future. But he’s the closest thing to a truth teller here. There will be new jobs eventually—but the timing matters most, and the rate of job destruction is nearly certain to be many multiples of the rate of creation for the next few years, exactly the time horizon in which the AI cold war will be most intense. The grueling truth is that most 50-year-old unemployed symbol analysts are not going to find new jobs this decade. Most 20-year-olds have no idea what they should learn or what skills they should gain to have a chance at employment.
These are looming human tragedies as well as the raw ingredient of political dynamite that might make the populist waves of the last decades seem mild. Universal basic income—even if it were politically feasible—won’t solve this problem because the numbers simply don’t make sense. Labor income replacement isn’t the answer. A stake in capital is far more meaningful. Americans need to own those stakes broadly. On this logic, the Trump administration once again has a part of the puzzle in place with the concept of early capital stakes for newly born kids. This pilot program needs to be dramatically expanded, to nearly the entire working population of the U.S. And it needs to happen soon—as in this year.
These are dramatic policy propositions. They will be criticized as being far outside the Overton window. Or ideologically polarizing. Or excessive in terms of urgency—why not just wait and see how the AI race plays out? The answer goes back, once again, to disruptive innovation logic and the blindness of incumbent leaders to the rate of change. What’s true for AI in that respect is true for the Sino-American AI cold war. If we want to win, and we absolutely should, then radical actions of this kind are much less risky than waiting for the Chinese to find a better way.