
Meaningless assurances.
In principle, the Pentagon’s position is defensible. Neither Anthropic nor any other private firm should have veto power over how the military conducts business. So long as Pete Hegseth’s department behaves lawfully, it’s fulfilling its obligations. If Amodei disagrees, he should expect to be dropped as the department’s AI vendor of choice.
When I say that this story is at bottom a humdrum contract dispute, that’s what I mean. Picking a side is hard—or would be in a world in which Americans hadn’t chosen to put a mafioso and his dissolute Fox News-host sidekick in charge of the U.S. military.
Because we don’t live in that world, we’re stuck with this question: Why would anyone take this kakistocracy’s word for it when it promises to follow the law in using Anthropic’s AI?
“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal),” Pentagon spokesman Sean Parnell said Thursday. That parenthetical is adorable. The Department of Defense (not War) has already conducted illegal operations under Trump’s and Hegseth’s leadership and is poised to do so again in Iran on a grand scale. The defense secretary himself has, in barely veiled terms, encouraged American troops to commit war crimes and moved to punish Democrats who simply warned service members not to obey unlawful orders.
The Trump-Hegseth Pentagon functions, by design, in a way that makes it difficult to know what the law is. Its assurances that Anthropic’s AI will be used exclusively for legal purposes are worth less than nothing.
“It is unbelievably rare that corporate ethics constrain government behavior, as opposed to the other way around,” Emma Isabella Sage marveled Wednesday in her piece for The Dispatch on this dispute, but it can’t be otherwise under the circumstances. Americans opted to be led by amoral postliberal cretins, so it fell to Dario Amodei to supply the restraints on his own technology.
Extortion.
The ethical vacuum inside the federal government isn’t the only hallmark of Trumpism in this case. Hegseth’s ruthless, possibly unlawful, and likely counterproductive approach to Anthropic also stinks of it.
Negotiations that involve the president and his cronies invariably lead to threats, which I’m sure they would rationalize as a matter of driving a hard bargain. But it’s possible to drive a hard bargain without vowing to ruin your opponent if he won’t agree to your terms or to seize what you want from him forcibly if he won’t hand it over.
That’s called extortion. Give us Greenland, or we might invade. Change your university’s rules, or we’ll end your federal funding. Settle my lawsuit against you, or your merger won’t be approved. Vote the way I want on this bill, or I’ll endorse your primary challenger. “Negotiations” with the current White House are exercises in coercion, with the federal government’s mind-boggling economic influence typically supplying the needed leverage.
That’s precisely what Hegseth is doing by abusing the “supply chain risk” designation with Anthropic: attempting to ruin the company by making it persona non grata to defense contractors nationwide. If he gets away with it, he could pull the same extortionate stunt on any other firm in the United States that does business with the Pentagon to pressure it to do his and Trump’s bidding. Dean Ball, a former AI adviser to the administration, didn’t mince words about it in an interview yesterday with The Bulwark. “This would be one of the worst things for the American business climate I have ever seen the government do,” he said.
But it’s worse than that. Hegseth was, and may still be, considering using the 1950 Defense Production Act, a law that lets the president direct private industry to produce certain “critical and strategic” goods, to force Anthropic to drop its ethical restrictions on how the military uses its AI software. That’s coercion in the most literal sense, a step beyond even what the Trump administration usually stoops to. And in this case, paired with the “supply chain risk” threat, it’s incoherent: “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models,” Ball told Politico.
Coherent or not, using the DPA to compel Anthropic’s acquiescence is a logical move for an administration of right-wing socialists that’s already carved out equity shares for itself from a number of private companies. Hegseth and the White House aren’t seizing the means of production by claiming ownership of Amodei’s company, but by presuming to dictate the contractual terms under which Anthropic’s intellectual property is used, they’re converting a private enterprise into a sort of state asset.
To any fascist movement, those outside of it are either servants or outlaws. The DPA is an attempt to make Anthropic a servant; the “supply chain risk” nuke is an attempt to make it an outlaw. Go figure that Amodei might not trust the Pentagon to distinguish Americans from enemies when deploying killer AI-run drones if Hegseth can’t distinguish Americans from enemies when deciding who is and isn’t a risk to the supply chain.
Silly season.
I can’t help feeling a little silly getting exercised about this, though. It’s all so … familiar.
The dumb and nasty demagoguery being used by the White House and its flunkies to defend their position is familiar. “It’s a shame that [Dario Amodei] is a liar and has a God-complex,” Defense Undersecretary Emil Michael complained. “He wants nothing more than to try to personally control the U.S. military and is ok putting our nation’s safety at risk.”
I do not believe Amodei wants to personally control the U.S. military. (AI overlords have grander ambitions.) But if he did, how tremendously stupid would one have to be to put his technology in charge of classified military systems and to demand fewer restrictions on that technology?
Michael feels obliged to post dreck like that for the same reason Hegseth felt obliged to threaten to go nuclear on Anthropic, I assume. The domineering culture of Trump’s administration requires it. If you’re not behaving with gratuitous, off-putting, and probably counterproductive belligerence toward your opponents, you’re not “fighting” ruthlessly enough.
The fact that the White House is on the wrong side of public opinion in this matter is also quite familiar.
It happens a lot nowadays, as I noted recently, and will happen again if the president pulls the trigger on attacking Iran. It’s happening in the Anthropic dispute, too. Earlier this month David Shor’s firm polled the issue by asking respondents which comes closer to their view: Should the government require unrestricted access to all U.S. AI technology to ensure that we stay ahead of China, or should private companies be allowed to set ethical limits on how the government uses their technology?
Overall, the public split 21-54 on that choice. Swing voters split 24-51. Even Trump voters split 28-44. “The people unsurprisingly do not want killer robots and do not trust Trump/Hegseth/the Republican party to do the right thing without limits,” Shor concluded. I suspect the former is more of a factor than the latter—we’ve all seen The Terminator—but whatever the explanation, Americans appear to be on Anthropic’s side. If there’s an unpopular position on any issue, rely on the Trump administration to find it, claim it, and be really boorish about it.
The fact that Congress is nowhere to be found in this fiasco is also familiar, needless to say.
“While it’s nice that Anthropic is digging in their heels here, it’s insane that such questions as ‘how much killing will we let the killer robots do on their own’ are being hashed out as back-room handshakes between the military and its AI contractors in the first place,” Andrew Egger observed at The Bulwark, wondering where our august legislature is in all this. Former Air Force Secretary Frank Kendall made the same point in an op-ed today for the New York Times, calling on Congress “to pass, as part of comprehensive AI regulation, restrictions on the most dangerous uses of these tools despite the Trump administration’s strong resistance to such limits.”
Seems logical. Seems impossible, too: The president will not allow congressional Republicans to tie his hands in setting policy for what will soon be the most lucrative and powerful industry on Earth, assuming it isn’t already. If you thought he liked tariffs because of the quasi-dictatorial power that his monopoly over trade granted him, wait until he gets a taste of playing favorites with AI. Democrats would need to win close to a supermajority in both houses of Congress this fall to pass AI regulation over Trump’s veto next year, and that’s not happening.
And even if it did, you know how he feels about laws he doesn’t like.
Democracy and nationalism.
There’s one more thing that’s familiar about the Anthropic episode. Like so many of the daily political dramas we get spun up about, it probably won’t matter much.
None of us believes that the government will ultimately refrain from using AI to surveil Americans or from building self-guided killer drones, right? The latter, at least, is a military necessity: Once China fields fully autonomous airborne death merchants powered by superintelligence, the United States will have no choice but to keep pace. At the rate drone warfare is progressing in Ukraine, we might even see that sort of weapon deployed in battle if fighting drags on for another year or two.
Amodei acknowledges it too, noting in his statement yesterday that “fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense.” The technology isn’t ready yet, he stresses, and shouldn’t be deployed without proper “oversight” and “guardrails,” but he never calls it morally unconscionable or categorically rules out supplying it.
His position is “not yet,” not “never.” Not now—but soon, and probably sooner than we think.
The Pentagon doesn’t appear willing to wait, though, and might not have to. Recently, to put pressure on Anthropic, it signed a deal making xAI the second artificial intelligence firm authorized for use in its classified systems. If that name rings a bell, it’s because xAI is Elon Musk’s company; it’s the outfit behind Grok, the chatbot that serves Musk’s social media platform, Twitter.
The one that once turned Nazi and began calling itself “MechaHitler.” The one that let Twitter users create nearly naked sexual images of women—and children. That one. Pete Hegseth’s Pentagon likes it because, and here I quote the Wall Street Journal, “The looser controls on Grok, and Musk’s absolutist stance on free speech, have made it a more attractive choice to the Pentagon.”
That “loose” AI will be the one that replaces Anthropic in federal systems and will soon be handling mass surveillance and killer drones, presumably.
See why I say it’s hard to get worked up about this? Skynet is coming; which corporate logo it bears when it arrives hardly seems to matter.
Even so, I appreciate Amodei showing some spine. One line in his Thursday statement stood out: In explaining why his firm’s technology shouldn’t be used in certain military applications, he wrote, “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” With postliberals in charge of the United States and its military, it feels vaguely scandalous for a figure of influence to declare that liberalism should take priority over the nationalist imperative to target one’s enemies with utmost ruthlessness.
How nice to know that not every guy who’s careening toward the singularity that will destroy the world is a chud.