
OpenAI Rewrites Contract, Anthropic Returns to Negotiate—The Chaos Continues

from the the-uncertainty-tax-at-work dept

In less than a week, the Pentagon blacklisted an AI company for having ethics, declared it a supply chain risk, watched its preferred replacement face a massive user revolt, and then sat down to amend the replacement’s contract to address the very concerns the blacklisted company had been raising all along. Meanwhile, the blacklisted company is reportedly back in negotiations with the same Pentagon that tried to destroy it, because—wouldn’t you know—its models are apparently better for what the military actually needs.

On Monday night, Sam Altman posted on X that OpenAI had amended its Defense Department agreement to include new language explicitly addressing domestic surveillance:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Is this better than the original contract language we flagged earlier this week? Probably! The explicit mention of “commercially acquired personal or identifiable information” is new and addresses the exact data type—geolocation, browsing history, the stuff data brokers sell about all of us—that reportedly was the final sticking point in the Anthropic negotiations. The language about “deliberate tracking, surveillance, or monitoring” is more concrete than the original contract’s vague reference to “unconstrained monitoring.”

Altman also noted that the Defense Department “affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA)” and that any such use “would require a follow-on modification to our contract.”

This sounds better than where they were before, but it’s genuinely hard to tell from the outside. And that difficulty—the opaque nature of what any of this means in practice—is the actual story here.

Because the problem with OpenAI’s deal was never just about the specific contract language. As we laid out earlier this week, the intelligence community has spent decades engineering legal definitions that let it conduct what any reasonable person would call mass surveillance while truthfully claiming otherwise. Whether this new amendment survives contact with those definitions is a question no outside observer can answer right now.

The bigger issue is what happens to innovation when the rules can change based on a cabinet secretary’s mood. The contract still references compliance with existing legal authorities—the same authorities that have been stretched and reinterpreted for years to permit exactly the kinds of data collection the new language purports to prohibit.

Anthropic’s Dario Amodei was characteristically blunt about the gap between OpenAI’s public framing and what the contract language actually delivers. In a memo to staff that has since leaked:

“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”

Damn.

He called OpenAI’s messaging around the deal “straight up lies” and described the whole thing as “safety theater.” You can dismiss some of that as competitive sniping, but Amodei was in the room for the Anthropic negotiations, and his characterization of what the Pentagon was actually demanding lines up with what the New York Times separately reported. His criticism is specific and technical: the Pentagon asked Anthropic to delete a “specific phrase about ‘analysis of bulk acquired data’” that was “the single line in the contract that exactly matched this scenario we were most worried about.” OpenAI’s original contract conspicuously lacked any such language. The amendment addresses this, at least on its face. Whether it does so in a way that actually binds the Pentagon’s behavior is a different question.

But the contract language debate, as important as it is, obscures the much larger problem.

Look at what happened at OpenAI’s all-hands meeting on Tuesday. According to a partial transcript reviewed by CNBC, Altman told his employees this:

“So maybe you think the Iran strike was good and the Venezuela invasion was bad…. You don’t get to weigh in on that.”

That’s the CEO of one of the most important AI companies on the planet telling his workforce that operational decisions about how their technology gets used in military actions are entirely up to Defense Secretary Pete Hegseth. The same Pete Hegseth who, just days earlier, tried to nuke an entire company for asking that AI not make autonomous kill decisions. The same Hegseth whose idea of contract negotiation was to issue what we described earlier this week as a “corporate death penalty” against Anthropic.

Speaking of Anthropic, that situation has gone from tragedy to farce and back again. The Financial Times reports that Amodei is now in direct talks with Emil Michael, a Hegseth lackey, to try to salvage a deal. This is the same Emil Michael (a scandal-ridden former Uber exec) who, just last week, called Amodei a “liar” with a “God complex”. And the same Defense Department that designated Anthropic a supply chain risk. The same administration that directed every federal agency to “immediately cease” all use of Anthropic’s technology.

And yet here they are, back at the table. Because, as multiple reports have made clear, Anthropic’s Claude models were already deployed on the Pentagon’s classified network and were quite useful for the Defense Department. The Pentagon apparently needs Anthropic’s technology because it’s actually good at the job. This just highlights how monumentally stupid the whole “supply chain risk” gambit was. You don’t issue a corporate death penalty against a company whose product you’re actively relying on for military operations unless you’re operating on pure spite rather than strategy.

The public, meanwhile, is making its own calculations under this cloud of uncertainty. ChatGPT uninstalls spiked 295% the day after the OpenAI deal was announced, while downloads dropped significantly. Anthropic’s Claude app jumped to the top of the App Store. One-star reviews of ChatGPT surged nearly 775% over the weekend.

Users who have zero ability to evaluate the legal intricacies of EO 12333 or the practical significance of “commercially acquired personal or identifiable information” are making choices based on the clear understanding that something has gone seriously wrong.

Call it the uncertainty tax: when users can’t verify whether a company’s safety commitments are real, they treat visible conflict with authority as proof of authenticity. They default to the company that got punished for having safety commitments, because the punishment at least suggests some principles were in play.

Getting punished for having principles is, perversely, the clearest indication that you had any, whether or not you actually did.

Altman himself seems to recognize that the rollout was a disaster. From his post:

One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.

“Looked” opportunistic is doing a lot of work in that sentence. But okay.

The deeper issue here goes beyond any one contract or any one company. What we’ve watched unfold over the past week is a case study in why you cannot build a functional technology industry under a petulant, arbitrary authoritarian regime.

Here’s what every AI company now knows: if you tell the government “no” on something—even something as basic as “our AI shouldn’t make autonomous kill decisions without human oversight”—the Defense Secretary may try to destroy your company, publicly call you treasonous, and bar anyone doing business with the military from working with you. If you tell the government “yes,” you may face a massive consumer backlash, lose hundreds of thousands of users, and find yourself amending contracts on the fly to address concerns you should have thought about before signing.

Seems like a rough way to encourage innovation in the AI space.

And the rules can change at any moment. This week it’s “give us unrestricted access for all lawful purposes.” Next week, the definition of “lawful” might shift. The week after that, maybe the administration decides it doesn’t like something else about your company and the threats start anew. Altman told his employees that Hegseth made clear OpenAI doesn’t “get to make operational decisions.” So the company writes the safety stack, crosses its fingers, and hopes the people who just tried to destroy its largest competitor over basic ethical commitments will honor the contract language.

This is the environment the AI industry’s biggest Trump boosters created for themselves. For months, the refrain on certain VC bro podcasts was that the Biden administration was going to destroy AI and hand the industry to China. In reality, Biden’s AI policy amounted to a toothless set of principles and some extra paperwork. It was annoying, sure. It did not involve the Defense Secretary threatening to obliterate companies or the president directing all federal agencies to stop using a specific American company’s technology.

And the irony of it all is that the market seems to be figuring this out even as the companies’ leadership teams scramble to pretend everything is fine. The same users who were happily using ChatGPT a week ago are fleeing to Claude—the product of the company the government tried to destroy—because they’ve correctly identified that a company that got punished for standing up to an authoritarian government is probably more trustworthy than one that rushed to fill the void.

Innovation requires predictability. It requires the ability to plan, to hire, to build product roadmaps that extend beyond next Friday’s presidential tweet. It requires knowing that if you build something good and compete fairly, the government won’t try to destroy you because you annoyed a cabinet secretary during contract negotiations. Every AI company—even the ones currently benefiting from Anthropic’s punishment—should be deeply unsettled by what happened last week.

Because the leopard that ate Anthropic’s face last Friday can eat yours next Friday. All it takes is one disagreement, one insufficiently sycophantic response, one moment of “duplicity” defined as “having principles.”

Altman seems to partially grasp this. He publicly stated that the decision to designate Anthropic as a supply chain risk was “a very bad decision” and that the Pentagon should offer Anthropic the same terms OpenAI agreed to. That’s the right thing to say when facing a PR crisis like this. But saying it while simultaneously benefiting from the decision, while telling your employees they don’t get to have opinions about how their technology gets used in military operations, sends a somewhat mixed signal.

The lesson here has less to do with the specifics of any contract than with the fact that an impetuous, arbitrary, out-of-control authoritarian government is bad for innovation. I mean, it’s also bad for the public, society, and (arguably) the military. The US has led in innovation for decades in part because we had stable institutions and predictable rule of law.

But hey, at least nobody’s asking them to fill out compliance forms anymore. That was the real threat to American AI leadership.

Filed Under: ai, contracts, dario amodei, defense department, dod, sam altman, surveillance, uncertainty

Companies: anthropic, openai

