The Pentagon’s Dangerous Adoption of AI – Emma Isabella Sage

The Pentagon’s breakneck plunge into AI integration accelerated last summer after it awarded contracts to OpenAI, Anthropic, Google, and xAI to implement their models on Department of Defense systems. The planned integrations have gone into effect one by one, with xAI’s Grok going live in late January 2026. Many, myself included, charitably assumed that the plan was to put these models to work on mundane, back-office tasks. 

Then came the earth-shattering news that Anthropic’s Claude was involved in the operation to capture Venezuelan dictator Nicolás Maduro. While the precise nature of this involvement has not yet become public knowledge, the fallout could end Anthropic’s relationship with the Pentagon. 

I am one of many researchers who have contributed to the increasingly voluminous body of work on AI governance in defense. The guidelines in that literature tend to be conservative by design, but even by the loosest standards, the recent integrations should have been prohibited. That's not because artificial intelligence has no role in national defense, but because this particular approach is fundamentally irresponsible.

Anthropic in the spotlight.

Anthropic's $200 million DOD contract is jeopardized by an increasingly heated conflict over ethical safeguards for the use of its tools, specifically the company's unwillingness to allow Claude to be used for autonomous lethal operations or domestic surveillance. Claude is the only one of the four AI tools not available on the government's GenAI.mil platform for military and civilian employees of the Defense Department, yet it is the only one integrated into classified systems. Its involvement in the Maduro operation occurred via its partnership with Palantir, which is itself an ethical quagmire.
