The Pentagon is considering cutting ties with Anthropic, with Defense Secretary Pete Hegseth close to designating the company a “supply‑chain risk.” Anthropic opposes the use of its AI model Claude in mass‑surveillance campaigns and fully autonomous military operations.
Anthropic, one of the largest artificial intelligence (AI) companies, is reportedly under fire from the Department of War for restricting the use of its AI models in activities the company considers unethical.
Axios reported that Defense Secretary Pete Hegseth is considering designating Anthropic as a supply chain risk, meaning that all contracts and ties with the company would have to be severed. In addition, all companies dealing with the Pentagon would have to abandon Anthropic's services.

The clash erupted after the company refused to allow the use of Claude, its flagship model, in mass surveillance campaigns and operations involving fully autonomous military equipment. The Pentagon, for its part, claims it should be able to harness Claude's capabilities for "all lawful purposes," without the company having a say in those decisions.
If Anthropic is ultimately designated as a supply chain risk, Claude would have to be withdrawn from the Pentagon's information systems. Claude is currently the only AI model with access to the organization's classified systems.
That access allowed Claude to take an active part in Operation Absolute Resolve, which led to the extraction of Venezuela's Nicolás Maduro in January. While the role the model played during the operation has not been fully disclosed, it represents an escalation in the use of AI for military campaigns.
The standoff could set a precedent for how AI companies deal with Western governments, establishing how much control these companies can retain over their models when they are used for military purposes.
An Anthropic spokesperson stated that the company was having “productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.”
Anthropic is under scrutiny from the Department of War for refusing to allow its AI model, Claude, to be used for activities it deems unethical, such as mass surveillance and autonomous military operations.
If designated as a supply chain risk, all contracts with the Pentagon would be cut, necessitating the withdrawal of Claude from the Pentagon’s information systems.
The company maintains that it is engaged in “productive conversations” with the Department of War, aiming to navigate complex ethical issues surrounding the use of its AI technology.
Claude participated in Operation Absolute Resolve, which facilitated the extraction of Venezuela’s Nicolás Maduro, marking a significant escalation in the military’s use of AI technologies.