
Dario Amodei, CEO of AI developer Anthropic, has reopened negotiations with Emil Michael, Deputy Undersecretary of Research and Engineering at the U.S. Department of Defense, aiming to reach an agreement before the company is officially designated a defense supply chain risk enterprise and thereby preserve its military cooperation. According to the Financial Times, if Anthropic is formally designated, it will be excluded from the U.S. military procurement network.
Last week, the negotiations abruptly collapsed. Reports indicate that after a heated confrontation, Emil Michael accused Dario Amodei of being a “liar” and criticized him for having a “God complex.” At the core of the conflict was a fundamental disagreement over a data usage clause: the Pentagon demanded that Anthropic remove the contract restriction limiting the “analysis of large amounts of acquired data” as a prerequisite for accepting Anthropic’s other conditions.
Amodei stated explicitly in an internal memo that the clause is intended to prevent potential large-scale domestic surveillance. Anthropic treats it as a non-negotiable red line, comparable to its prohibition on AI being used in lethal autonomous weapons systems. Defense Secretary Pete Hegseth subsequently escalated the pressure, warning that if no consensus is reached, Anthropic will be officially designated a supply chain risk enterprise.
Despite the public standoff, Anthropic’s business ties with the U.S. military run far deeper than is commonly perceived.
This background makes the Pentagon’s tough stance highly contradictory: it relies on Claude for critical military missions while simultaneously threatening to classify the company as a security threat, a tension that has drawn widespread industry attention.
On Wednesday, several major tech trade organizations issued a joint open letter to Trump, warning that designating a U.S.-based AI company a supply chain risk could fundamentally undermine America’s advantage in the global AI race against China. The signatories include the Software and Information Industry Association, TechNet, the Computer and Communications Industry Association, and the Business Software Alliance, which together represent hundreds of U.S. tech companies, including Nvidia, Google (a subsidiary of Alphabet), and Apple.
The letter directly states, “Viewing a U.S. technology company as a foreign adversary rather than an asset” will hinder innovation and weaken the U.S. AI industry’s global competitiveness.
Q: Why does Anthropic refuse to remove the data analysis restriction clause?
Anthropic considers the clause a core safety barrier against large-scale domestic surveillance, on par with its red line against AI being used in lethal autonomous weapons. In his internal memo, Dario Amodei states explicitly that removing it would be tantamount to allowing the government to conduct mass data surveillance on citizens using Claude, a position the company refuses to compromise on.
Q: What are the actual consequences if Anthropic is designated as a supply chain risk enterprise?
Formal designation would exclude Anthropic from U.S. military and federal procurement networks. The existing $200 million contract could be terminated, and government contractors using Claude might be forced to sever their business relationships with the company. For Anthropic, this would be not only a financial blow but also a significant hit to its credibility in both corporate and government markets.
Q: Why are tech giants like Nvidia and Apple stepping in to defend Anthropic?
Their joint letter is not merely an endorsement of Anthropic but a warning that such a precedent could threaten the entire U.S. AI industry. If the government can label any domestic AI company a security threat by executive order, it could create a widespread chilling effect across the industry, undermining U.S. tech companies’ confidence and capacity to innovate in the global market.