Gate News reports that U.S. federal judge Rita Lin in San Francisco has ruled that the Pentagon’s actions against Anthropic may violate the First Amendment and constitute retaliation for protected speech. Anthropic had publicly condemned the “War Department’s” actions, prompting a forceful government response. The case has now been transferred to a California court, where Anthropic is seeking to revoke the “Supply Chain Risk” label that led the government to ban its AI model Claude.
The judge noted that the government’s measures appear unrelated to national security and look more like punishment of Anthropic. Michael Mungan, representing Anthropic, argued that such actions are unprecedented, that the authority invoked is narrow in scope, and that it does not justify a blanket ban on the company’s AI technology. The judge emphasized that the core issue is whether the government acted unlawfully, not whether halting AI use is legitimate.
Anthropic had previously collaborated with multiple federal agencies and signed a $200 million contract with the Pentagon to operate systems on classified networks. However, negotiations over integrating Claude into the GenAI.mil platform stalled: the Pentagon sought full access for military purposes, while Anthropic explicitly opposed military applications of its technology and pushed for stricter usage restrictions. Government attorney Hamilton stated that the company is not refusing the contract but questioning the legality of military use.
Separately, Claude was recently upgraded to operate users’ computers autonomously to complete tasks such as presenting slides or sending meeting invites with attachments, joining the ranks of AI agents that can run without continuous user input. The popularity of OpenClaw has accelerated this trend, with Anthropic and OpenAI models able to receive tasks via WhatsApp or Telegram, access local files, and automate work efficiently.
This ruling not only bears on the military use and contract compliance of AI technology but also raises concerns about potential government overreach in digital innovation. The Anthropic case highlights the complex tension between AI development, free speech, and technological autonomy, with potential future impact on other innovative companies and the global AI ecosystem.