Gate News update: On Wednesday, April 9, the U.S. Court of Appeals for the District of Columbia Circuit rejected Anthropic’s emergency motion seeking to block the U.S. Department of Defense from listing it as a national security supply chain risk. A three-judge panel ruled that the government’s interest in controlling AI technology procurement during an active military conflict outweighs any financial or reputational harm Anthropic might suffer.
The dispute began in July 2025, when Anthropic and the Pentagon signed a contract to deploy Anthropic’s AI model, Claude, on classified networks. Talks broke down in February of this year: the government demanded that Anthropic allow the military to use Claude without restrictions, while Anthropic maintained that its technology should not be used in lethal autonomous weapons or for large-scale domestic surveillance of U.S. citizens. In late February, President Trump ordered all federal agencies to stop using Anthropic products, and Anthropic sued the Trump administration in March.
Previously, the U.S. District Court for the Northern District of California had issued a preliminary injunction temporarily halting Trump’s directive. But because of provisions of federal procurement law, Anthropic was required to file separate lawsuits in both the U.S. District Court in California and the U.S. Court of Appeals for the District of Columbia Circuit. In this appeal, the court acknowledged that Anthropic could face some degree of irreparable harm absent a stay and said the case should be expedited. Acting U.S. Attorney General Todd Blanche called the ruling a “major victory” for “military readiness capabilities.”