On March 3, news continued to spread about the AI cooperation agreement between OpenAI and the U.S. Department of Defense. Several current and former OpenAI employees recently discussed the agreement's safety boundaries, potential uses, and the company's differences in stance with Anthropic on the social platform X, drawing widespread attention in the tech and policy sectors.
OpenAI CEO Sam Altman previously confirmed that the company had reached a cooperation agreement with the U.S. Department of Defense, allowing the Department to use OpenAI's AI models under certain conditions. The partnership quickly sparked controversy because another AI company, Anthropic, had previously refused similar terms, citing concerns that its model Claude could be used for large-scale domestic surveillance or in autonomous lethal weapons systems.
In response, OpenAI researcher Boaz Barak stated that describing OpenAI's contract as "weakening safety red lines" is inaccurate. He pointed out that, based on the information currently available, the restrictions on AI model usage in OpenAI's agreement may be stricter than those in Anthropic's previous contract, particularly with respect to preventing large-scale domestic surveillance and autonomous weapons.
Meanwhile, former OpenAI policy research director Miles Brundage offered a different perspective. He suggested that, given the public statements from outside lawyers and the Pentagon, some employees might suspect that the company made concessions during negotiations and tried to frame them as technical cooperation. He later added, however, that OpenAI's internal structure is complex and that the teams involved in the negotiations may indeed be working to secure reasonable outcomes.
Another OpenAI engineer, Clive Chan, revealed that the company's contract includes safeguards prohibiting the use of AI for large-scale surveillance or autonomous lethal weapons. He added that he is pushing for greater transparency about the agreement in order to address public concerns.
Research scientist Muhammad Bavarian argued that it is unfair for the Pentagon to view Anthropic as a supply chain risk. He believes that if both sides remain calm during negotiations, there is still an opportunity to bridge their differences.
Additionally, researcher Noam Brown pointed out that the agreement's initial wording did raise some questions, such as the possibility of AI being used for new forms of legal surveillance; these statements were adjusted after OpenAI updated its official blog.
Noam Brown also emphasized that the application of AI in national security agencies should undergo democratic review, rather than being decided solely by tech companies or intelligence agencies. Currently, plans to deploy related AI systems to agencies like the NSA have been temporarily halted pending further policy discussions.
This incident highlights the complex controversies surrounding the use of AI technology in national security. As AI capabilities continue to advance, discussions about AI military applications, surveillance risks, and ethical governance are expected to intensify.