OpenAI's AI collaboration with the Pentagon sparks controversy: employees speak out publicly, with safety red lines and surveillance risks becoming the focus

GateNews

On March 3, news continued to circulate about the AI cooperation agreement between OpenAI and the U.S. Department of Defense. Several current and former employees recently discussed the deal's safety boundaries, potential uses, and OpenAI's differences in stance with Anthropic on the social platform X, drawing widespread attention in the tech and policy sectors.

OpenAI CEO Sam Altman previously confirmed that the company had reached a cooperation agreement with the U.S. Department of Defense, allowing the use of OpenAI’s AI models under certain conditions. The partnership quickly sparked controversy because another AI company, Anthropic, had previously refused similar terms, citing concerns that their model Claude could be used for large-scale domestic surveillance or autonomous lethal weapons systems.

In response, OpenAI researcher Boaz Barak stated that describing OpenAI's contract as "weakening safety red lines" is inaccurate. He pointed out that, based on currently available information, the restrictions on AI model usage in OpenAI's agreement may be stricter than those in Anthropic's previous contract, particularly regarding the prevention of large-scale domestic surveillance and autonomous weapons.

Meanwhile, former OpenAI policy research director Miles Brundage offered a different perspective. He suggested that, given the public statements from external lawyers and the Pentagon, some employees might suspect the company made concessions during negotiations and tried to frame those concessions as technical cooperation. However, he later added that OpenAI's internal structure is complex, and the teams involved in the negotiations may indeed be working to secure reasonable outcomes.

Another OpenAI engineer, Clive Chan, revealed that the company’s contract includes safeguards prohibiting the use of AI for large-scale surveillance or autonomous lethal weapons. He also stated that he is pushing for more transparency about the agreement to address public concerns.

Research scientist Muhammad Bavarian argued that it is unfair for the Pentagon to view Anthropic as a supply chain risk. He believes that if both sides remain calm during negotiations, there is still an opportunity to bridge their differences.

Additionally, researcher Noam Brown pointed out that the initial wording of the agreement did raise some questions, such as the possibility of AI being used for new forms of legally sanctioned surveillance. However, after OpenAI updated its official blog, those statements were adjusted.

Noam Brown also emphasized that the application of AI in national security agencies should undergo democratic review, rather than being decided solely by tech companies or intelligence agencies. Currently, plans to deploy related AI systems to agencies like the NSA have been temporarily halted pending further policy discussions.

This incident highlights the complex controversies surrounding the use of AI technology in national security. As AI capabilities continue to advance, discussions about AI military applications, surveillance risks, and ethical governance are expected to intensify.
