February 28 News: Sam Altman announced today on X that OpenAI has reached an agreement with the U.S. Department of Defense to deploy its AI models on classified U.S. networks. Altman said that throughout the discussions, the Department showed a strong commitment to AI safety, and that both sides aim to achieve the best possible results through collaboration.
Altman emphasized that the safety and inclusiveness of AI are core to OpenAI's mission. The company's two key safety principles, prohibiting large-scale domestic surveillance and ensuring that humans remain responsible for any use of force, have been recognized by the Department, reflected in law and policy, and written into the formal agreement between the two parties.
To ensure the models operate safely, OpenAI will station forward-deployed engineers (FDEs) and run the models only on cloud networks. In addition, the two sides plan to develop technical safeguards against misuse or anomalous model behavior. Altman also called on the Department of Defense to offer the same cooperation terms to all AI companies, stressing the importance of fair competition and shared responsibility.
Overall, the agreement not only marks the U.S. government's official adoption of cutting-edge AI within sensitive networks, but also underscores the practical implementation of AI safety and ethical responsibility in military settings. Altman said OpenAI will continue working to make AI beneficial to all of humanity while promoting responsible technological development in complex, chaotic, and even dangerous environments.
The collaboration has drawn widespread industry attention. It is seen as potentially opening new pathways for AI in defense and critical infrastructure, while also renewing debate over the safety of autonomous weapons systems and where human decision-making responsibility should lie.