Leaked: White House weighs AI pre-release review mechanism; officials walk back Hassett’s remarks the next day

MarketWhisper

White House AI Pre-Release Review Mechanism Discussion

Kevin Hassett, director of the White House National Economic Council (NEC), said in a Fox Business interview on May 7 that the Trump administration is studying an executive order requiring AI models to undergo government security reviews before they are publicly released, drawing an analogy to the FDA’s pre-market approval process for drugs. But according to Politico’s report on May 8, senior White House officials later said the remark was “selectively quoted.”

Event Timeline: New York Times Report Exposes White House Policy Contradictions

On May 4, 2026, the New York Times reported that the White House was discussing creating a pre-release review mechanism for AI models, which was characterized at the time as “under consideration.” On May 7, 2026, in an appearance on Fox Business, Kevin Hassett publicly said: “We are looking at whether we can, via an executive order, require that future AI, which might create vulnerabilities, can only be deployed after it demonstrates that it is safe—just like FDA drugs.”

Late on the night of May 7, 2026, White House Chief of Staff Susie Wiles posted on X that the government “is not responsible for picking winners and losers,” and said that the safe deployment of powerful technology should be driven by “America’s outstanding innovators rather than bureaucratic agencies.” According to her official account’s record, it was only the fourth post Wiles had published since creating the account.

Citing three anonymous sources, Politico reported that the White House is discussing having intelligence agencies conduct preliminary assessments before AI models are publicly released. One U.S. government official said in the report that one purpose of the move is to “ensure the intelligence community studies and uses these tools before adversaries like Russia and China understand the new capabilities.”

Agencies Involved and Policy Framework

CAISI expands voluntary AI safety assessment agreements

The Center for AI Standards and Innovation (CAISI), under the Department of Commerce, announced this week that it has signed AI safety assessment agreements with Google DeepMind, Microsoft, and xAI, expanding the scope beyond OpenAI and Anthropic, which were already covered. CAISI’s voluntary assessment framework has been in place since 2024.

Deputy Defense Secretary publicly supports pre-evaluation mechanism

On May 8, 2026, Deputy Secretary of Defense Emil Michael, speaking at an AI conference in Washington, publicly backed government pre-release evaluation of AI models. He cited Anthropic’s Mythos system as a reference case, saying that such models “will eventually show up” and that the government must establish response mechanisms.

Trump administration and Anthropic policy backdrop

According to Politico, in March 2026, Defense Secretary Pete Hegseth put Anthropic on a risk list citing supply-chain risks, and banned its models from being used for Department of Defense contracts; afterward, Trump separately required federal agencies to stop using Anthropic products within six months. Meanwhile, last month Anthropic disclosed that its AI system Mythos has powerful software vulnerability-discovery capabilities that go beyond the safety thresholds required for public release, and multiple federal agencies subsequently submitted requests to integrate it. On May 8, 2026, OpenAI announced a limited preview of GPT-5.5-Cyber, a new tool designed to detect and fix network vulnerabilities.

Industry opposition to mandatory review mechanisms

Daniel Castro, president of the Information Technology and Innovation Foundation (ITIF), said in a Politico report: “If pre-market approval can be denied, that’s a big problem for any company. If one competitor gets approved and another doesn’t, the weeks or months gap in market access will have a huge impact.” ITIF funders include Anthropic, Microsoft, and Meta.

In the same report, a senior White House official said: “There are definitely one or two people who are very enthusiastic about government regulation, but they are just a few.” The official was granted anonymity on the grounds that the discussion involves sensitive policy matters.

Frequently Asked Questions

When and where did Kevin Hassett make the AI pre-review remarks?

According to Politico, on May 7, 2026, during a Fox Business interview, Kevin Hassett publicly said that the government is considering an executive order requiring AI models to pass government security reviews before they are released, drawing an analogy to the FDA drug approval process.

On what basis did the White House deny Hassett’s remarks?

According to Politico’s May 8, 2026 report, senior White House officials said Hassett’s remarks were “a bit selectively quoted,” and that the White House’s policy direction is to partner with companies rather than pursue government regulation. Chief of Staff Susie Wiles also posted on X to reaffirm that the government does not intervene in market choices.

What new AI safety assessment agreements did CAISI add this week?

According to CAISI’s statement this week, the newly added agreements cover Google DeepMind, Microsoft, and xAI, in addition to OpenAI and Anthropic that were already covered. The voluntary assessment framework has been in place since 2024.

