According to Beating, members of the U.S. House Foreign Affairs Committee will travel to Silicon Valley next week to meet with representatives from Google, Anthropic, Meta, Tesla, Intel, Applied Materials, and Nvidia to discuss artificial intelligence and export controls. An industry roundtable is scheduled for May 4. The trip, led by Committee Chair Brian Mast (Republican) and ranking Democrat Gregory Meeks, follows the committee’s April 22 passage of the MATCH Act by a 36-8 vote.
The MATCH Act (Multilateral Alignment of Technology Controls on Hardware) would impose comprehensive export restrictions targeting China’s chip-manufacturing capabilities. The bill prohibits exports of DUV lithography equipment to China and designates five Chinese entities (SMIC, Changxin Memory, Yangtze Memory, Huahong, and Huawei) as restricted parties subject to a presumption of denial for exports, re-exports, repairs, and component supplies.
Related Articles
xAI Launches Grok Custom Voices, Lets Users Clone Their Own AI Voice in One Minute
According to Beating, xAI launched Grok Custom Voices and Voice Library, allowing users to record one minute of audio in the xAI console to generate a custom voice_id for use with Grok TTS and Voice Agent APIs. The feature supports applications including customer service agents, content creation,
GateNews · 10m ago
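The workflow above (record one minute of audio, receive a `voice_id`, then pass it to the TTS API) can be sketched as follows. Note this is purely illustrative: the endpoint URL, field names, and output-format parameter are assumptions, not xAI’s documented API, and only the `voice_id` concept comes from the announcement.

```python
# Hypothetical sketch: passing a custom voice_id to a Grok TTS endpoint.
# The URL and payload field names below are assumed for illustration only;
# consult xAI's actual API documentation before use.
import json

API_URL = "https://api.x.ai/v1/tts"  # assumed endpoint, not verified


def build_tts_request(voice_id: str, text: str) -> dict:
    """Assemble a hypothetical TTS request payload for a cloned voice."""
    return {
        "voice_id": voice_id,  # id returned after the one-minute recording
        "input": text,         # text to synthesize in the cloned voice
        "format": "mp3",       # assumed output-format parameter
    }


payload = build_tts_request("voice_abc123", "Hello from a cloned voice.")
print(json.dumps(payload))
```

In a real integration the payload would be POSTed to the endpoint with an API key; the sketch stops at payload construction since the actual request schema is not public.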
After HBM, is the AI memory bottleneck HBF? Turing Award winner David Patterson: Inference will redefine storage architecture
Turing Award winner David Patterson said that as AI shifts from training to large-scale inference, the next memory bottleneck may be not HBM but HBF (high-bandwidth flash). HBF stacks NAND flash for high capacity and low power consumption, handling access to context and intermediate data during inference, a division of labor distinct from speed-focused HBM. SK hynix and SanDisk are driving standardization and expect that demand for HBF may surpass HBM by 2038.
ChainNews Abmedia · 47m ago
Anthropic’s Code with Claude Developer Conference Opens in San Francisco on 5/6: Free Livestream Registration
Anthropic announced on May 1 that the “Code with Claude” developer conference will open in San Francisco on May 6 and, for the first time, expand into a three-city tour covering London (5/19) and Tokyo (6/10). In-person seats for the three main sessions were allocated by lottery and have all been claimed, but livestream registration remains open to everyone. To absorb excess demand for the in-person events, the San Francisco venue will add an extra “Extended” session on 5/7, designed specifically for early-stage founders and independent developers.
Three-city tour: 5/6 San Francisco, 5/19 London, 6/10 Tokyo
Code with Claude is a developer conference hosted by Anthropic; this is the second time the San Francisco flagship event has been held. All three stops share the same program: full-day on-site workshops, demos of the latest features, and Claude along every line
ChainNews Abmedia · 1h ago
OpenAI releases GPT-5.5 launch-week data: API revenue growth hits a new high, Codex doubles
On May 1, one week after GPT-5.5 went live, OpenAI posted three data points from its official account: it rated the launch the “strongest release of all time,” API revenue is growing more than twice as fast as after any previous model release, and Codex doubled its revenue in less than seven days. OpenAI attributed the results to continued growth in enterprise demand for agentic coding tools, and said the release followed the same product cadence as Anthropic’s Mythos, which launched April 30 in a concurrent matchup with GPT-5.5-Cyber.
Three disclosed data points: API revenue growth pace, Codex doubling in 7 days, strongest release
The three key figures OpenAI shared come from official posts, with no detailed financial report attached: first, GPT-5.5 is the “strongest revenue growth” model of all time…
ChainNews Abmedia · 1h ago
OpenAI Launches Codex Pets, AI-Powered Virtual Companion with Custom Generation
According to Beating, OpenAI has added a new "Codex Pets" feature to the Codex desktop application, allowing users to spawn and interact with an animated virtual companion. Users can activate a pet by typing /pet in the editor. The feature functions as an agent status indicator, displaying a
GateNews · 1h ago
AISI assessment: GPT-5.5’s network-attack capabilities are on par with Anthropic’s Mythos
In May, AISI released an assessment of GPT-5.5’s cyberattack capabilities: on expert-difficulty tasks, GPT-5.5 scored 71.4% versus 68.6% for Mythos Preview, a gap within the margin of error and essentially a tie. GPT-5.5 is the second system, after Mythos, able to automatically complete the 32-step enterprise intrusion scenario “The Last Ones.” AISI also found a universal jailbreak, developed in about six hours, that bypasses the model’s malicious-query filtering. AISI will monitor the timing of the next evaluation round and OpenAI’s response.
ChainNews Abmedia · 3h ago