Trump administration to review AI models from Google, Microsoft, xAI ahead of public release
The Trump administration on Tuesday announced that it had reached new agreements with Microsoft, Google DeepMind and Elon Musk’s xAI to expand collaboration with Big Tech companies in researching artificial intelligence (AI) and security.
The Center for AI Standards and Innovation (CAISI), which is part of the Commerce Department’s National Institute of Standards and Technology, will work with the AI companies on pre-deployment evaluations as well as targeted research into frontier AI capabilities and AI security.
The new agreements build on previously announced partnerships between CAISI and the companies, supporting information-sharing, driving voluntary product improvements, and giving the government a clear understanding of AI capabilities and the state of international AI competition.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said CAISI Director Chris Fall. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”
The Trump administration announced the AI agreements through CAISI with several leading tech companies. (Win McNamee/Getty Images)
Developers frequently provide CAISI with models that have reduced or removed safeguards to evaluate national security-related capabilities and risks.
Evaluators from across government agencies may participate in evaluations and regularly provide feedback through the TRAINS Taskforce, which is a group of interagency experts focused on AI national security concerns.
CAISI’s agreements support testing in classified environments and were drafted with flexibility to respond to continued advancements in AI.
Microsoft said the CAISI partnership is needed to build trust and confidence in advanced AI systems. (Cesc Maymo / Getty Images)
Microsoft chief responsible AI officer Natasha Crampton said in a release that the agreements will “advance the science of AI testing and evaluation, including through collaborative work to test Microsoft’s frontier models, assess safeguards, and help mitigate national security and large-scale public safety risks.”
Crampton said that “ongoing, rigorous testing is essential to building trust and confidence in advanced AI systems.”
Google’s DeepMind unit also signed the new agreements with CAISI. (Marlena Sloss/Bloomberg via Getty Images)
“Well-constructed tests help us understand whether our systems are working as intended and delivering the benefits they are designed to provide. Testing also helps us stay ahead of risks, such as AI-driven cyberattacks and other criminal misuses of AI systems, that can emerge once advanced AI systems are deployed in the world,” Crampton explained.
Microsoft also announced a similar agreement with the United Kingdom’s AI Security Institute (AISI) to govern AI testing and evaluation.