OpenAI has announced a new network protocol for AI supercomputers, MRC (Multipath Reliable Connection), and has open-sourced it through the Open Compute Project (OCP). The technology was co-developed by OpenAI and industry partners including AMD, Microsoft, NVIDIA, Intel, and Broadcom. Its goal is to address the GPU-to-GPU data-transmission bottleneck in massive AI training clusters.
The real bottleneck in AI training is how GPUs communicate with each other
OpenAI says that, with ChatGPT's weekly users now surpassing 900 million, AI systems are becoming infrastructure-level services. To support the training and inference needs of next-generation models, OpenAI believes that not only must the models themselves evolve; the network architecture must be redesigned as well.
In a technical article, OpenAI points out that a single step in large-model training can involve data exchanges among millions of GPUs. If even one transfer is delayed, the entire training synchronization can stall, leaving large numbers of GPUs idle.
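The effect described here is easy to see in miniature: in a synchronous training step, completion is gated by the slowest transfer. The toy model below is purely illustrative (the link counts and latencies are invented, not from OpenAI's stack); it shows how one congested link dominates the step time for everyone.

```python
import random

def step_time(link_latencies_us):
    """A synchronous step finishes only when the slowest link does."""
    return max(link_latencies_us)

random.seed(0)
# 10,000 hypothetical links, each normally ~50 microseconds.
latencies = [50.0] * 10_000
print(f"healthy step:  {step_time(latencies):.0f} us")

# One congested link (say, 20 ms) stalls the whole step,
# leaving every other GPU idle while it waits.
latencies[1234] = 20_000.0
print(f"one slow link: {step_time(latencies):.0f} us")
```

With ten thousand healthy links, a single 20 ms outlier makes the step 400x slower than the common case, which is why per-link jitter matters so much at cluster scale.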
And as AI supercomputers keep growing, problems such as network congestion, switch failures, and latency jitter worsen rapidly. OpenAI considers this one of the core technical challenges of the Stargate supercomputer project.
Most data-center networks have traditionally used single-path transmission. The biggest change MRC brings is that the same piece of data can be distributed across hundreds of paths simultaneously.
What is MRC? OpenAI: Make AI networks automatically dodge obstacles
According to OpenAI and AMD, the core concept of MRC is:
Break data into pieces and route it across multiple paths at the same time
Automatically bypass failures at the microsecond level
Reduce latency caused by network congestion
Keep GPUs operating in sync
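The four ideas above can be sketched in a few lines. To be clear, MRC's actual wire format and APIs are not disclosed in this article, so everything below (the chunk size, the `spray`/`reassemble` functions, the path-health flags) is a hypothetical illustration of multipath spraying with failover, not OpenAI's implementation.

```python
CHUNK = 4  # bytes per chunk; tiny on purpose, for illustration

def spray(payload: bytes, paths: list) -> list:
    """Split the payload into chunks and assign each chunk to a path
    round-robin, skipping any path currently marked unhealthy."""
    healthy = [p for p in paths if p["up"]]
    if not healthy:
        raise RuntimeError("no healthy paths")
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    return [(healthy[i % len(healthy)]["id"], c) for i, c in enumerate(chunks)]

def reassemble(assignments: list) -> bytes:
    """Receiver side: concatenate chunks in order. (Order is implied by
    list position here; a real protocol carries sequence numbers.)"""
    return b"".join(c for _, c in assignments)

paths = [{"id": n, "up": True} for n in range(4)]
data = b"gradient-shard-0123"

plan = spray(data, paths)
assert reassemble(plan) == data

# Failover, conceptually: mark path 2 down and re-spray; traffic
# shifts onto the remaining paths and the data still arrives intact.
paths[2]["up"] = False
plan = spray(data, paths)
assert reassemble(plan) == data
assert all(pid != 2 for pid, _ in plan)
print("re-sprayed across", len({pid for pid, _ in plan}), "healthy paths")
```

The point of the sketch is the failure model: because no chunk is pinned to one route, a dead or congested path costs a rebalance rather than a stalled training step.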
AMD likens traditional AI networks to highways with only a single route: once there is congestion or an accident, all progress is affected. MRC, in contrast, is like an intelligent traffic system capable of real-time detours. AMD put it bluntly: "The real bottleneck in scaling AI is no longer GPUs and CPUs, but the network."
Why is OpenAI designing a network protocol itself?
The signal from this release is clear: AI competition is no longer just about models, but about the entire supercomputer infrastructure. OpenAI notes in the article that, before Stargate, it had built and maintained three generations of AI supercomputers with its partners. From that experience, OpenAI concluded that to use compute effectively at Stargate scale, the entire stack must become significantly less complex, including the network layer.
In other words, future frontier-model competition will hinge not only on who has the stronger model, but on who can keep hundreds of thousands, even millions, of GPUs operating efficiently in sync.
Behind MRC is Stargate: OpenAI's Manhattan Project
The background behind MRC is Stargate LLC. Stargate is a large AI-infrastructure initiative driven by OpenAI, SoftBank Group, Oracle Corporation, and MGX, with an initial goal of investing up to $50 billion in AI infrastructure in the United States. OpenAI says it has already surpassed the original milestone target of 10 GW, and that more than 3 GW of new AI infrastructure capacity has been added in the most recent 90 days.
The Stargate supercomputer in Abilene, Texas, is one of MRC's main deployment sites. OpenAI notes that MRC has been integrated into its latest 800 Gb/s network interfaces and is running in real, large-scale training clusters.
The article "OpenAI releases the MRC supercomputer network protocol, collaborating with NVIDIA, AMD, and Microsoft to build Stargate infrastructure" was first published on ChainNews ABMedia.