Breaking News! SpaceX acquires xAI with a valuation of $1.25 trillion, Elon Musk creates a "Space + AI" giant

Elon Musk has confirmed the merger of SpaceX and xAI at a combined valuation of $1.25 trillion, with plans to deploy data centers in orbit and pave the way for a mid-2026 IPO.
(Background recap: SpaceX applies to the US FCC to launch millions of satellites to build solar-powered data centers, Musk’s space AI gamble)
(Additional background: Musk’s space company SpaceX reportedly plans to go public next year! Valuation expected to reach $800 billion, surpassing OpenAI)
Table of Contents

xAI burns $1 billion a month, SpaceX cash flow becomes critical
Orbital data centers: avoiding Earth’s power limitations
Power grid alerts on Earth, space becomes the final frontier
How far are space computing centers? Five hurdles before landing
Elon Musk announced that his space company SpaceX and artificial intelligence firm xAI are officially merging. According to TechCrunch, this deal combines SpaceX’s $1 trillion valuation with xAI’s $25 billion valuation, creating a private enterprise with a total valuation of $1.25 trillion.
The merger will be conducted entirely through a stock swap, with xAI shareholders exchanging their shares for SpaceX stock. Musk has established a new acquisition entity in Nevada to execute the deal. The consensus is that this completes asset integration ahead of the highly anticipated SpaceX IPO in mid-2026.
xAI burns $1 billion a month, SpaceX cash flow becomes critical

From a financial perspective, xAI currently burns about $1 billion per month and urgently needs stable cash flow to sustain operations. SpaceX, by contrast, enjoys robust government contracts and satellite launch revenue, which together account for about 80% of its total revenue, with an EBITDA margin of around 50%. SpaceX and Tesla have each previously invested $2 billion in xAI.
One of the core strategies of the merger is to address the power and cooling challenges faced by AI computing infrastructure. Currently, xAI’s land-based data center in Memphis faces environmental review challenges, and Musk’s solution is to deploy data centers into space orbit.
Orbital data centers: avoiding Earth’s power limitations
Recent reports indicate that SpaceX submitted an application to the U.S. Federal Communications Commission (FCC) on January 30, proposing to deploy up to one million solar-powered data center satellites and move the core of AI computing off the ground and into low Earth orbit.
This merger not only aims to alleviate xAI’s funding pressure but also marks the formation of an integrated “space + AI” commercial ecosystem. As the 2026 IPO approaches, how this private giant will influence global technology and capital markets remains to be seen.
Power grid alerts on Earth, space becomes the final frontier
Training and inference for AI models consume enormous amounts of power and cooling water, yet land-based data centers are increasingly constrained by land availability, power quotas, and water restrictions.
According to the World Economic Forum, electricity for space data centers could cost as little as $0.005 per kWh, about one-fifteenth of the average wholesale price on the ground, while the vacuum environment eliminates the need for cooling water entirely, a significant relief compared with traditional data centers that consume millions of tons of water.
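As a quick sanity check on the "one-fifteenth" figure above, the annual power bill of a hypothetical 100 MW facility can be compared at the two prices. The facility size and the implied $0.075/kWh ground price are illustrative assumptions, not figures from the report:

```python
# Back-of-envelope electricity-cost comparison.
# Only the $0.005/kWh space figure comes from the text; the 100 MW facility
# and the 15x ground multiple are illustrative assumptions.

SPACE_PRICE = 0.005              # $/kWh, figure quoted above
GROUND_PRICE = SPACE_PRICE * 15  # $/kWh, implied by the "one-fifteenth" ratio
FACILITY_MW = 100
HOURS_PER_YEAR = 8760

annual_kwh = FACILITY_MW * 1000 * HOURS_PER_YEAR  # kWh drawn per year

ground_cost = annual_kwh * GROUND_PRICE  # ~$65.7M per year
space_cost = annual_kwh * SPACE_PRICE    # ~$4.4M per year
print(f"Ground: ${ground_cost/1e6:.1f}M/yr, space: ${space_cost/1e6:.1f}M/yr, "
      f"savings: ${(ground_cost - space_cost)/1e6:.1f}M/yr")
```

At this scale the price gap alone is worth roughly $60M a year per facility, which is the "long-term marginal cost advantage" the filing points investors toward.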
SpaceX emphasized in its filings:
This is the first step toward a stellar civilization, not just solving current bottlenecks, but fully harnessing solar energy.
As with Musk’s past framing of extreme goals, this statement ties the energy dividend to the advancement of civilization, steering investors toward long-term marginal cost advantages.
How far are space computing centers? Five hurdles before landing
Despite the inspiring vision, several unavoidable engineering and economic hurdles stand between the FCC application and its realization.
First, the tension between launch cost and deployment scale. Even though Falcon 9 has brought the cost to orbit down to about $2,700 per kilogram, and Starship aims even lower, a satellite node capable of real computation—containing servers, solar panels, cooling systems, and communication modules—far outweighs a typical communications satellite. Deploying hundreds of thousands of such units would require an astronomical number of launches and expenditures.
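The scale problem can be made concrete with a rough calculation. The only figure below taken from the text is the ~$2,700/kg Falcon 9 cost; the node mass and fleet size are illustrative assumptions:

```python
# Rough scale of the implied launch bill.
# COST_PER_KG comes from the text; node mass and fleet size are assumptions.

COST_PER_KG = 2700    # $/kg to orbit on Falcon 9 (figure from the text)
NODE_MASS_KG = 5000   # assumed mass of one compute-capable satellite node
FLEET_SIZE = 100_000  # "hundreds of thousands" taken at its lower bound

per_node = COST_PER_KG * NODE_MASS_KG  # launch cost for a single node
total = per_node * FLEET_SIZE          # launch cost for the whole fleet
print(f"~${per_node/1e6:.1f}M per node, ~${total/1e12:.2f}T for the fleet")
```

Even at today's record-low Falcon 9 pricing, launch alone for such a fleet would cost on the order of a trillion dollars, which is why the economics hinge on Starship driving the per-kilogram figure down dramatically.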
Second, the computational bottleneck of space-grade hardware. The GPUs and high-bandwidth memory used in ground data centers were not designed for the space environment. Cosmic radiation causes single-event upsets that corrupt computation, and extreme temperature swings (up to 120°C on the sun-facing side and down to -150°C in shadow) pose severe stability challenges. Radiation-hardened chips currently lag commercial consumer chips by about two to three generations.
Running large models for inference in orbit still faces fundamental hardware gaps.
Third, cooling is not as simple as it sounds. A vacuum does eliminate the need for cooling water, but it also rules out convective heat transfer: heat can only leave by radiation. Radiative efficiency depends on surface area and temperature, so satellites need large radiators, which add weight and volume and collide with limited launch capacity.
The International Space Station’s cooling system weighs several tons—an example of this challenge.
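The radiator problem follows directly from the Stefan-Boltzmann law: radiated power scales with area and the fourth power of temperature. A minimal sizing sketch, where the emissivity, radiator temperature, and heat load are all illustrative assumptions:

```python
# Radiator sizing via the Stefan-Boltzmann law: P = e * sigma * A * T^4.
# Emissivity, radiator temperature, and heat load are illustrative assumptions.

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9    # assumed for a typical radiator coating
T_RADIATOR = 300.0  # K, assumed radiator operating temperature

def radiator_area_m2(heat_load_w: float) -> float:
    """Area needed to reject heat_load_w by radiation alone (ignores absorbed
    sunlight and Earth IR, which make the real required area larger)."""
    return heat_load_w / (EMISSIVITY * SIGMA * T_RADIATOR**4)

# A 1 MW compute node needs radiator area in the thousands of square meters:
print(f"{radiator_area_m2(1e6):.0f} m^2 for 1 MW")
```

Around 2,400 m² of radiator for a single megawatt of compute illustrates why thermal design, not just solar collection, drives the size and mass of these satellites.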
Fourth, the physical limits of latency and bandwidth. Low Earth orbit offers a one-way delay of roughly 4 to 20 milliseconds, which seems acceptable, but laser links between satellites carry far less bandwidth than ground fiber. A single submarine cable can move tens of Tbps, while current optical inter-satellite links remain in the Gbps range.
For distributed training requiring massive parameter synchronization, this bandwidth gap could be fatal. Space computing is more suitable for latency-tolerant batch inference rather than real-time training.
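The bandwidth gap can be quantified with a simple transfer-time estimate: how long one full gradient exchange takes over an orbital laser link versus a slice of ground fiber. The model size and both link rates below are illustrative assumptions, not measured figures:

```python
# Why Gbps inter-satellite links hurt distributed training: ideal time to move
# one full set of fp16 gradients for an assumed 70B-parameter model.
# Model size and link rates are illustrative assumptions.

PARAMS = 70e9        # assumed model size (parameters)
BYTES_PER_PARAM = 2  # fp16 gradients
payload_bits = PARAMS * BYTES_PER_PARAM * 8

def transfer_seconds(link_bps: float) -> float:
    """Ideal transfer time for one gradient exchange (no protocol overhead)."""
    return payload_bits / link_bps

print(f"10 Gbps laser link: {transfer_seconds(10e9):.0f} s per exchange")
print(f"1 Tbps ground link: {transfer_seconds(1e12):.1f} s per exchange")
```

Nearly two minutes per synchronization step over a 10 Gbps link, versus about a second on the ground, is why the paragraph above concludes that orbit suits latency-tolerant batch inference rather than tightly synchronized training.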
Fifth, maintenance and upgrades are difficult. Ground data centers can replace disks, upgrade GPUs, and repair nodes at any time. Satellites in orbit, once deployed, are essentially unrecoverable for hardware repairs. When chips are outperformed by next-generation products or components degrade due to radiation, the only “upgrade” is to launch new satellites and retire old ones—bringing us back to launch costs and orbital congestion issues.
While the FCC’s final decision is still months away, the application has already moved the idea of "sending data centers into space" from science fiction onto the policy agenda. The future ceiling of cloud computing may lie not beneath the sky, but beyond the visible horizon.