Citrini Research analyst Jukan recently pointed out that Musk has leased the compute capacity of xAI’s Memphis Colossus 1 data center to Anthropic, a move that may be about valuation treatment ahead of a potential SpaceX IPO. Investors don’t like “a money-burning AGI lab,” but they do like “an AI infrastructure landlord that collects stable rent”: if SpaceXAI can prove before the IPO that xAI is not just an R&D unit but a cloud platform that turns idle compute into high-yield rental income, the market’s view of its cost of capital and its valuation narrative could change completely.
Jukan’s core judgment is that Musk isn’t handing his best training assets to a competitor. Colossus 1 mixes three generations of GPUs (H100, H200, and GB200), which makes it inefficient for large-scale synchronized training—The Information previously reported that xAI’s GPU utilization was as low as 11%—but well suited to inference, where tasks can be split across different GPUs without tight synchronization. So Musk is leasing out Colossus 1 while keeping Colossus 2, which is better suited to training frontier models, for himself: xAI turns its “training pain points” into “inference rental revenue.”
Musk helps Claude catch up to OpenAI’s compute
According to xAI’s official announcement, Colossus 1 has more than 220k NVIDIA GPUs, including H100, H200, and GB200, and Anthropic will use the full compute power of that data center. Jukan believes the key to this deal is not merely that Anthropic gets another supercomputer, but that the meaning of “deliverable compute” is being re-evaluated.
In the past, OpenAI gained structural advantages with a 30 GW long-term compute roadmap, but in a short period Anthropic aggressively secured compute from AWS, Google Cloud, Broadcom, and SpaceXAI, allowing its cumulative committed compute to catch up quickly. Even if Anthropic’s total is still below OpenAI’s 2030 target, Colossus 1 will come online in the near term, giving it immediate strength in expanding inference services.
Jukan further noted that this deal also carries strong strategic implications. Elon Musk remains a key adversary in the OpenAI lawsuit, yet at the same time he is handing 220k GPUs and 300 MW of compute capacity to Anthropic, one of OpenAI’s strongest competitors. In other words, Musk undermines OpenAI’s moral legitimacy on the legal battlefield while helping Anthropic capture OpenAI’s revenue and users on the market battlefield.
Why is xAI willing to hand over Colossus 1?
Jukan offered a more technical explanation: for xAI, Colossus 1 may not be an ideal training cluster, but it is well suited to being leased to Anthropic for inference.
The reason is that Colossus 1 mixes three generations of GPUs—H100, H200, and GB200. For large-scale distributed training, a heterogeneous architecture creates a severe “straggler effect”: after the fastest GPUs finish computing, the cluster still must wait for slower or error-prone GPUs to catch up before moving to the next step. The Information previously reported that xAI’s GPU utilization has been as low as 11%, a stark contrast with Meta and Google, which reportedly reach over 40% MFU (model FLOPs utilization).
But for inference, the problem is much smaller. Inference doesn’t require as tight synchronization across all GPUs as training does, and workloads can be more flexibly partitioned across different GPUs. Therefore, Jukan believes Colossus 1 might be inefficient as a training cluster, but as a single-tenant inference cluster, it can instead become a high-cash-flow asset.
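The synchronization argument above can be sketched with a toy model. The per-step times below are hypothetical numbers chosen only to illustrate the mechanism—they are not measured H100/H200/GB200 figures:

```python
# Toy model of the "straggler effect" in synchronous data-parallel
# training on a mixed-generation cluster. Step times are assumptions
# for illustration, not benchmarks.

step_time_s = {"H100": 1.0, "H200": 0.8, "GB200": 0.4}  # seconds per step (hypothetical)

# Synchronous training: every gradient sync waits for the slowest
# generation, so the cluster-wide step time is the maximum.
cluster_step_s = max(step_time_s.values())

for gen, t in sorted(step_time_s.items(), key=lambda kv: kv[1]):
    busy = t / cluster_step_s
    print(f"{gen}: busy {busy:.0%} of each step, idle {1 - busy:.0%}")

# Inference: requests are routed to GPUs independently, so each
# generation serves at its own pace and nothing waits on a straggler.
```

Under these assumed numbers, the fastest generation sits idle 60% of every training step, while in an inference deployment the same GPUs would all stay busy serving independent requests.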
Jukan’s core judgment is that Musk isn’t giving the best training assets to a competitor; he’s leasing out Colossus 1 with a mixed architecture and comparatively lower training efficiency, while keeping Colossus 2—more suitable for training frontier models—for himself. In other words, xAI turns “training pain points” into “inference rental.”
He estimates that at about $2.6 per GPU-hour, leasing Colossus 1 to Anthropic could bring xAI/SpaceXAI about $5.0 to $6.0 billion in annual revenue. Other market estimates are more conservative: Fortune, citing New Street Research analyst Antoine Chkaiban, reported that the deal may bring SpaceX $3.0 to $4.0 billion in annual revenue and more than $2.5 billion in cash profit.
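Jukan’s figure is easy to sanity-check as a back-of-the-envelope calculation, assuming (as a simplification) a flat $2.6 per GPU-hour, all ~220k GPUs rented, and full utilization year-round—real utilization would be lower:

```python
# Rough check of the ~$5B annual revenue estimate.
# Assumptions: flat $2.6/GPU-hour, ~220k GPUs, 100% utilization.

gpus = 220_000
usd_per_gpu_hour = 2.6
hours_per_year = 24 * 365          # 8,760

annual_revenue_usd = gpus * usd_per_gpu_hour * hours_per_year
print(f"~${annual_revenue_usd / 1e9:.1f}B per year")  # ~$5.0B at full utilization
```

This lands right at the bottom of Jukan’s $5.0–6.0 billion range, which suggests his estimate bakes in near-full utilization; the more conservative $3.0–4.0 billion figures implicitly assume lower utilization or pricing.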
This is crucial for SpaceXAI’s public-listing narrative. Jukan believes investors don’t like “a money-burning AGI lab,” but they will like “an AI infrastructure landlord that collects stable rent.” If SpaceXAI can prove before the IPO that xAI is not just a research and development arm, but a new cloud platform that can convert idle compute into high-yield rental income, the market’s perception of its capital costs and valuation narrative could change completely.
This article, “GPU utilization as low as 11%: did Musk rent compute to Anthropic as valuation packaging ahead of SpaceX’s IPO?”, first appeared on Liannews ABMedia.