Stop saying "AI bubble": it is actually multiple bubbles stacked on top of each other, with each layer bursting at different times.

The AI bubble will burst in three layers: shell applications go first, the model layer faces consolidation, and only infrastructure holds long-term value. Companies must focus on workflows and channels to survive. This article is adapted and compiled from a piece by WEKA AI lead Val Bercovici.

Table of Contents

  • Layer 3: Shell Companies (First to Fail)
  • Layer 2: Foundation Models (Mid-tier)
  • Layer 1: Infrastructure (Resilient)
  • The Cascade Effect: Why This Matters
  • What This Means for Developers
  • Conclusion

This is the question on everyone’s lips: are we in an AI bubble? But that’s the wrong question. The real question is: which AI bubble are we in, and when will each one burst?

The debate over whether AI is a genuinely transformative technology or an economic time bomb has reached a boiling point. Even tech leaders like Meta CEO Mark Zuckerberg acknowledge signs of an unstable financial bubble forming around AI. OpenAI CEO Sam Altman and Microsoft co-founder Bill Gates also see clear bubble dynamics:

Overexcited investors, inflated valuations, and numerous doomed projects — but they still believe AI will ultimately change the economy.

However, viewing “AI” as a single entity destined to collapse is fundamentally misleading. The AI ecosystem actually consists of three distinct layers, each with its own economic model, defensibility, and risk profile. Understanding these layers is crucial because they will not all burst at the same time.

Layer 3: Shell Companies (First to Fail)

The most fragile link isn’t in developing AI itself, but in “repackaging” it.

These companies connect to OpenAI’s API, add a sleek interface and some prompt engineering, then charge $49 per month for what is essentially “glorified ChatGPT.” Some found quick early success, like Jasper.ai, which packaged GPT models into a user-friendly interface for marketers and reached about $42 million in annual recurring revenue (ARR) in its first year.
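To make concrete how thin this layer can be, here is a minimal sketch of what such a wrapper often amounts to, assuming the official `openai` Python SDK; the prompt template, model name, and product framing are hypothetical illustrations rather than any specific company’s code.

```python
# Minimal sketch of a "shell" AI product: one prompt template plus one API call.
# Assumes the official `openai` Python SDK (v1+); the model name and marketing
# prompt below are hypothetical placeholders, not a real product's internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MARKETING_PROMPT = (
    "You are an expert marketing copywriter. "
    "Write a short, punchy product description for: {product}"
)

def generate_copy(product: str) -> str:
    """Essentially the entire 'proprietary' layer: format a prompt, call the API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any hosted chat model would do
        messages=[{"role": "user", "content": MARKETING_PROMPT.format(product=product)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_copy("a $49-per-month AI writing assistant"))
```

Everything around that single call (the interface, billing, and onboarding) is straightforward for a platform owner to replicate, which is exactly why the feature can be absorbed.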

But cracks are already appearing. These businesses face threats from all sides:

  • Functionality being absorbed: Microsoft could integrate your $50/month AI writing tool into Office 365 tomorrow. Google could turn your AI email assistant into a free Gmail feature. Salesforce could embed your AI sales tool directly into their CRM. When big platforms decide your product is just a “feature” rather than a “standalone product,” your business model can evaporate overnight.

  • Mediocrity trap: Shell companies essentially just pass inputs and outputs through to the underlying model. If OpenAI improves its own default prompting and interfaces, the value of these tools shrinks overnight. As foundation models become more capable and prices keep falling, margins get squeezed toward zero.

  • Zero switching costs: Most shell companies lack proprietary data, embedded workflows, or deep integrations. Customers can switch to competitors or directly use ChatGPT in minutes. There are no moats, no lock-in effects, no defensibility.

The “white-label AI” market exemplifies this fragility. Companies using white-label platforms face vendor lock-in risks from proprietary systems and API restrictions that could hinder integration. These firms are built on rented land; landlords can change terms or demolish the property at any time.

Exception: Cursor is a rare shell company that has built genuine defensibility. By deeply integrating into developers’ workflows, creating proprietary features beyond simple API calls, and establishing strong network effects through user habits and customization, Cursor shows how shell tools can evolve into more substantial products. But companies like Cursor are the minority; most shell companies lack this level of workflow integration and user stickiness.

  • Timeline: Expect large-scale failures from late 2025 through 2026, as big platforms absorb these features and users realize they are paying a premium for mediocrity.

Layer 2: Foundation Models (Mid-tier)

Companies that develop large language models (LLMs), such as OpenAI, Anthropic, and Mistral, occupy a more defensible but still unstable position.

Economist Richard Bernstein cites OpenAI as an example of bubble dynamics, noting that the company has committed to roughly $1 trillion worth of AI deals (including $500 billion in data center projects) while its revenue is projected at only about $13 billion. Bernstein argues that this disconnect between investment and plausible returns “looks indeed bubble-like.”

However, these companies possess real technological moats: expertise in model training, access to compute, and performance advantages. The question is whether those advantages are durable, or whether models commoditize to the point where foundation model providers become low-margin infrastructure vendors.

Engineering will determine the winners: as foundation models converge on baseline capability, competitive advantage will increasingly come from inference optimization and systems engineering. Companies that can break through the memory wall with extended KV-cache architectures, sustain high token throughput, and cut latency will command premium pricing and market share.

The winners won’t simply be those who train at the largest scale, but those who can make AI inference economically viable at scale. Breakthroughs in memory management, caching strategies, and infrastructure efficiency will determine which frontier labs survive the consolidation wave.
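A hedged back-of-envelope calculation shows why the memory wall matters here. The model parameters below (80 layers, 8 grouped-query KV heads, head dimension 128, FP16 values) are illustrative assumptions for a 70B-class model, not figures from the article.

```python
# Back-of-envelope KV-cache sizing: why long contexts run into a "memory wall".
# All model parameters are illustrative assumptions for a 70B-class model with
# grouped-query attention, not numbers taken from the article.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   context_tokens: int, bytes_per_value: int = 2) -> int:
    """KV cache for one sequence: 2 (K and V) x layers x KV heads x head_dim x tokens."""
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_value * context_tokens

per_token = kv_cache_bytes(num_layers=80, num_kv_heads=8, head_dim=128, context_tokens=1)
per_seq_128k = kv_cache_bytes(num_layers=80, num_kv_heads=8, head_dim=128,
                              context_tokens=128 * 1024)

print(f"KV cache per token:           {per_token / 1024:.0f} KiB")      # ~320 KiB
print(f"KV cache per 128K-token seq.: {per_seq_128k / 2**30:.1f} GiB")  # ~40 GiB
```

Under these assumptions a single 128K-token conversation pins roughly 40 GiB of cache, so only a few concurrent long-context users fit in the HBM left over after the model weights; that is why extended KV-cache architectures and cache offloading decide who can serve inference profitably.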

Another concern is the circularity of these investments. For example, Nvidia invests in funding OpenAI’s data centers, which in turn buy Nvidia chips. In effect, Nvidia is subsidizing one of its biggest customers, which may artificially inflate apparent AI demand.

Nevertheless, these companies have massive capital backing, genuine technological strength, and strategic partnerships with major cloud providers and enterprises. Some will merge, some will be acquired, but this category will persist.

  • Timeline: Mergers and acquisitions from 2026 to 2028, leading to 2-3 dominant players, with smaller model providers being acquired or shuttered.

Layer 1: Infrastructure (Resilient)

Here’s a counterintuitive view: the infrastructure layer—including Nvidia, data centers, cloud providers, memory systems, and AI-optimized storage—is the least bubble-prone part of the AI boom.

Yes, recent estimates put global AI capital expenditure and venture investment in 2025 above $600 billion, and Gartner estimates total AI-related spending could surpass $1.5 trillion. That sounds like a bubble.

But infrastructure has a key characteristic: regardless of which specific application succeeds, it retains value. Fiber optic cables laid during the dot-com bubble weren’t wasted; they enabled YouTube, Netflix, and cloud computing later on.

Twenty-five years ago, the initial dot-com bubble burst after debt-financed fiber deployments, but the future arrived anyway, and infrastructure was waiting.

Despite stock price pressure, Nvidia’s fiscal Q3 2026 revenue (the quarter ending October 2025) hit about $57 billion, up 22% quarter-over-quarter and 62% year-over-year, with data centers alone generating around $51.2 billion. These aren’t vanity metrics; they reflect real demand from companies investing in foundational infrastructure.

The chips, data centers, memory systems, and storage architectures being built today will support whatever AI applications succeed in the future, whether today’s chatbots, tomorrow’s autonomous agents, or uses not yet imagined. Unlike commodity storage, modern AI infrastructure spans the entire memory hierarchy, from GPU HBM to DRAM to high-performance storage systems acting as inference “token warehouses.” That integration represents a genuine architectural innovation, not just commoditized competition.
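One way to picture the “token warehouse” idea is a tiered KV cache that spills from HBM to DRAM to storage. The sketch below is a deliberately simplified, hypothetical model (dict-backed tiers, LRU-style spill-down), not any vendor’s actual implementation.

```python
# Conceptual sketch of a tiered KV cache spanning the memory hierarchy the
# article describes: GPU HBM -> DRAM -> storage. Tier capacities, dict-backed
# stores, and whole-sequence granularity are hypothetical simplifications.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, hbm_slots: int, dram_slots: int):
        # Each tier maps sequence id -> KV blocks; capacities are in sequences.
        self.tiers = [
            ("HBM", OrderedDict(), hbm_slots),
            ("DRAM", OrderedDict(), dram_slots),
            ("storage", OrderedDict(), None),  # treated as unbounded here
        ]

    def put(self, seq_id: str, kv_blocks: bytes) -> None:
        """Store freshly computed KV blocks in the fastest tier."""
        self._insert(0, seq_id, kv_blocks)

    def get(self, seq_id: str):
        """Return (tier_name, kv_blocks), promoting hits back to HBM; miss -> (None, None)."""
        for tier_name, store, _ in self.tiers:
            if seq_id in store:
                kv = store.pop(seq_id)
                self._insert(0, seq_id, kv)
                return tier_name, kv
        return None, None  # miss: the KV blocks must be recomputed (prefill)

    def _insert(self, tier_idx: int, seq_id: str, kv: bytes) -> None:
        _name, store, capacity = self.tiers[tier_idx]
        store[seq_id] = kv
        # When a tier overflows, demote its least-recently-used sequence downward.
        while capacity is not None and len(store) > capacity:
            old_id, old_kv = store.popitem(last=False)
            self._insert(tier_idx + 1, old_id, old_kv)

# Example: two HBM slots and two DRAM slots; older sequences spill toward storage.
cache = TieredKVCache(hbm_slots=2, dram_slots=2)
for i in range(5):
    cache.put(f"seq-{i}", b"...kv blocks...")
print(cache.get("seq-0"))  # served from storage, then promoted back to HBM
```

The design point is that a miss forces an expensive prefill recomputation, so retrieving cached KV blocks from anywhere in the hierarchy, even slower storage, is usually cheaper than recomputing them; that is the economic argument for treating storage as part of the inference path rather than as a commodity afterthought.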

  • Timeline: Short-term overbuilding and some inefficient buildout are likely in 2026, but long-term value will be preserved as AI workloads expand over the next decade.

The Cascade Effect: Why This Matters

The current AI boom won’t end with a dramatic crash. Instead, we’ll see a chain of failures starting from the most vulnerable companies, with early warning signs already visible.

  • Phase 1: Shell and white-label companies face margin compression and feature absorption. Hundreds of undifferentiated AI startups will shut down or be sold cheaply. More than 1,300 AI startups are valued at over $100 million, including 498 “unicorns” valued at over $1 billion, and many will struggle to justify those valuations.

  • Phase 2: As performance converges, foundational models will begin consolidating, with only the best-funded players surviving. Expect 3-5 major acquisitions by tech giants absorbing promising model companies.

  • Phase 3: Infrastructure spending normalizes but remains high. Some data centers may sit idle for years (like fiber in 2002), but as AI workloads truly expand, they will eventually be filled.

What This Means for Developers

The biggest risk isn’t becoming a shell company; it’s staying in the shell stage. If you own the user experience, you own the users. If you’re building at the application layer, you need to move up the stack:

  1. From shell → application layer: Don’t just generate outputs. Master the workflows before and after AI interactions.

  2. From application → vertical SaaS: Build an execution layer that forces users to stay within your product. Create proprietary data, deep integrations, and workflow ownership to make switching painful.

  3. Channel moats: Your true advantage isn’t LLMs but how you acquire, retain, and expand user activity within your platform. Successful AI companies aren’t just software firms—they’re channel companies.


Conclusion

It’s time to stop asking whether we’re in “the” AI bubble. We are in multiple bubbles, each with its own characteristics and timeline.

Shell companies will burst first, possibly within 18 months. Foundation models will consolidate over the next two to four years. And I predict that current infrastructure investments will ultimately prove rational over the long term, despite some short-term overbuilding pain.

This isn’t pessimism; it’s a blueprint. Knowing which layer you’re in and which bubble you might be caught in is the difference between becoming the next victim and building a resilient business that can weather the shakeout.

The AI revolution is real. But not every company riding the wave will reach the other side.
