What’s going on with restaking?
Written by: Kydo, Head of Narrative at EigenCloud
Translated by: Saoirse, Foresight News
From time to time, friends send me sarcastic tweets about restaking, but none of these criticisms really hit the mark. So I decided to write a reflective “rant” myself.
You might think I’m too close to the matter to be objective, or too proud to admit “we miscalculated.” You might believe that even if everyone has declared “restaking has failed,” I’d still write a long essay to defend it, never uttering the word “failure.”
These views are all reasonable, and many have some truth to them.
But this article simply aims to objectively present the facts: what actually happened, what was accomplished, what wasn’t, and what lessons we learned.
I hope the experiences shared here are universal enough to serve as a reference for developers in other ecosystems.
After more than two years of integrating all major AVSs (Actively Validated Services) on EigenLayer and designing EigenCloud, I want to honestly review what we did wrong, what we did right, and where we’re headed next.
What exactly is restaking?
The fact that I still need to explicitly explain “what restaking is” shows that, even when restaking was an industry focal point, we failed to communicate it clearly. This is “Lesson 0”—focus on a core narrative and repeat it consistently.
The Eigen team’s goal has always been “simple in concept, hard in execution”: to enable people to build applications on-chain more securely by making off-chain computation verifiable.
AVS was our first and most explicit attempt at this.
An AVS (Actively Validated Service) is a proof-of-stake (PoS) network in which a decentralized set of operators performs off-chain tasks. The operators' actions are monitored, and if they misbehave, their staked assets are slashed. For this slashing mechanism to work, there must be staked capital backing it.
This is the value of restaking: instead of every AVS building its security from scratch, restaking allows the reuse of already-staked ETH to secure multiple AVSs. This not only lowers capital costs but also speeds up ecosystem bootstrapping.
So, the conceptual framework of restaking can be summarized as:
AVS: the “service layer,” the foundation for a new type of PoS cryptoeconomic security system.
Restaking: the “capital layer,” providing security for these systems by reusing existing staked assets.
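The economics of that two-layer framework can be sketched in a few lines of code. This is a toy model, not EigenLayer's actual API: one pool of staked ETH secures multiple AVSs at no extra capital cost, and a slashing event on any one service reduces the stake backing all of them. All names and numbers are illustrative.

```python
# Toy model of restaking: one pool of staked ETH secures multiple
# AVSs, and misbehavior on any AVS slashes that same shared stake.

class Operator:
    def __init__(self, staked_eth: float):
        self.staked_eth = staked_eth
        self.avss = set()

    def opt_in(self, avs_name: str):
        # The same stake is reused: opting into another AVS
        # requires no additional capital.
        self.avss.add(avs_name)

    def slash(self, fraction: float):
        # A slashing event burns a fraction of the operator's stake,
        # reducing the security backing *every* AVS it serves.
        self.staked_eth *= (1.0 - fraction)

op = Operator(staked_eth=32.0)
op.opt_in("oracle-avs")
op.opt_in("da-avs")
op.slash(0.10)        # misbehave on one service
print(op.staked_eth)  # the remaining stake now backs both AVSs
```

The point of the sketch is the coupling: capital efficiency comes precisely from the fact that the same slashable stake stands behind every service the operator opts into.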
I still think this is a brilliant idea, but reality didn’t play out as the diagram suggested—many outcomes fell short of expectations.
What didn’t go as planned
We didn’t want just “any kind of verifiable computation”; we insisted on systems that were decentralized from day one, slashing-based, and fully cryptoeconomically secure.
We hoped AVSs would become “infrastructure services”—just like developers can build SaaS (Software as a Service), anyone could build an AVS.
This principled positioning drastically narrowed our potential developer base.
The result: our target market was “small in scale, slow in progress, high in barrier”—few potential users, high implementation costs, and long timelines for both the team and developers. Whether it was EigenLayer’s infrastructure, developer tools, or each AVS on top, everything took months or even years to build.
Fast-forward nearly three years: we now have only two major AVSs running in production—Infura’s DIN (Decentralized Infrastructure Network) and LayerZero’s EigenZero. This “adoption rate” is far from “broad.”
Honestly, the scenario we envisioned was “teams want cryptoeconomic security and decentralized operators from day one,” but the real market demand is for “more gradual, application-centric” solutions.
When we started, it was the peak of the "Gary Gensler era" (Note: Gary Gensler, SEC Chair at the time, took a tough stance on crypto). Several staking companies were under investigation and facing lawsuits.
As a “restaking project,” almost every public statement we made could be interpreted as an “investment promise” or “yield advertisement”—potentially attracting subpoenas.
This regulatory fog dictated our communication style: we couldn’t speak freely, even when facing overwhelming negative coverage, being scapegoated by partners, or public backlash—we couldn’t clarify misunderstandings in real time.
We couldn’t even casually say “That’s not how things are”—we had to first weigh legal risks.
As a result, we launched a locked token without adequate communication—looking back, this was indeed risky.
If you ever felt “the Eigen team was evasive or unusually silent” on something, it was probably due to this regulatory climate—even a single wrong tweet could have significant consequences.
Eigen’s early brand influence largely came from Sreeram (core team member)—his energy, optimism, and belief that both systems and people can improve earned huge goodwill.
Billions in staked capital reinforced that trust.
But our joint promotion of the initial batch of AVSs didn’t match this “brand stature.” Many early AVSs made a lot of noise, simply chasing industry trends—they were neither “technically best” nor “most trustworthy” AVS examples.
Over time, people began associating “EigenLayer” with “the latest liquidity mining or airdrop.” Much of today’s skepticism, fatigue, and even aversion traces back to that phase.
If we could do it again, I’d prefer to start with “fewer but higher-quality AVSs,” be more selective with partners who get brand endorsement, and accept “slower, less hyped” promotion.
We tried to build a “perfect, general-purpose slashing system”—universal, flexible, and able to cover all slashing scenarios to achieve “minimal trust.”
But in practice, this led to slow product iteration and required massive time to explain a mechanism “most people weren’t ready to understand.” Even now, we still have to repeatedly educate people about the slashing system we launched almost a year ago.
In hindsight, a better path would’ve been to launch with a simple slashing scheme, let different AVSs try focused models, and gradually increase complexity. But we put “complex design” up front, paying a price in both “speed” and “clarity.”
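To make the "simple slashing scheme first" idea concrete, here is what one narrowly scoped slashing condition might look like: penalizing an operator only for equivocation, i.e. signing two conflicting results for the same task. This is hypothetical pseudocode for illustration, not EigenLayer's actual design, and every name in it is invented.

```python
# A single, focused slashing condition: equivocation detection.
# An operator who signs two *different* messages for the same task
# is slashable; everything else is out of scope.

SLASH_FRACTION = 0.05  # illustrative penalty

def check_equivocation(signatures: dict, task_id: str,
                       operator: str, message: str) -> bool:
    """Record a signed message; return True if this operator already
    signed a different message for the same task (slashable)."""
    key = (task_id, operator)
    prev = signatures.get(key)
    if prev is not None and prev != message:
        return True  # conflicting signature: trigger slashing
    signatures[key] = message
    return False

sigs = {}
assert not check_equivocation(sigs, "task-1", "op-A", "result-X")
assert not check_equivocation(sigs, "task-1", "op-A", "result-X")  # same msg, fine
assert check_equivocation(sigs, "task-1", "op-A", "result-Y")      # conflict
```

A scheme this small is easy to explain, easy to audit, and leaves room to layer on complexity later, which is the trade-off the paragraph above argues for.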
What we did accomplish
People love to slap a “failure” label on things, but that’s an oversimplification.
In the “restaking” chapter, many things were actually done very well, and these achievements are critical for our future direction.
We prefer “win-win,” but we’re never afraid of competition—if we enter a market, we aim to lead.
In restaking, Paradigm and Lido jointly backed our direct competitor. At the time, EigenLayer’s TVL was under $1 billion.
The competitor had narrative momentum, channels, capital, and “default trust.” Many told me, “Their team will out-execute and crush you.” Reality proved otherwise—now we hold 95% of the restaking capital market and attract 100% of top developers.
In Data Availability (DA), we started later, with a smaller and less-funded team, while the incumbent had a first-mover advantage and a strong marketing machine. Yet today, by any key metric, EigenDA (Eigen’s DA solution) holds a major share of the DA market; with our largest partner coming fully online, this share will grow exponentially.
Both markets were fiercely competitive, but in the end, we broke through.
Launching EigenDA on top of EigenLayer infrastructure was a huge surprise.
It became the cornerstone of EigenCloud and brought something Ethereum badly needed—a massive-scale DA channel. With it, rollups can maintain high throughput while staying within the Ethereum ecosystem, rather than leaving for other new chains for “speed.”
MegaETH launched because the team trusted Sreeram to help them break through DA bottlenecks; when Mantle first pitched building an L2 to BitDAO, it was for the same reason.
EigenDA also became Ethereum’s “defense shield”: with a high-throughput, native DA solution within the Ethereum ecosystem, outside chains have a harder time “grabbing attention with the Ethereum narrative while siphoning away ecosystem value.”
One of EigenLayer’s early core topics was how to unlock Ethereum pre-confirmation via EigenLayer.
Since then, pre-confirmation has gained much attention via the Base network, but implementation is still challenging.
To foster ecosystem growth, we co-launched the Commit-Boost initiative—to address the “lock-in effect” of pre-confirmation clients, building a neutral platform where anyone can innovate via validator commitments.
Now, billions of dollars flow through Commit-Boost, with over 35% of validators participating. As mainstream pre-confirmation services launch in the coming months, this ratio will increase further.
This is critical for Ethereum’s “antifragility” and lays the foundation for sustained pre-confirmation market innovation.
Over the years, we’ve safeguarded tens of billions of dollars.
That may sound unremarkable, even “boring”—but considering how many crypto infrastructure projects have “blown up” in various ways, this “boring” reliability is precious. To minimize risk, we built a robust operational security system, hired and trained a world-class security team, and embedded “adversarial thinking” into our culture.
This culture is essential for any business handling user funds, AI, or real-world systems. It can't be added later; it must be foundational from the start.
The restaking era had an underrated impact: a large amount of ETH flowed to LRT providers, preventing Lido from maintaining a long-term staking share above 33%.
This is crucial for Ethereum’s “social equilibrium.” If Lido held over 33% of staking long-term without reliable alternatives, it would spark major governance disputes and internal strife.
Restaking and LRT didn’t “magically achieve full decentralization,” but they did alter the trend toward staking centralization—a significant accomplishment.
Our biggest "gain" was conceptual: we validated the core thesis that "the world needs more verifiable systems," but we also realized that our previous implementation path was off.
The correct path isn’t “start from general cryptoeconomic security, insist on a fully decentralized operator set from day one, then wait for all businesses to plug in at that level.”
What really accelerates the “frontier” is giving developers direct tools to achieve verifiability for their specific use cases, and matching those tools with appropriate verification primitives. We need to “proactively meet developers’ needs,” not require them to become “protocol designers” from day one.
To that end, we’ve begun building internal modular services—EigenCompute (verifiable compute service) and EigenAI (verifiable AI service). Some features that take other teams hundreds of millions and years to deliver, we can launch in months.
Where we’re headed next
Given these experiences—timing, successes, failures, branding “scars”—how do we move forward?
Here’s a brief outline of our next steps and the logic behind them:
In the future, the entire EigenCloud and all products we build around it will center on the EIGEN token.
EIGEN token’s positioning:
The core economic security driver of EigenCloud.
The asset backing various risks assumed by the platform.
The core value-capture tool for all platform fee flows and economic activity.
Initially, many had expectations of “what value EIGEN could capture” that didn’t match the “actual mechanism”—this led to confusion. In the next phase, we’ll bridge that gap with concrete designs and live systems. More details will be announced soon.
Our core thesis is unchanged: to enable people to build applications on-chain more securely by making off-chain computation verifiable. But the tools to achieve “verifiability” will no longer be limited to just one.
Sometimes it may be cryptoeconomic security; sometimes ZK proofs, TEEs (Trusted Execution Environments), or hybrid approaches. The key is not “championing a single technology,” but making “verifiability” a standard primitive that developers can directly plug into their stack.
Our goal is to narrow the gap between two states:
From “I have an application” to “I have an application that users, partners, or regulators can verify.”
Given the current state of the industry, “cryptoeconomics + TEE” is clearly the best choice—it optimally balances “developer programmability” (what developers can build) and “security” (not just theoretical, but practical, deployable security).
In the future, when ZK proofs and other verification mechanisms mature and meet developer needs, we’ll integrate them into EigenCloud as well.
The biggest transformation in global computing right now is AI—especially AI agents. Crypto cannot stay on the sidelines.
AI agents are essentially “language models wrapped in tools, performing actions in specific environments.”
Today, not only are language models "black boxes," but the operational logic of AI agents is also opaque, forcing users to simply trust the developer, and that trust has already been exploited in hacks.
But if AI agents are verifiable, trust in developers is no longer necessary.
Making AI agents verifiable requires three conditions: verifiable LLM inference, verifiable compute environments for action execution, and a data layer that’s verifiable for storage, retrieval, and context understanding.
EigenCloud is designed for these scenarios:
EigenAI: Deterministic, verifiable LLM inference.
EigenCompute: Verifiable execution environments.
EigenDA: Verifiable data storage and retrieval.
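One way those three layers could compose into a single agent step is sketched below. The three services are represented by stand-in functions whose interfaces are entirely invented for illustration; a real system would use attestations or proofs rather than bare hashes.

```python
# How the three verifiability layers might chain together in one
# agent step: inference -> execution -> storage, each leaving a
# proof a third party can check instead of trusting the agent.
import hashlib

def verifiable_inference(prompt: str):
    # Stand-in for deterministic, attestable LLM inference (EigenAI).
    output = f"plan-for:{prompt}"
    return output, hashlib.sha256(output.encode()).hexdigest()

def verifiable_execute(plan: str):
    # Stand-in for running the plan in an attested compute
    # environment (EigenCompute).
    result = f"executed:{plan}"
    return result, hashlib.sha256(result.encode()).hexdigest()

def verifiable_store(record: str) -> str:
    # Stand-in for posting the record to a verifiable data layer
    # (EigenDA); returns a commitment anyone can recompute.
    return hashlib.sha256(record.encode()).hexdigest()

def agent_step(prompt: str) -> dict:
    plan, p1 = verifiable_inference(prompt)
    result, p2 = verifiable_execute(plan)
    commitment = verifiable_store(f"{plan}|{result}")
    return {"result": result, "proofs": [p1, p2],
            "commitment": commitment}

step = agent_step("rebalance portfolio")
```

The design point is that each stage emits its own verifiable artifact, so the chain of trust runs through the proofs rather than through the developer.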
We believe “verifiable AI agents” is one of the most competitive application scenarios for our “verifiable cloud”—so we’ve dedicated a team to focus deeply on this area.
To earn real yield, you must take real risk.
We’re exploring broader “staking use cases,” so staked capital can back the following types of risk:
Smart contract risk.
Various types of compute risk.
Any risk that can be clearly described and quantifiably priced.
Future yields will reflect “transparent, understandable risk actually undertaken,” not just chase “whatever liquidity mining is trending.”
This logic will naturally be embedded in EIGEN token’s use cases, backing scope, and value flow mechanisms.
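As a rough illustration of "yield reflects transparent, understandable risk," the fair compensation for stake backing a quantified risk can be modeled as the expected annual loss plus a margin. The formula and all numbers below are a generic actuarial sketch, not Eigen's actual pricing mechanism.

```python
# Expected-loss pricing sketch: stake backs a described, quantified
# risk, and the annual yield compensates the expected loss plus a
# risk margin. Purely illustrative numbers.

def fair_yield(coverage: float, annual_loss_prob: float,
               loss_given_event: float, margin: float = 0.2) -> float:
    """Annual premium (in the coverage asset) = expected loss
    scaled up by a risk margin."""
    expected_loss = coverage * annual_loss_prob * loss_given_event
    return expected_loss * (1.0 + margin)

# Stake backing $1M of smart-contract risk, assuming a 2% annual
# incident probability and 50% loss severity:
premium = fair_yield(1_000_000, 0.02, 0.5)
print(premium)
```

Under this framing, yield is an output of a risk model rather than an emissions schedule, which is the contrast the paragraph above draws against trend-chasing liquidity mining.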
Final thoughts
Restaking didn’t become the “universal layer” I (and others) once hoped for, but it hasn’t disappeared. Over a long development journey, it became what most “first-generation products” become:
An important chapter, a set of hard-earned lessons, and infrastructure now supporting broader businesses.
We still maintain restaking-related business and still value it—we’re just no longer confined by the original narrative.
If you’re a community member, AVS developer, or investor still associating Eigen with “that restaking project,” I hope this article gives you clarity on “what happened before” and “where we’re headed now.”
Today, we’re entering a market with a much larger Total Addressable Market (TAM): on one side cloud services, on the other direct developer-facing application layer demand. We’re also exploring the “underdeveloped AI track” and will execute these directions with our trademark intensity.
The team remains full of drive, and I can’t wait to prove all the doubters wrong—we can do it.
I’ve never been more bullish on Eigen, and I continue to increase my EIGEN holdings—I’ll keep doing so in the future.
We’re still just getting started.