Article by Lao Bai
After two years, V has posted on Twitter once again, on the same date as two years ago: February 10th.
Two years ago, Vitalik subtly signaled that he wasn’t very optimistic about the various Crypto Helps AI trends we were discussing at the time. The prevailing “three horsemen” in the community then were compute assetization, data assetization, and model assetization. My research report from two years ago mainly discussed phenomena and doubts I had observed in these three areas in the primary market. Vitalik, for his part, still favored AI Helps Crypto.
He gave several examples at the time:
AI as a participant in games;
AI as a game interface;
AI as game rules;
AI as game objectives.
Over the past two years, we’ve tried many approaches to Crypto Helps AI, but with limited results. Many projects and tracks are just about issuing tokens without real product-market fit (PMF). I call this the “tokenization illusion.”
Computing Power Assetization – Most networks cannot provide enterprise-grade SLAs; nodes are unstable and frequently drop offline. They can only handle inference for small-to-medium models, mostly serve edge markets, and their revenue is not tied to the token…
Data Assetization – Friction on the supply side (retail users) is high, willingness is low, and quality is uncertain. On the demand side (enterprises), what’s needed is structured, context-dependent, trustworthy data from legally accountable providers, which DAO-based Web3 projects find hard to deliver.
Model Assetization – Models are inherently non-scarce, replicable, fine-tunable, and quickly depreciating process assets, not end-state assets. Hugging Face is more of a collaboration and dissemination platform (a GitHub for ML) than an app store for models. So attempts to tokenize models via a “decentralized Hugging Face” have mostly ended in failure.
Additionally, over these two years we’ve experimented with various flavors of “verifiable inference,” but it has been a hammer in search of nails: from ZKML to OPML, game-theoretic schemes, and even EigenLayer pivoting its Restaking narrative into Verifiable AI.
But the core issue mirrors what happens in the Restaking track itself: few AVSs are willing to keep paying a premium for verifiable security.
Similarly, verifiable inference mostly verifies things nobody actually needs verified. The threat model on the demand side is extremely vague: who exactly are we defending against?
AI output errors (model capability issues) are far more common than malicious tampering with AI outputs (adversarial attacks). Recent security incidents around OpenClaw and Moltbook have shown that the real problems stem from:
Poor strategy design;
Over-permissioning;
Unclear boundaries;
Unexpected interactions between tools;
…
The idea of “model tampering” or “malicious rewriting of inference processes” is mostly a fantasy.
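The fix for those failure modes is boring engineering rather than cryptography. As a purely illustrative toy (the class and tool names below are my assumptions, not any real agent framework's API), a deny-by-default permission gate on tool calls addresses over-permissioning and unclear boundaries directly:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSandbox:
    """Toy sketch: deny-by-default tool gating for an AI agent."""
    allowed_tools: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)  # every attempt, allowed or not

    def call_tool(self, tool: str, *args):
        self.audit_log.append((tool, args))        # log before deciding
        if tool not in self.allowed_tools:
            raise PermissionError(f"tool '{tool}' not in allowlist")
        return f"ran {tool}"

sandbox = AgentSandbox(allowed_tools={"read_file"})
print(sandbox.call_tool("read_file", "notes.txt"))  # allowed
try:
    sandbox.call_tool("send_funds", "0xabc", 100)   # denied by default
except PermissionError as e:
    print("blocked:", e)
```

The point of the sketch is the shape, not the code: scoping what an agent may do catches the common failure modes that no amount of inference verification would.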
Last year, I shared this diagram—perhaps some old friends remember it.
This time, Vitalik’s ideas are clearly more mature than two years ago, thanks to progress in privacy tech, x402, ERC-8004, prediction markets, and other areas.
We can see that his current four quadrants are divided into two halves: one for AI Helps Crypto and the other for Crypto Helps AI, no longer skewed toward the former as it was two years ago.
Top-left and bottom-left—using Ethereum’s decentralization and transparency to solve trust and economic collaboration issues in AI:
Enabling trustless and private AI interaction (Infrastructure + Survival): Using ZK, FHE, and similar technologies to ensure privacy and verifiability in AI interactions (I’m not sure whether this counts as the verifiable inference mentioned earlier).
Ethereum as an economic layer for AI (Infrastructure + Prosperity): Allowing AI agents to perform economic transactions, recruit other bots, pay deposits, or build reputation systems via Ethereum, creating a decentralized AI architecture beyond single giant platforms.
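As a thought experiment, the “economic layer” idea above can be reduced to a toy in-memory ledger: one agent posts funds, hires another into escrow, and reputation accrues on settlement. The names and mechanics below are illustrative assumptions of mine, not the x402 or ERC-8004 interfaces:

```python
class Ledger:
    """Toy sketch of agents transacting: deposits, escrowed hiring, reputation."""

    def __init__(self):
        self.balances = {}
        self.reputation = {}
        self.escrow = {}

    def deposit(self, agent, amount):
        self.balances[agent] = self.balances.get(agent, 0) + amount

    def hire(self, client, worker, fee):
        # Client locks the fee in escrow before the worker starts.
        if self.balances.get(client, 0) < fee:
            raise ValueError("insufficient balance for deposit")
        self.balances[client] -= fee
        job_id = len(self.escrow)
        self.escrow[job_id] = (client, worker, fee)
        return job_id

    def settle(self, job_id, success):
        # Pay and credit reputation on success; refund and penalize on failure.
        client, worker, fee = self.escrow.pop(job_id)
        if success:
            self.balances[worker] = self.balances.get(worker, 0) + fee
            self.reputation[worker] = self.reputation.get(worker, 0) + 1
        else:
            self.balances[client] += fee
            self.reputation[worker] = self.reputation.get(worker, 0) - 1

ledger = Ledger()
ledger.deposit("agent_a", 100)
job = ledger.hire("agent_a", "agent_b", 30)
ledger.settle(job, success=True)
print(ledger.balances["agent_b"], ledger.reputation["agent_b"])  # prints: 30 1
```

On a real chain these would be contract calls with on-chain state; the sketch only shows why escrow plus reputation lets agents transact without a single giant platform mediating.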
Top-right and bottom-right—leveraging AI’s intelligence to optimize user experience, efficiency, and governance in crypto ecosystems:
Cypherpunk mountain man vision with local LLMs (Impact + Survival): AI as a “shield” and interface for users. For example, local LLMs can automatically audit smart contracts, verify transactions, reduce reliance on centralized frontends, and safeguard personal digital sovereignty.
Making better markets and governance a reality (Impact + Prosperity): Deep AI participation in prediction markets and DAO governance. AI can act as an efficient participant, processing large amounts of information to amplify human judgment, solving issues like limited human attention, high decision costs, information overload, and voter apathy.
Previously, we were eager to make Crypto Helps AI happen while Vitalik stood on the other side. Now we’ve finally met in the middle, though it seems unrelated to the various tokenization plays or AI Layer1 projects. Hopefully, looking back at this post in two years, there will be new directions and surprises.