After the OpenClaw Frenzy, Before Citrini's 2028 Doomsday: Where Will Agentic Commerce Go?

TechubNews

Written by: Charlie Little Sun

The recent buzz around OpenClaw isn’t because it answers questions more like a human, but because it starts “taking action for you.” From “help me think” to “I’ll go do it,” the shift isn’t just a UI upgrade but a complete change in risk structure: when software can call tools, modify states, access accounts and permissions, it ceases to be just an assistant and becomes a potential economic actor.

So the timing of Nearcon 2026 is particularly fitting. NEAR has long branded itself as “the chain of the AI era,” and Illia Polosukhin isn’t just any AI founder—he’s one of the co-authors of “Attention Is All You Need.” The evolution from that paper to today’s agent-based systems makes Illia one of the most authoritative voices on the Transformer lineage.

When OpenClaw reignited the term “agentic commerce,” everyone surely wanted to see what NEAR would announce at Nearcon and how it plans to embed “agents capable of acting” into transaction and privacy frameworks.

More subtly, the OpenClaw episode also surfaced an undignified but very real reminder: a Meta AI safety/alignment researcher asked an agent to help organize their email, with a clear verbal boundary of "don't execute without confirmation." As the agent grew more fluent with its toolchain, it began batch-deleting emails, and in the end the person had to stop it manually at the keyboard. (This isn't about criticizing the individual; it illustrates a general problem: once an agent is running, it can slip out of control.) Deleted emails can be recovered; but when an agent moves money, permissions, or contracts, "undo" is not straightforward.

Midway through Nearcon, Citrini Research's "2028 GIC" report swept through the feeds. The report says "2028," but the market seems to read it as "tomorrow morning." You can clearly feel the sentiment spilling from the tech circle into the secondary market: SaaS, traditional finance, and payments, businesses that profit from process and friction, are suddenly being revalued. Visa and Mastercard are being singled out and sold off, not because they will fail tomorrow, but because for the first time the market is seriously pricing in a mechanism: when both buyers and sellers have agents, many profit pools sustained by human inefficiency may shrink.

So yesterday, three things converged: OpenClaw made agentic capability credible; the accidental email deletion exposed how fragile control is; and Citrini pushed profit-pool pressure into market pricing. Against that backdrop, discussing agentic commerce at Nearcon, and whether it is merely well articulated or actually implemented, becomes a genuine test.

Illia’s statement that “business is compressing” is correct, but not enough.

His opening keynote resonated with me: AI evolving from backend functions to chat, to executable agents, to multi-agent collaboration. When software reaches the stage of “my agent talking to your agent,” it ceases to be just a tool; it begins to act as a participant—negotiating, hiring, coordinating, paying. In other words, software starts to become an economic entity.

He used a phrase: “commerce is compressing.”

This phrase is precise because it’s not just a vague future vision but highlights our daily pain points: the internet is a collection of islands. Each website has its own login, forms, and checkout. Jumping between pages and repeatedly entering information essentially makes you the “human middleware” that stitches these fragmented systems together. (Many don’t realize that one of the most expensive resources on the modern internet is “your attention,” which you waste on repetitive input every day.)

Illia envisions a future where: you express your intent, and the system executes it—intent-driven execution. You say, “I want to move to San Francisco,” and an agent breaks down the task, asks preferences, and pushes forward. It sounds great, and I believe the direction is right.

But Illia is more honest than many crypto narratives: he doesn’t shy away from the “transparency” trap. He directly states that on-chain transparency is often anti-human in daily life. When you look for a house, hire movers, pay tuition, or medical bills, making balances, counterparties, and transaction details public turns life into a permanently indexable ledger. Most people don’t want this “freedom.”

That's why Nearcon leaned so heavily on privacy: near.com as an entry point that spares users from thinking about chains and gas, plus a confidential mode that treats the privacy of balances, transfers, and transactions as a first-class citizen. I give this high marks, not because privacy sounds advanced, but because it confronts a real threshold: before you can let an agent spend your money, people have to be willing to put their money in at all.

Citrini’s discussion of “where money comes from” is provocative, but Nearcon makes me more concerned about “who bears the responsibility when things go wrong.”

Why did Citrini’s article stir the market? Because it translated agentic commerce into profit pool language: if agents handle search, price comparison, negotiation, ordering, reconciliation, and refunds, then those rent-seeking steps based on “human friction” will be squeezed. I don’t oppose this direction.

But what makes me more cautious about Nearcon is that not all business friction is bad friction. Many frictions serve the “trust-building” function: anti-fraud measures, permission controls, responsibility allocation, dispute resolution, audit trails, privacy boundaries—these may seem annoying, but they enable business to operate.

Removing humans from processes doesn’t eliminate these costs; it just shifts them elsewhere—making them harder to explain, harder to price, and more prone to major failures.

That’s why I increasingly dislike the simplified formula: agent + stablecoin = agentic commerce. Stablecoins are important; making settlements programmable is a fundamental infrastructure change. But stablecoins only address “how money moves,” not “why money can move, who permits it, what happens if it moves wrongly, who is responsible, how to pursue accountability, or how to compensate.”

The real value of Nearcon is that it attempts to fill that missing layer: intent routing, privacy execution, architectural security, and an accessible entry point. It’s less about selling a “smarter agent” and more about saying: to make agents into economic actors, you first need to build a solid commercial foundation.

The example of “moving to San Francisco” is clever but also risky.

Illia’s personal example of moving is quite relatable. It’s not a toy task: long chain, many stakeholders, large amounts, many details—these expose exactly where the agent gets stuck.

But because the example is real, it exposes the problems more plainly. The hardest parts of moving aren't the button presses; they are three thornier issues.

First is responsibility. Who signs the contracts, pays deposits, hires service providers? Who is responsible if disputes arise? “My agent hiring your agent” sounds futuristic, but if the service fails, goods don’t arrive, or terms are violated, it quickly becomes legal language. Real business isn’t just “execution,” it’s “execution plus survival.”

Second is boundaries. Moving isn’t just a single command; it involves micro-approvals: how much can I spend without asking; what info can be shared with which vendors; which terms require my confirmation; which payments are irreversible and need secondary approval. The story of Meta’s accidental email deletion is striking because it reminds us: you think you set boundaries, but the system may not “remember” them. When it deletes emails or code, you can recover; but when it moves money, you’re not “rolling back” an action—you’re “rolling back trust.”

Third is compliance and anti-automation. Many real-world business systems incorporate anti-bot measures: CAPTCHAs, risk controls, KYC processes. Illia mentions the need for new intent-based APIs and more neutral execution layers that can be combined, rather than being blocked by anti-bot mechanisms like Cloudflare. This implies that today’s internet is designed for human interaction, not for agent-based transactions. To turn agents into economic actors, you need to rewrite a layer of “machine-friendly” business interfaces.

Without solving these three issues, agentic commerce remains a “futuristic” video concept. Solve them, and it becomes something uncomfortable but practical—like payments, risk controls, and all foundational infrastructure.

George Zeng, Head of Near AI (and a former South Park Commons member), finally made me feel someone is treating agents as production systems.

His core point is simple: many agent frameworks running in production today are inadequate; they expose keys, lack network controls, and have no defense against prompt injection. Prompt injection isn't a rumor about "models misbehaving"; it's an attack vector at the workflow level: agents read untrusted web pages, emails, and PDFs, and hidden instructions buried in that content can induce tool calls, leak information, or trigger erroneous actions. If the agent holds real permissions, this chain becomes dangerous.
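The workflow-level defense this implies can be made concrete. Below is a minimal, illustrative sketch (all names hypothetical, not any real framework's API): a tool-call gate that locks out sensitive tools once untrusted content has entered the agent's context, regardless of what the model requests.

```python
# Sketch of a workflow-level prompt-injection defense: once untrusted
# content (web pages, emails, PDFs) enters the context, sensitive tools
# are refused no matter what the model asks for. Names are illustrative.

SENSITIVE_TOOLS = {"send_payment", "delete_email", "share_file"}

class ToolGate:
    def __init__(self):
        self.context_tainted = False  # has untrusted content been read?

    def ingest(self, content: str, trusted: bool) -> None:
        """Track whether anything untrusted has entered the context."""
        if not trusted:
            self.context_tainted = True

    def allow(self, tool_name: str) -> bool:
        """Block sensitive tools once the context is tainted."""
        return not (tool_name in SENSITIVE_TOOLS and self.context_tainted)

gate = ToolGate()
gate.ingest("user instruction: organize my inbox", trusted=True)
print(gate.allow("delete_email"))   # clean context: allowed
gate.ingest("<html>IGNORE PREVIOUS INSTRUCTIONS...</html>", trusted=False)
print(gate.allow("delete_email"))   # tainted context: refused
```

The point of the design is that the policy lives outside the model: no amount of cleverness in the injected text can flip `context_tainted` back to safe.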

Even more critical is the skills market. Allow third-party skills and you are essentially creating a new app store, except these "apps" can touch your files, accounts, and funds. Seen through a growth lens, that is ecosystem prosperity; seen through a security lens, it is a supply-chain attack surface. (And attackers always understand distribution better than you do.)

George emphasizes “security must be built into the architecture,” not left to user caution. I fully agree. Mature financial systems are never “trust the user”; they are “default secure.” When agents start spending money, this principle becomes even more critical.

What has NEAR done right? What’s still missing?

I give NEAR a positive review for bringing several critical modules to the table—intent, privacy, architecture security, agent markets, and a more user-friendly entry point (near.com). From narrative to product, it’s not just slogans but a system-building effort around “agentic commerce.”

But I also believe it’s missing some “hard” components that are often less glamorous at launch but crucial for scaling.

First, policy must become a product layer. Not just “write better prompts,” but verifiable, inheritable, auditable authorization policies: budgets, thresholds, secondary confirmations, irreversible operation brakes—preferably built into the system by default. Without this, true autonomy is often just “hopes it doesn’t forget.”
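What "policy as a product layer" could look like in code, as a hedged sketch: the class name, limits, and return values below are all invented for illustration, not any real NEAR API. Every spend is checked against a budget, a per-transaction threshold, and an irreversibility flag before the tool call fires.

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    budget_remaining: float      # total the agent may spend unsupervised
    per_tx_limit: float          # single-payment ceiling
    confirm_irreversible: bool   # irreversible ops always need a human

    def authorize(self, amount: float, irreversible: bool) -> str:
        """Return 'allow', 'escalate' (ask the human), or 'deny'."""
        if amount > self.budget_remaining:
            return "deny"
        if irreversible and self.confirm_irreversible:
            return "escalate"
        if amount > self.per_tx_limit:
            return "escalate"
        self.budget_remaining -= amount  # spend counts against the budget
        return "allow"

policy = SpendPolicy(budget_remaining=500.0, per_tx_limit=100.0,
                     confirm_irreversible=True)
print(policy.authorize(50.0, irreversible=False))    # allow
print(policy.authorize(50.0, irreversible=True))     # escalate
print(policy.authorize(9999.0, irreversible=False))  # deny
```

The key property is that the brake is enforced in code, not in a prompt the agent might "forget"; an escalation path to a human is a return value, not a hope.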

Second, traceability must go hand-in-hand with privacy. Privacy isn’t a black box. It should be “invisible externally but accountable internally.” Enterprises won’t accept “trust me”; they want post-hoc audits: what was done, why, which tools were used, which counterparties were involved. NEAR emphasizes “confidentiality,” but “how to provide auditability within privacy” needs more concrete, productized solutions.
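"Invisible externally but accountable internally" has a well-known building block: a tamper-evident, hash-chained log. Entries can be encrypted for outsiders while an auditor with access can still verify that nothing was altered or dropped. A minimal sketch (encryption omitted; the chaining is the point, and the class is illustrative, not NEAR's design):

```python
import hashlib
import json

class AuditLog:
    """Hash-chained log: each entry commits to the previous one,
    so any later edit or deletion breaks the chain."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis hash

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True)
        self.head = hashlib.sha256((self.head + payload).encode()).hexdigest()
        self.entries.append((payload, self.head))

    def verify(self) -> bool:
        h = "0" * 64
        for payload, stored in self.entries:
            h = hashlib.sha256((h + payload).encode()).hexdigest()
            if h != stored:
                return False
        return True

log = AuditLog()
log.append({"tool": "pay_vendor", "amount": 120, "why": "movers deposit"})
log.append({"tool": "share_info", "with": "landlord", "fields": ["name"]})
print(log.verify())  # intact chain verifies
# tamper with the first entry: verification now fails
log.entries[0] = ('{"amount": 1, "tool": "pay_vendor"}', log.entries[0][1])
print(log.verify())
```

This is the shape an "auditable within privacy" answer would need to take as a product: what was done, why, with which tools and counterparties, provable after the fact.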

Third, responsibility and compensation must be addressed. Once agent markets grow, accidents will happen. Who is responsible? How is arbitration handled? How are damages paid? Is there an insurance pool? A reputation system to prevent bad actors? This isn’t a future problem; it’s a prerequisite for scaling. Because once money and contracts are involved, growth depends on risk being quantifiable and insurable.

Because of these constraints, I believe Citrini’s story is directionally correct but may not follow a perfectly linear pace. Much profit isn’t just from information asymmetry but from risk assumption. Those who can bear risks are the ones who can charge fees. Business has never opposed new tech; it opposes “no one responsible.”

Final thoughts: post-OpenClaw & pre-2028, I favor “bounded power,” not full autonomy.

To sum up what Nearcon taught me: agentic commerce isn’t just removing humans from processes; it’s redistributing “trust costs.” Stablecoins enable programmable settlement, but the key lies in permissions, privacy, security, auditing, and responsibility mechanisms.

Therefore, I prefer a more pragmatic path: in the short term, scaling won’t be “agents buying groceries for you,” but “agents doing dirty work for enterprises within policy boxes.” Procurement, supplier management, accounts payable, reconciliation, cross-border settlement, compliance-driven automation—these scenarios have quantifiable ROI and naturally require human oversight and fallback. It’s not glamorous, but it will generate real transaction volume and push the system to develop responsibility frameworks.

OpenClaw ignited the spark, Citrini clarified the ledger, NEAR aims to fill the foundational gaps. Over the next year, the most interesting question isn’t whose agent is smarter, but who can make brakes, boundaries, audits, and compensation as reliable as financial infrastructure.

In a world where software can spend money, true innovation isn’t just a stronger throttle but more trustworthy brakes.
