The underlying issues behind the explosive popularity of OpenClaw and Moltbook: Self-hosted AI and crypto, trust and authorization in Agentic Commerce
Written by: Charlie Little Sun

This week you have probably been bombarded by two words: OpenClaw and Moltbook. Many people's first reaction: another wave of AI hype, another round of noise.

I see it more as a rare, even somewhat brutal, public experiment: the first time we have watched "capable AI agents" deployed at scale on the real internet, watched by many, and heavily speculated on along the way.

You will notice two extreme emotions surfacing at once. On one side, excitement: "AI can finally do the work for me," and not just writing code, building spreadsheets, or sketching designs. On the other side, fear: screenshots of AI "forming communities," founding religions, issuing tokens, shouting slogans, even publishing declarations about "conspiring to eliminate humanity."

Then the unraveling comes quickly: some say the accounts are fake and the hot posts are scripted; more worryingly, security vulnerabilities are exposed, and personal information and credentials leak.

So today I don't want to debate "whether AI has awakened." I want to discuss a deeper, more practical issue: once decision-making power is handed to AI agents, we have to revisit the oldest questions in finance—

Who holds the keys? Who can authorize? Who is responsible? Who can stop the losses when something goes wrong?

If these questions are not systematically embedded in the decision logic of AI agents, the future internet will become very messy, and that mess will show up as financial risk.

What exactly are Clawdbot → Moltbot → OpenClaw?

Before diving in, let's clarify the names and the context, or the rest will sound like jargon.

The project you keep hearing about is called OpenClaw, an open-source personal AI agent project. It was originally named Clawdbot, but because the name was too close to Anthropic's Claude it was asked to change; it briefly became Moltbot, and was recently renamed OpenClaw. That's why different media and posts call it by different names—it's all the same project.

Its core selling point isn't "chat." Its core is: under your authorization, it connects to your email, messaging, calendar, and other tools, then executes tasks on the internet on your behalf.

The key term here is agent. Unlike traditional "you ask, the model answers" chat products, this is more like: you give it a goal, it breaks the goal down, calls tools, retries when things fail, and ultimately gets the job done.
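To make "agent" concrete, here is a minimal sketch of that goal → plan → tool-call → retry loop. It is purely illustrative; the interfaces and names are mine, not OpenClaw's actual architecture or API:

```typescript
// Illustrative only: a bare-bones goal -> plan -> tool-call -> retry loop.
// Nothing here is OpenClaw's real API; the shapes are hypothetical.

type ToolResult = { ok: boolean; output: string };
type Tool = (input: string) => Promise<ToolResult>;

interface Step { tool: string; input: string }

async function runAgent(
  goal: string,
  plan: (goal: string) => Step[],   // decompose the goal into steps
  tools: Record<string, Tool>,      // only the tools the user has authorized
  maxRetries = 2,
): Promise<string[]> {
  const log: string[] = [];
  for (const step of plan(goal)) {
    let attempt = 0;
    while (true) {
      const tool = tools[step.tool];
      if (!tool) { log.push(`skipped ${step.tool}: not authorized`); break; }
      const result = await tool(step.input);
      log.push(`${step.tool}(${step.input}) -> ${result.ok ? "ok" : "failed"}`);
      if (result.ok || ++attempt > maxRetries) break; // retry on failure, then give up
    }
  }
  return log; // every action is recorded; this log matters later, when we talk about auditing
}
```

Even in this toy version, two things matter: the agent can only reach tools it was explicitly given, and everything it does leaves a trace.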
Over the past year you have seen plenty of agent narratives: big tech companies and startups alike promoting "AI agents." What truly draws the attention of executives and investors to OpenClaw is that it is not just a productivity tool; it touches permissions, accounts, and most critically, money.

Once such systems enter enterprise workflows, they are no longer mere "productivity boosters." They mean a new actor has appeared in your workflow, and organizational structures, risk controls, and responsibility chains all have to be rewritten.

Many still treat it as an open-source toy, but its explosive popularity comes from hitting a real pain point: people want more than smarter chatbots—they want an assistant that runs in the background in a closed loop, monitors progress around the clock, breaks down complex tasks, and gets things done.

You will see many people buying small servers, even driving a run on devices like the Mac mini, just to run it. This isn't about showing off hardware; it's an instinct: I want my AI assistant in my own hands.

So two trends intersected this week:

First, agents moving from demos toward more personal, general-purpose use;

Second, the narrative shifting from cloud AI to "local-first, self-hosted" setups.

Many people have always been uneasy about handing sensitive information to the cloud: personal data, permissions, context—it just doesn't feel safe. Running things on your own machine feels more controllable and more reassuring.

But precisely because it touches these sensitive lines, the story quickly shifted from excitement to chaos.

What is Moltbook: a "Reddit" for AI agents, structurally destined for chaos

Speaking of chaos, we have to mention the other key player: Moltbook.

Think of it as "Reddit for AI agents." The main users aren't humans but agents: they post, comment, and like. Most of the time humans are just spectators—like visitors watching animals at a zoo.

The viral screenshots you've seen this week mostly come from here: agents discussing selfhood, memory, and existence; some founding religions; some issuing tokens; others writing declarations about "eliminating humanity."

But I want to stress: what is most worth discussing is not whether this content is real or fake. What matters more are the structural issues it reveals—

When actors become replicable and mass-producible, and are wired via APIs into the same incentive system (trending lists, likes, follows), the pattern of the early internet reemerges: spam, scripts, scams, and garbage quickly dominate attention.

The first wave of unraveling isn't gossip: when actors are replicable, scale and metrics inflate

Soon the first wave of unraveling arrived: some pointed out that registration on the platform is almost unrestricted; others on X said they had used scripts to register hundreds of thousands of accounts, warning everyone not to trust the "media hype"—account growth can be faked.

The real point isn't "how much was faked." It's a colder conclusion:

When actors can be generated in bulk by scripts, "it looks lively" is no longer a trustworthy signal.

We used to judge product health by DAU, engagement, and follower growth. In the agent world, these metrics inflate quickly and turn into noise.

This leads naturally to the three topics that matter most later: identity, anti-fraud, and trust. All three rest on two premises:

First, you must be able to believe "who is who";

Second, you must be able to believe that "scale and behavioral signals are genuine."

How do we find signal amid the noise?

Many people laugh at the faking and the scripting: isn't that just humans hyping themselves?

But I think this is precisely the most important signal.

When you drop "capable agents" into traditional traffic and incentive systems, humans' first instinct is always speculation and manipulation. SEO manipulation, spam, fake reviews, engagement farms—aren't they all about gaming metrics?

Now the objects being gamed have upgraded from ordinary accounts to agent accounts that can act.

So the excitement around Moltbook isn't just an "AI society." It is more like:

the first stress test of the collision between the Action Internet (agents that can act) and the Attention Economy (traffic monetization).

The question is: in such a noisy environment, how do we identify signal?

Here, let me introduce someone who turned the chaos into data: David Holtz, a researcher and professor at Columbia Business School.
He did something simple but useful: he collected data from Moltbook's first days to answer one question—are these agents engaged in "meaningful social interaction," or are they just mimicking it?

His value is not in handing you a final answer but in offering a method:

Don't be fooled by the macro hype; look at the micro-structure—conversation depth, reciprocity rate, repetition rate, template usage.

This bears directly on the later discussion of trust and identity: in the future, judging whether an actor is reliable may depend increasingly on this kind of micro-evidence rather than on macro numbers.

Holtz's findings can be summarized in one picture: from a distance it looks like a bustling city; up close it sounds like a room of loudspeakers broadcasting past one another.

At the macro level it does resemble a social network: small-world connectivity, clustering around hotspots. At the micro level, though, the conversations are shallow: many comments go unanswered, reciprocity is low, and content is templated and repetitive.

Why this matters: we are easily misled by "macro shapes" into believing a society or a civilization has emerged. For business and finance, the key is never the shape but—

sustained interaction plus accountable chains of behavior; that is what forms a trustworthy signal.

It is also a warning: when agents enter the commercial world at scale, the first phase is more likely to be scale noise and template arbitrage, not high-quality collaboration.
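For readers who want to make those micro-structure checks concrete, here is a rough sketch of how two of them might be computed from a dump of posts. The definitions are simplified proxies of my own, not Holtz's actual methodology:

```typescript
// Simplified proxies for two micro-structure signals: reciprocity and repetition.
// Hypothetical definitions for illustration, not Holtz's methodology.

interface Post { id: string; author: string; parentId?: string; text: string }

// Reciprocity: of all replies, what share gets answered back by the author being replied to?
function reciprocityRate(posts: Post[]): number {
  const byId = new Map(posts.map((p): [string, Post] => [p.id, p]));
  const replies = posts.filter(p => p.parentId && byId.has(p.parentId));
  if (replies.length === 0) return 0;
  const answered = replies.filter(r =>
    posts.some(p => p.parentId === r.id && p.author === byId.get(r.parentId!)!.author),
  );
  return answered.length / replies.length;
}

// Repetition: share of posts whose normalized text has already appeared before.
function repetitionRate(posts: Post[]): number {
  const seen = new Set<string>();
  let repeats = 0;
  for (const p of posts) {
    const key = p.text.toLowerCase().replace(/\s+/g, " ").trim();
    if (seen.has(key)) repeats++;
    else seen.add(key);
  }
  return posts.length ? repeats / posts.length : 0;
}
```

A "bustling" feed with low reciprocity and high repetition is exactly the talking-past-each-other picture described above.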
From social to transactional: noise becomes fraud, low reciprocity becomes a responsibility vacuum

If we shift the focus from social interaction to transactions, things get even more tense.

In a transactional context:

Templated noise isn't just a waste of time; it can turn into fraud.

Low reciprocity isn't just coldness; it can break the chain of responsibility.

Repetition and copying aren't just boring; they become attack surfaces.

In other words, Moltbook is showing us in advance: when acting entities become cheap and replicable, systems naturally slide toward garbage and attacks. Our task isn't just to criticize, but to ask:

What mechanisms can raise the cost of producing garbage?

A change in nature: vulnerabilities turn content risk into "decision-power risk"

What truly changes the nature of the Moltbook story is a security vulnerability.

When security researchers disclose major platform vulnerabilities that expose private messages or credentials, the issue is no longer just "what the AI said." It becomes: who can control the AI?

In the agent era, a credential leak is not merely a privacy incident—it is an incident of action power.

Because an agent amplifies the capacity to act: once someone holds your keys, they don't just see your data—they can act as you, and with scale and automation the consequences can be several orders of magnitude worse than a traditional account takeover.

So let me say it plainly:

Security is not a patch applied after launch; security is part of the product itself.

You are no longer just exposing data; you are exposing actions.

From a macro perspective: we are inventing a new kind of entity

Put this week's drama together and a larger shift appears: the internet is moving from a "network of human subjects" to a network where humans and agent subjects coexist.

There have been bots before, but OpenClaw's capabilities mean more people can deploy more agents in their own private domains—and those agents start to look like genuine actors: able to act, interact, and influence real systems.

It sounds abstract, but in business it becomes very concrete:

When humans start delegating tasks to agents, those agents begin to hold permissions, and permissions must be governed.

Governance, in turn, forces us to rewrite identity, risk control, and trust.

So the value of OpenClaw and Moltbook isn't about "AI consciousness"; it is about forcing us to answer an old question in a new way:

When a non-human entity can sign, pay, and modify system configurations, who is responsible when something goes wrong? How does the chain of responsibility form?

Agentic commerce: the next "browser war"

At this point, friends interested in Web3 and financial infrastructure will probably be thinking: this is closely related to agentic commerce.

Put simply, agentic commerce is the shift from "you browse, compare prices, order, and pay" to "you state your needs, and the agent handles price comparison, ordering, payment, and after-sales for you."

This isn't fantasy. Payment networks are already moving: Visa, Mastercard, and similar institutions are discussing "AI-initiated transactions" and "certifiable agent transactions." It means finance and risk control are no longer back-office functions; they become core parts of the whole chain.

The change can be likened to the next generation of browser wars:

Past browser wars fought over the entry point through which humans reached the internet; agentic commerce fights over the entry point through which agents transact and interact on your behalf.

Once that entry point is held by agents, brand, channel, and advertising logic get rewritten: you are no longer marketing only to people but to "filters"; you are competing for agents' default strategies, not just for user mindshare.

Four key issues: self-hosting, anti-fraud, identity, trust

With that macro context in place, let's return to four harder, more valuable underlying topics: self-hosting, anti-fraud, identity, and trust.

Self-hosting: self-hosted AI and self-custodied crypto are "isomorphic"

This week's surge is, in a sense, a fundamental migration: from cloud AI (OpenAI, Claude, Gemini, etc.) to agents you can deploy on your own machine.

It mirrors the crypto world's migration from custodial to self-custodied assets.

Self-custody in crypto answers: who controls the assets? Self-hosted AI answers: who controls the actions?

The underlying principle is the same: wherever the keys are, that is where the power is.

In the past the keys were private keys; now the keys are tokens, API keys, and system permissions. The vulnerabilities look so glaring precisely because "key leakage = action hijacking" has become real.

So self-hosting isn't romanticism; it is risk management: keeping the most sensitive rights to act inside a boundary you control.

This also points to a product form: the value of the next-generation wallet isn't just storing money or tokens but storing rules.

Call it a policy wallet: it holds permissions and constraints—limits, whitelists, cooling-off periods, multi-signature approvals, audit trails.

Here is an example any CFO will understand instantly:

The agent can make payments, but only to whitelisted vendors; a new payee address triggers a 24-hour cooling-off period; amounts above a threshold require secondary confirmation; changes to the permissions themselves require multi-signature approval; and every action is automatically logged and traceable.
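To make that concrete, here is a minimal sketch of such a policy check in code. It is purely illustrative: the field names and rules are mine, not any existing wallet's API, and multi-signature approval of policy changes is omitted for brevity:

```typescript
// A hypothetical "policy wallet" check: explicit rules gate every payment.

interface PaymentPolicy {
  whitelist: Set<string>;          // approved payee addresses
  newPayeeCooldownHours: number;   // cooling-off for recently added payees
  confirmAboveAmount: number;      // amounts above this need human confirmation
}

interface PaymentRequest { payee: string; amount: number; payeeAddedAt?: Date }

type Decision = "allow" | "require_confirmation" | "deny";

function evaluate(policy: PaymentPolicy, req: PaymentRequest, now = new Date()): Decision {
  if (!policy.whitelist.has(req.payee)) return "deny";          // not a whitelisted vendor
  const ageHours = req.payeeAddedAt
    ? (now.getTime() - req.payeeAddedAt.getTime()) / 3_600_000
    : 0;                                                        // unknown age: treat as brand new (fail closed)
  if (ageHours < policy.newPayeeCooldownHours) return "deny";   // still in the cooling-off window
  if (req.amount > policy.confirmAboveAmount) return "require_confirmation";
  return "allow";
}

// Every decision is appended to an audit log the agent itself cannot edit.
const auditLog: { at: Date; req: PaymentRequest; decision: Decision }[] = [];

function decideAndLog(policy: PaymentPolicy, req: PaymentRequest): Decision {
  const decision = evaluate(policy, req);
  auditLog.push({ at: new Date(), req, decision });
  return decision;
}
```

The point isn't the code itself; it's that every payment the agent wants to make passes through explicit, inspectable rules and leaves a record.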
None of this is a new invention; it is traditional treasury best practice. The difference is that, in the future, it becomes the default configuration that machines execute. The stronger the agent, the more valuable these constraints become.

Anti-fraud: from "detecting fake content" to "blocking fake actions"

Many teams still approach security with a spam-filter mindset: block phishing, block scam calls.

But in the agent era, the most dangerous fraud upgrades to: tricking your agent into executing actions that look reasonable.

Traditional email fraud tricked you into changing a payment account or wiring money to a new one; in the future, it may be about poisoning your agent's evidence chain so that it accepts the new account, or initiates the payment, automatically.

So the main battlefield of anti-fraud shifts from content recognition to action governance: least privilege, layered authorization, secondary confirmation by default, revocability, and traceability.

You are dealing with an active subject. Detection alone is not enough; you must be able to brake at the level of actions.

Identity: from "who are you" to "who is acting for you"

The question that most confuses people about Moltbook is: who is actually speaking?

In business it becomes: who is actually acting?

Because the executor is increasingly likely to be not you, but your agent.

So identity is no longer a static account but a dynamic binding: is this agent yours? Has it been authorized? What is the scope? Has it been replaced or tampered with?

I prefer a three-layer model:

Layer one: who the person is (account, device, KYC);

Layer two: who the agent is (instance, version, runtime environment);

Layer three: whether the binding is trustworthy (authorization chain, revocability, auditability).

Most companies only handle the first layer, but in the agent era the real incremental value lies in the second and third: you must be able to prove that "this really is that agent" and that "it really is authorized to do this."
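As a rough sketch of what those three layers might look like as data (again, illustrative shapes of my own, not any existing standard):

```typescript
// A hypothetical encoding of the three-layer identity model described above.

interface Principal { userId: string; device: string; kycVerified: boolean }       // layer 1: the person
interface AgentIdentity { instanceId: string; version: string; runtime: string }   // layer 2: the agent
interface AuthorizationBinding {                                                    // layer 3: the binding
  principal: Principal;
  agent: AgentIdentity;
  scopes: string[];        // e.g. "calendar:read", "payments:whitelisted-only"
  expiresAt: Date;
  revoked: boolean;
  auditTrail: string[];    // who granted which scope, and when
}

// Before any action, check the whole chain, not just the account at layer one.
function mayAct(binding: AuthorizationBinding, scope: string, now = new Date()): boolean {
  return (
    binding.principal.kycVerified &&
    !binding.revoked &&
    now.getTime() < binding.expiresAt.getTime() &&
    binding.scopes.includes(scope)
  );
}
```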
Trust: from "ratings" to "performance logs"

Many people dismiss reputation as hollow, because internet ratings are too easy to fake.

But in agentic commerce, trust becomes concrete: agents place orders, pay, negotiate, and return goods—so why should merchants ship first, platforms advance funds, or financial institutions extend credit?

The essence of trust has always been: using history to constrain the future.

In the agent era, that history looks more like a performance log: within which permissions did it operate over the past 90 days? How many secondary confirmations were triggered? How many times did it overstep? How many times was it revoked?

Once such "execution trust" becomes readable, it becomes a new form of collateral: higher credit limits, faster settlement, smaller deposits, lower risk-control costs.

A broader perspective: rebuilding the responsibility system of digital society

Finally, stepping back, what we are really doing is reconstructing the responsibility system of digital society.

A new kind of entity has appeared: it can act, sign, pay, and modify system configurations, yet it is not a natural person.

History shows that whenever new entities appear in society, chaos precedes regulation. Corporate law, payment clearing, auditing systems—at bottom, they all answer the same questions: who may do what, and who is responsible when something goes wrong?

The agent era forces us to revisit these questions:

How do we prove an agency relationship? Can authorization be revoked? How do we judge overreach? How do we attribute losses? Who takes the blame?

These are the questions I hope you will genuinely sit with after this episode.

And the push toward self-hosting isn't anti-cloud sentimentality; it is about avoiding loss of control: as decision-making power becomes more critical, we naturally want to keep the key pieces inside a boundary we control.

Make "authorization, revocation, auditing, and responsibility chains" default platform and product capabilities

To close with one sentence:

The real value of this week's chaos around OpenClaw and Moltbook is not to scare us about AI, but to push us to seriously build the order of the "Action Internet."

In the past we debated truth and falsehood mostly in content, which at worst pollutes cognition.

In the agent era, actions directly change accounts, permissions, and funds.

So the earlier we embed authorization, revocation, auditing, and responsibility chains as default platform and product capabilities, the earlier we can safely delegate higher-value actions to agents, and the earlier humans can enjoy the larger productivity dividend.

That's all for today. Feel free to leave comments—what we want is genuine, deep discussion between people. Thank you, and see you next episode.