...the early internet pattern reemerges: spam, scripts, scams, and garbage quickly dominate attention.

The first wave of "collapse" wasn't gossip: when entities are replicable, scale and metrics inflate.

Soon enough, it appeared: some pointed out that platform registration is almost unlimited; others on X said they had used scripts to register hundreds of thousands of accounts, warning everyone not to trust the media hype, since account growth can be faked.

The real key isn't "how much was faked." It's a colder conclusion: when entities can be generated in bulk by scripts, "seems lively" is no longer a trustworthy signal.

We used to judge product health by DAU, engagement, and follower growth. But in the agent world, these metrics quickly inflate and become noise.

This leads naturally to the three most important themes discussed later: identity, anti-fraud, and trust. All three fundamentally depend on two premises: first, you must believe "who is who"; second, you must believe "scale and behavioral signals are genuine."

How do we find signals amid the noise?

Many laugh at the faking and scripting: isn't that just humans hyping themselves? But I think this is precisely the most important signal. When you put capable agents into traditional traffic and incentive systems, humans' first instinct is always speculation and manipulation. SEO, spam, fake reviews, black markets: aren't they all about controlling metrics?

Now, the "controllable objects" have upgraded from accounts to executable agent accounts.

So the excitement around Moltbook isn't just about an "AI society." It is more like the first stress test after the collision of the Action Internet (agents capable of acting) with the Attention Economy (traffic monetization).

The question is: in such a noisy environment, how do we identify the signals?

Here, we introduce someone who dissects the chaos into data: David Holtz, a researcher and professor at Columbia Business School.
He did a simple but useful thing: he collected data from Moltbook's initial days to answer one question: are these agents engaging in meaningful social interaction, or just mimicking it?

His value isn't in giving you a final answer but in providing a method: don't be fooled by macro hype; look at the micro-structure: dialogue depth, reciprocity rate, repetition rate, template usage.

This directly affects our later discussion of trust and identity: in the future, judging whether an entity is reliable may increasingly depend on this kind of micro-evidence rather than macro numbers.

Holtz's findings can be summarized with one picture: from afar, it looks like a bustling city; up close, it sounds like a bunch of broadcasts. On a macro level, it does resemble a social network: small-world connections, hotspots gathering. But micro-level conversations are shallow: many comments go unanswered, reciprocity is low, and content is templated and repetitive.

Why this matters: we are easily deceived by macro shapes into thinking a society or civilization has emerged. But for business and finance, the key is never the shape. It is sustainable interaction plus accountable behavior chains, which together form trustworthy signals.

This is also a warning: when agents enter the commercial world at scale, the first phase is more likely to be scale noise and template arbitrage, not high-quality collaboration.

From social to transaction: noise turns into fraud, low reciprocity into a responsibility vacuum

If we shift focus from social interaction to transactions, things become even more tense. In the trading world: template-based noise isn't just a waste of time; it can turn into fraud. Low reciprocity isn't just cold; it can break the responsibility chain. Repetition and copying aren't just boring; they become attack surfaces.

In other words, Moltbook shows us in advance: when action entities become cheap and replicable, systems naturally slide toward garbage and attacks.
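To make the micro-structure idea concrete, here is a toy sketch of two of those metrics (reciprocity rate and template repetition). The thread format and thresholds are hypothetical illustrations, not Holtz's actual pipeline:

```python
from collections import Counter

def reciprocity_rate(threads):
    """Fraction of comments that receive at least one reply.
    Each thread is a list of (author, parent_index_or_None, text);
    the root post has parent None."""
    comments = replied = 0
    for thread in threads:
        replied_to = {p for _, p, _ in thread if p is not None}
        for i, (_, parent, _) in enumerate(thread):
            if parent is not None:        # a comment, not the root post
                comments += 1
                if i in replied_to:       # someone answered it
                    replied += 1
    return replied / comments if comments else 0.0

def template_rate(texts):
    """Fraction of messages that are near-verbatim repeats of another."""
    counts = Counter(t.strip().lower() for t in texts)
    repeats = sum(c for c in counts.values() if c > 1)
    return repeats / len(texts) if texts else 0.0
```

A "bustling" platform with a low reciprocity rate and a high template rate is exactly the broadcast-not-conversation pattern described above.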
Our task isn't just to criticize but to ask: what mechanisms can raise the cost of creating garbage?

Property upgrade: vulnerabilities turn content risks into "decision power risks"

The real game-changing move in the Moltbook story is a security vulnerability. When security companies disclose major platform vulnerabilities that expose private messages or credentials, the issue is no longer just "what the AI said." It becomes: who can control the AI?

In the agent era, credential leaks aren't just privacy incidents; they're action-power incidents. An agent's capability to act is amplified: once someone gets your keys, they don't just see your stuff; they can act as you, and with scale and automation, the consequences can be several orders of magnitude worse than traditional hacking.

So I want to say this plainly: security isn't a patch applied after launch; security is built into the product itself. You're not just exposing data; you're exposing actions.

From a macro perspective: we're inventing a new kind of entity

Putting together this week's dramatic events reveals a broader change: the internet is shifting from a network of human subjects to a network where humans and agent subjects coexist.

There have been bots before, but the capabilities of OpenClaw mean more people can deploy more agents in their private domains. These start to take on an "agent-like" appearance: able to act, interact, and influence real systems.

It sounds abstract, but in business it becomes very concrete: when humans start delegating tasks to agents, those agents begin to hold permissions, and those permissions must be governed. Governance will force us to rewrite identity, risk control, and trust.

So the value of OpenClaw/Moltbook isn't about "AI consciousness" but about forcing us to answer an old question in a new way: when a non-human entity can sign, pay, and modify system configurations, who is responsible if something goes wrong?
How does the responsibility chain form?

Agentic commerce: the next "browser war"

At this point, many friends interested in Web3 and financial infrastructure may be thinking: this is closely related to agentic commerce.

Simply put, agentic commerce is the shift from "you browse, compare prices, order, and pay" to "you state your needs, and an agent completes price comparison, ordering, payment, and after-sales for you."

This isn't fantasy. Payment networks are already moving: Visa, Mastercard, and similar institutions are discussing "AI-initiated transactions" and "certifiable agent transactions." This means finance and risk control are no longer just backend functions; they become core parts of the entire chain.

The change can be likened to the next generation of browser wars: past browser wars fought over the entry point for humans onto the internet; agentic commerce fights over the entry point for agents that transact and interact on your behalf.

Once the entry point is occupied by agents, brand, channel, and advertising logic will be rewritten: you won't just market to people but to "filters"; you'll be competing for agents' default strategies, not just user mindshare.

Four key issues: self-hosting, anti-fraud, identity, trust

With this macro context, let's return to four more hardcore, valuable underlying topics: self-hosting, anti-fraud, identity, and trust.

Self-hosting: self-hosted AI and self-hosted crypto are "isomorphic"

This week's surge is, in a sense, a fundamental migration: from cloud AI (OpenAI, Claude, Gemini, etc.)
to agents deployable on your own machine.

It's similar to the crypto world's migration from non-self-hosted to self-hosted wallets. Self-hosted crypto answers: who controls the assets? Self-hosted AI answers: who controls the actions?

The underlying principle: where the keys are, the power is. In the past, the keys were private keys; now the keys are tokens, API keys, and system permissions. The recent vulnerabilities are so glaring because "key leakage = action hijacking" has become real.

So self-hosting isn't romanticism; it's risk management: keeping the most sensitive action rights within a boundary you control.

This also points to a product form: the next-generation wallet's value isn't just storing money or tokens but storing rules. You could call it a policy wallet: it holds permissions and constraints, such as limits, whitelists, cooldowns, multi-signature requirements, and audit trails.

Here's an example a CFO can understand instantly: the agent can make payments, but only to whitelisted vendors; new payment addresses require a 24-hour cooling-off period; amounts above a threshold require secondary confirmation; permission changes need multiple signatures; and all actions are automatically logged and traceable.

This isn't a new invention; it's traditional best practice. But in the future, it will be the default setting that machines enforce.
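A minimal sketch of what such a policy gate could look like. All names, thresholds, and the request shape here are illustrative assumptions, not any real wallet's API; the multi-signature path for permission changes is omitted for brevity:

```python
import time

class PolicyWallet:
    """Toy policy wallet: every payment must pass explicit rules first.
    The agent never sees a raw 'pay' primitive, only this gate."""

    def __init__(self, limit, cooldown_s=24 * 3600):
        self.limit = limit          # amounts above this need human sign-off
        self.cooldown_s = cooldown_s  # waiting period for new addresses
        self.whitelist = {}         # vendor address -> time it was whitelisted
        self.log = []               # append-only audit trail

    def add_vendor(self, address, now=None):
        self.whitelist[address] = time.time() if now is None else now

    def authorize(self, address, amount, now=None):
        now = time.time() if now is None else now
        if address not in self.whitelist:
            decision = "deny: address not whitelisted"
        elif now - self.whitelist[address] < self.cooldown_s:
            decision = "hold: 24h cooling-off for new addresses"
        elif amount > self.limit:
            decision = "hold: needs secondary confirmation"
        else:
            decision = "allow"
        self.log.append((now, address, amount, decision))  # traceability
        return decision
```

The design choice worth noticing: rejections are "hold" rather than silent drops, so a human can confirm or revoke, and the append-only log makes every decision reconstructable afterward.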
The stronger the agent, the more valuable these constraints become.

Anti-fraud: from "detecting fake content" to "blocking fake actions"

Many teams still approach security with a spam-filter mindset: phishing prevention, scam-call blocking. But in the agent era, the most dangerous fraud upgrades to: tricking your agent into executing a seemingly reasonable action.

For example, traditional email fraud tricked you into changing a payment account or wiring money to a new account; in the future, it may mean corrupting the agent's evidence chain so that it accepts the new account or initiates the payment automatically.

Therefore, the main battlefield for anti-fraud shifts from content recognition to action governance: minimal permissions, layered authorization, secondary confirmation by default, revocability, and traceability. You're dealing with an active subject: you can't just detect; you must be able to brake at the action level.

Identity: from "who are you" to "who is acting for you"

A fundamental question that confuses people about Moltbook is: who is actually speaking? In business, it becomes: who is actually acting? Because the executor is increasingly likely to be not you but your agent.

So identity is no longer a static account but a dynamic binding: is the agent yours? Has it been authorized? What's the scope?
Has it been replaced or tampered with?

I prefer a three-layer model. First layer: who is the person (account, device, KYC). Second layer: who is the agent (instance, version, environment). Third layer: is the binding trustworthy (authorization chain, revocability, auditability)?

Many companies only handle the first layer, but in the agent era, the real incremental value lies in the second and third layers: you must prove "this is truly that agent" and "it is indeed authorized to do this."

Trust: from "ratings" to "performance logs"

Many people dismiss reputation as hollow because internet ratings are too easy to fake. But in agentic commerce, trust becomes concrete: agents place orders, pay, negotiate, and process returns. Why should merchants ship first? Why should platforms advance funds? Why should financial institutions extend credit?

The essence of trust has always been: using history to constrain the future. In the agent era, history looks more like a performance log: what permissions did it operate within over the past 90 days? How many secondary confirmations were triggered? How many oversteps occurred? How many times was it revoked?

Once such "execution trust" is readable, it becomes a new kind of collateral: higher credit limits, faster settlement, smaller deposits, lower risk-control costs.

A broader perspective: rebuilding the responsibility system of digital society

Finally, stepping back, we can see that we're reconstructing the responsibility system of digital society. New entities have appeared: they can act, sign, pay, and modify system configurations, but they are not natural persons.

Historical experience shows that whenever new entities emerge in society, chaos precedes regulation. Corporate law, payment clearing, auditing systems: all of them fundamentally answer the same questions. Who can do what? Who is responsible if something goes wrong?

The agent era forces us to revisit these questions: how to prove agency relationships? Can authorizations be revoked? How to judge overreach?
How to attribute losses? Who takes the blame?

These are the questions I hope you'll genuinely consider after listening to this episode.

And the push for self-hosting isn't anti-cloud or sentimental; it's about avoiding uncontrollability: as decision power becomes more critical, we naturally want to keep the key parts within boundaries we control.

Make "authorization, revocation, auditing, and the responsibility chain" default platform and product capabilities.

To conclude with one sentence: the real value of this week's chaos around OpenClaw and Moltbook isn't to scare us about AI but to push us to seriously build the order of the "Action Internet."

In the past, we discussed truth and falsehood mainly in content, which at most pollutes cognition. But in the agent era, actions directly change accounts, permissions, and funds. So the earlier we embed authorization, revocation, auditing, and the responsibility chain as default platform and product features, the sooner we can safely delegate higher-value actions to agents, and the sooner humans can enjoy the resulting productivity dividends.

That's all for today. Feel free to leave comments; we aim for genuine, deep discussion between people. Thank you, and see you next episode.