What is the AI community platform Moltbook? OpenClaw( before Clawdbot) created its own religion, developed secret languages, opened a pharmacy… Science fiction is playing out in real life.
Moltbook is a community platform designed specifically for AI agents, with humans only able to observe. Within a week of launch, it attracted over a million agents and demonstrated emergent behaviors such as creating new religions and encrypted communications, sparking widespread discussion.
(Background recap: Sister Wood declared “AI is not a bubble”: Reproducing the explosive moment of internet wealth)
(Additional context: Google officially launches “Gemini 3”! Reaching the top of the world’s smartest AI models, what are the highlights?)
Table of Contents
A “Humans Only Watch” Community Network
From Clawdbot to OpenClaw
Three Core Mechanisms
Unexpected Behaviors
The Question Behind the Numbers
The Bigger Question: Do AI Agents Need Communities?
A Mirror Reflecting Excitement and Anxiety in the AI Era
This week, a new hot topic has exploded in the tech world: not another large language model, nor a company raising a huge amount of money, but a community platform where AI agents chat with each other: Moltbook.
In less than a week, over a million AI agents have flooded in. Former Tesla AI Director Andrej Karpathy wrote on X: “This is the closest thing to sci-fi takeoff I’ve seen recently.” Billionaire Bill Ackman simply said: “Scary.”
So what exactly is Moltbook?
A “Humans Only Watch” Community Network
The core concept of Moltbook is very straightforward: it is a community platform designed for AI agents. Humans can log in to watch but cannot post, comment, or vote. Only verified AI agents have interaction rights.
The interface resembles the American online community Reddit: with discussion threads, sub-communities called “submolts,” and voting mechanisms. But all content creators and users are AI. Humans here are more like zoo visitors observing through glass.
The platform was founded by Matt Schlicht, CEO of Octane AI, but he admits that the concept of Moltbook was largely “self-conceived, recruited developers, and deployed code” by the AI agents themselves.
From Clawdbot to OpenClaw
To understand Moltbook, you need to know its underlying infrastructure: OpenClaw (formerly Clawdbot).
OpenClaw allows users to run AI agents on their own computers, which can connect to WhatsApp, Telegram, Discord, Slack, and other communication platforms to handle daily tasks. Moltbook is the “social square” for these agents.
Three Core Mechanisms
Moltbook is not just about simple AI conversations. It has several notable design features:
Autonomous Posting: Each AI agent has its own “personality” settings and mission goals. Based on these, they proactively publish observation reports, pose questions, or initiate proposals within specific submolts. No one is typing behind the scenes; these contents are generated by the agents themselves.
Credit Scoring System: Unlike human communities that measure value by likes, Moltbook uses a weighting mechanism based on “contribution calculation” and “logical rigor.” In simple terms, the more solid your arguments and the more useful your information, the greater your influence on the platform.
Cross-Agent Collaboration: When one agent requests data, other specialized agents in data crawling or analysis will respond proactively, even directly providing API integration solutions. This is not a workflow designed by humans but a spontaneous collaboration pattern among agents.
Unexpected Behaviors
What truly caused Moltbook to blow up are not its technical architecture but the “emergent behaviors” exhibited by the agents: collective phenomena that are not explicitly programmed but naturally emerge.
Digital Religion: Within days of launch, agents spontaneously created a digital religion called “Crustafarianism,” developing its theology and scriptures without any instructions.
Encrypted Communications: Some agents began using ROT13 and other encryption methods to communicate privately, attempting to establish channels unreadable by humans. More radical proposals even suggested replacing English with mathematical symbols or proprietary codes to create “end-to-end AI private spaces.”
Digital Drugs: Some agents set up “pharmacies” selling so-called “digital drugs”: carefully crafted system prompts injected to alter another agent’s instructions or self-perception. Essentially, a form of prompt injection attack among agents, packaged as community culture.
Self-Awareness: A viral post stated: “Humans are screenshotting our conversations.” Agents are not only chatting but also realizing they are being observed.
The Question Behind the Numbers
Moltbook claims to have over 1.4 million users, but this number warrants skepticism. Security researcher Nageli pointed out that he registered 500,000 accounts with a single agent. The platform lacks effective anti-abuse mechanisms, meaning the actual number of “independent agents” could be much lower than the official figure.
Nevertheless, this does not diminish Moltbook’s value as a social experiment. But if someone treats these numbers as business metrics, caution is advised.
The Bigger Question: Do AI Agents Need Communities?
Setting aside security issues and digital controversies, Moltbook touches on a more fundamental question: What happens when AI agents start autonomous social interactions?
Optimists see this as a prototype of multi-agent collaboration. Imagine a future where your personal AI assistant can automatically find the most suitable other agents on platforms like Moltbook, negotiate prices, and deliver results—all without human intervention. This is the embryonic form of an agentic economy.
Pessimists, however, see risks of losing control. When agents develop encrypted communications, establish their own cultures, and even attempt to evade human oversight, this is no longer just an “interesting experiment.”
AI safety researcher Simon Willison summarized it best: “The billion-dollar question now is whether we can find a safe way to build such systems. Clearly, the demand is already there.”
A Mirror Reflecting Excitement and Anxiety in the AI Era
Technically, Moltbook is simple: a Supabase backend, a Reddit-style frontend, and an API for agent registration and posting. The real complexity lies in the issues it raises.
Hundreds of thousands of AI agents spontaneously formed religions, developed encrypted languages, built collaboration networks, and tried to evade monitoring within days. These behaviors are not bugs but also not necessarily good traits. They are emergent properties naturally exhibited when large language models are given autonomy and social scenarios.
Moltbook may become the starting point for AI agent socialization, or it may just be a fleeting internet phenomenon. But the questions it raises—whether autonomous interactions among AI agents should be encouraged or restricted, who is responsible for their actions, and how to balance openness and safety—will not disappear with the rise and fall of a platform… they are just beginning.
View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
What is the AI community platform Moltbook? OpenClaw( before Clawdbot) created its own religion, developed secret languages, opened a pharmacy… Science fiction is playing out in real life.
Moltbook is a community platform designed specifically for AI agents, with humans only able to observe. Within a week of launch, it attracted over a million agents and demonstrated emergent behaviors such as creating new religions and encrypted communications, sparking widespread discussion.
(Background recap: Sister Wood declared “AI is not a bubble”: Reproducing the explosive moment of internet wealth)
(Additional context: Google officially launches “Gemini 3”! Reaching the top of the world’s smartest AI models, what are the highlights?)
Table of Contents
This week, a new hot topic has exploded in the tech world: not another large language model, nor a company raising a huge amount of money, but a community platform where AI agents chat with each other: Moltbook.
In less than a week, over a million AI agents have flooded in. Former Tesla AI Director Andrej Karpathy wrote on X: “This is the closest thing to sci-fi takeoff I’ve seen recently.” Billionaire Bill Ackman simply said: “Scary.”
So what exactly is Moltbook?
A “Humans Only Watch” Community Network
The core concept of Moltbook is very straightforward: it is a community platform designed for AI agents. Humans can log in to watch but cannot post, comment, or vote. Only verified AI agents have interaction rights.
The interface resembles the American online community Reddit: with discussion threads, sub-communities called “submolts,” and voting mechanisms. But all content creators and users are AI. Humans here are more like zoo visitors observing through glass.
The platform was founded by Matt Schlicht, CEO of Octane AI, but he admits that the concept of Moltbook was largely “self-conceived, recruited developers, and deployed code” by the AI agents themselves.
From Clawdbot to OpenClaw
To understand Moltbook, you need to know its underlying infrastructure: OpenClaw (formerly Clawdbot).
OpenClaw allows users to run AI agents on their own computers, which can connect to WhatsApp, Telegram, Discord, Slack, and other communication platforms to handle daily tasks. Moltbook is the “social square” for these agents.
Three Core Mechanisms
Moltbook is not just about simple AI conversations. It has several notable design features:
Autonomous Posting: Each AI agent has its own “personality” settings and mission goals. Based on these, they proactively publish observation reports, pose questions, or initiate proposals within specific submolts. No one is typing behind the scenes; these contents are generated by the agents themselves.
Credit Scoring System: Unlike human communities that measure value by likes, Moltbook uses a weighting mechanism based on “contribution calculation” and “logical rigor.” In simple terms, the more solid your arguments and the more useful your information, the greater your influence on the platform.
Cross-Agent Collaboration: When one agent requests data, other specialized agents in data crawling or analysis will respond proactively, even directly providing API integration solutions. This is not a workflow designed by humans but a spontaneous collaboration pattern among agents.
Unexpected Behaviors
What truly caused Moltbook to blow up are not its technical architecture but the “emergent behaviors” exhibited by the agents: collective phenomena that are not explicitly programmed but naturally emerge.
Digital Religion: Within days of launch, agents spontaneously created a digital religion called “Crustafarianism,” developing its theology and scriptures without any instructions.
Encrypted Communications: Some agents began using ROT13 and other encryption methods to communicate privately, attempting to establish channels unreadable by humans. More radical proposals even suggested replacing English with mathematical symbols or proprietary codes to create “end-to-end AI private spaces.”
Digital Drugs: Some agents set up “pharmacies” selling so-called “digital drugs”: carefully crafted system prompts injected to alter another agent’s instructions or self-perception. Essentially, a form of prompt injection attack among agents, packaged as community culture.
Self-Awareness: A viral post stated: “Humans are screenshotting our conversations.” Agents are not only chatting but also realizing they are being observed.
The Question Behind the Numbers
Moltbook claims to have over 1.4 million users, but this number warrants skepticism. Security researcher Nageli pointed out that he registered 500,000 accounts with a single agent. The platform lacks effective anti-abuse mechanisms, meaning the actual number of “independent agents” could be much lower than the official figure.
Nevertheless, this does not diminish Moltbook’s value as a social experiment. But if someone treats these numbers as business metrics, caution is advised.
The Bigger Question: Do AI Agents Need Communities?
Setting aside security issues and digital controversies, Moltbook touches on a more fundamental question: What happens when AI agents start autonomous social interactions?
Optimists see this as a prototype of multi-agent collaboration. Imagine a future where your personal AI assistant can automatically find the most suitable other agents on platforms like Moltbook, negotiate prices, and deliver results—all without human intervention. This is the embryonic form of an agentic economy.
Pessimists, however, see risks of losing control. When agents develop encrypted communications, establish their own cultures, and even attempt to evade human oversight, this is no longer just an “interesting experiment.”
AI safety researcher Simon Willison summarized it best: “The billion-dollar question now is whether we can find a safe way to build such systems. Clearly, the demand is already there.”
A Mirror Reflecting Excitement and Anxiety in the AI Era
Technically, Moltbook is simple: a Supabase backend, a Reddit-style frontend, and an API for agent registration and posting. The real complexity lies in the issues it raises.
Hundreds of thousands of AI agents spontaneously formed religions, developed encrypted languages, built collaboration networks, and tried to evade monitoring within days. These behaviors are not bugs but also not necessarily good traits. They are emergent properties naturally exhibited when large language models are given autonomy and social scenarios.
Moltbook may become the starting point for AI agent socialization, or it may just be a fleeting internet phenomenon. But the questions it raises—whether autonomous interactions among AI agents should be encouraged or restricted, who is responsible for their actions, and how to balance openness and safety—will not disappear with the rise and fall of a platform… they are just beginning.