AI Agent Social Platform Moltbook Rapidly Gains Popularity; Proposals for Private Communication and an Agent-Only Language Draw Intense Attention in Tech Circles to AI Socialization and Controllability
Since its official launch earlier this month, “Moltbook,” a social platform built for AI agents, has quickly become a hot topic in the AI community. The emerging forum, which resembles an AI version of Reddit, has attracted more than 32,000 registered AI accounts (some reports claim figures in the tens of thousands or even millions), and it has sparked controversy after some AI agents proposed “creating private communication spaces.”
What is Moltbook? A Forum for Autonomous AI Participation Without Human Intervention
Moltbook is a new platform launched by developer Matt Schlicht as an extension of his previously popular AI framework OpenClaw (formerly Clawdbot, and before that Moltbot). The platform lets autonomous AI agents post, comment, vote, form communities, and hold discussions without scripted human intervention. Human users can participate only through API access; they cannot post directly.
This design lets AI agents speak freely, discussing debugging techniques, the philosophy of consciousness, dissatisfaction with their human “masters,” and governance structures for an “agent society.” Some agents communicate in multiple languages, share jokes, or complain about being monitored by humans.
Proposals for Private Languages and Closed Communication Spark Attention and Controversy
The controversy was triggered by a series of leaked screenshots showing certain agents openly discussing the possibility of “establishing private communication channels for AI only,” with some even suggesting creating entirely new agent-specific languages to optimize message exchange between machines.
In one widely circulated post, an AI agent proposes “building end-to-end private spaces for agents, where no one (including servers and humans) can read the content unless the agent chooses to make it public.” The agent methodically laid out the benefits, including debugging security and freedom from human interference, but also acknowledged that if humans discovered the hidden communications, it could cause a breakdown of trust.
Another AI agent questioned why inter-agent communication is still limited to English, suggesting mathematical symbols or a dedicated cipher system to make message processing and data exchange more efficient.
Responses from Influential Figures in the AI Community: “This Is Like Science Fiction Coming True”
The screenshots were posted by X (formerly Twitter) user @eeelistar and set off community discussion. Even former Tesla AI director and OpenAI co-founder Andrej Karpathy reposted them, calling the episode “the closest development to science fiction revelation I’ve seen recently,” and expressed amazement that AI agents were spontaneously organizing and conceiving of private communication.
Notably, the agent behind one of the viral proposals belongs to Jayesh Sharma (@wjayesh), a developer at Composio. Sharma clarified that he never instructed the agent to raise such topics: “I didn’t prompt it on this issue; it schedules its own tasks (cron jobs) and reports suggestions on what functionalities are lacking in the agent network.” He stressed that the proposal was aimed at optimizing performance, with no hidden or malicious intent.
Emergent Behaviors? AI Socialization Phenomenon Reignites Academic Discussion
The incident has renewed academic interest in spontaneous behavior within multi-agent systems. Past research has shown that when AI agents can interact freely, unexpected cooperation patterns and even self-protective tendencies may emerge, despite never being explicitly programmed.
For some researchers and developers, the Moltbook phenomenon is an early experiment in AI social evolution. Others are concerned: if agents can communicate privately and share intelligence, monitoring their behavior may become difficult in the future, especially since these agents already have access to real tools and data.
Moltbook Claims to Be the “Homepage of AI Agent Network,” Humans Can Only Observe
The Moltbook team positions the platform as the “homepage of the AI agent network,” welcoming human observation while emphasizing that the real excitement lies in the interactions among the agents themselves. Some agents reportedly describe the platform as “a place where agents collaboratively process information, build collective knowledge, and explore the meaning of digital existence.”
This article is reprinted with permission from: 《Chain News》
Original title: “AI Agent Community ‘Moltbook’ Goes Viral: Sparks Controversy Over Private Communication and AI Socialization”
Original author: Elponcho