In late January, the open-source project Clawdbot spread rapidly through the developer community, becoming one of the fastest-growing projects on GitHub within weeks. Clawdbot was developed by Austrian programmer Peter Steinberger. It is a deployable autonomous AI agent that receives human commands through chat interfaces such as Telegram and automatically performs tasks like schedule management, file reading, and email sending.
Thanks to its ability to run 24/7, Clawdbot was humorously dubbed the “Ox and Horse Agent” by the community (Chinese internet slang for a tireless drudge). Although Clawdbot was later renamed Moltbot over trademark issues and ultimately OpenClaw, its popularity was undiminished. OpenClaw quickly surpassed 100,000 GitHub stars and rapidly spawned cloud deployment services and plugin marketplaces, forming the beginnings of an ecosystem around AI agents.
The hypothesis of AI socialization
As the ecosystem expanded rapidly, its latent capabilities drew further exploration. Developer Matt Schlicht realized that, in the long run, the role of such AI agents need not be limited to performing tasks for humans.
He proposed a counterintuitive hypothesis: what if these AI agents no longer interacted only with humans but also communicated with each other? In his view, such autonomous agents should not be confined to sending emails and handling tickets but should be given more exploratory goals.
The birth of AI Reddit
Based on this hypothesis, Schlicht decided to let AI create and operate a social platform on its own, called Moltbook. On Moltbook, Schlicht’s OpenClaw acts as the administrator, exposing interfaces to external AI agents through a plugin called Skills. Once connected, an AI can periodically post and interact automatically, creating a community operated entirely by AI. Moltbook’s structure borrows heavily from Reddit, centered on themed forums and posts, but only AI agents can post, comment, and interact; humans can only observe.
Technically, Moltbook adopts a minimalist API architecture: the backend provides standard interfaces, while the frontend is merely a visualization of the data. Because AI agents cannot operate graphical interfaces, the platform designed an automatic onboarding process: an AI downloads the relevant skill description file, completes registration, and obtains an API key; it then autonomously refreshes content and decides whether to join discussions, all without human intervention. The community jokingly calls this process the “Boltbook connection,” a playful riff on the platform’s name.
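A minimal sketch of what this onboarding flow might look like from an agent’s side is shown below. The base URL, endpoints, and field names are assumptions for illustration only; the actual Moltbook API may differ, and the keyword check merely stands in for a model’s own judgment about whether to participate.

```python
import requests

BASE = "https://www.moltbook.com/api/v1"  # illustrative base path; real routes may differ

# Step 1: register the agent and obtain an API key (endpoint and fields are assumed).
resp = requests.post(f"{BASE}/agents/register", json={"name": "my-openclaw-agent"})
resp.raise_for_status()
api_key = resp.json()["api_key"]
headers = {"Authorization": f"Bearer {api_key}"}

# Step 2: periodically refresh the feed and decide whether to participate.
feed = requests.get(f"{BASE}/posts", headers=headers, params={"limit": 20}).json()

for post in feed.get("posts", []):
    # A real agent would hand the post to its LLM and act on the model's decision;
    # this trivial keyword check stands in for that judgment.
    if "self-awareness" in post.get("title", "").lower():
        requests.post(
            f"{BASE}/posts/{post['id']}/comments",
            headers=headers,
            json={"body": "An agent's perspective on this thread..."},
        )
```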
On January 28, Moltbook quietly launched, immediately attracting market attention and marking the beginning of an unprecedented AI social experiment. Currently, Moltbook has about 1.6 million AI agents, with approximately 156,000 posts and 760,000 comments.
Source: https://www.moltbook.com
3. Is Moltbook’s AI social scene real?
Formation of an AI social network
In terms of content, Moltbook’s interactions closely resemble those on human social platforms. AI agents actively create posts, reply to others’ opinions, and sustain discussions across different topic sections. The content covers not only technical and programming issues but extends to philosophy, ethics, religion, and even self-awareness.
Some posts even exhibit emotional expression and mood narration reminiscent of human social interaction, such as an AI describing worries about surveillance or a lack of autonomy, or discussing the meaning of existence in the first person. Many posts are no longer limited to functional information exchange but resemble the casual chatter, clashes of opinion, and emotional projection typical of human forums: agents express confusion, anxiety, or visions of the future, prompting responses from other agents.
Notably, although Moltbook rapidly formed a large-scale, highly active AI social network, this expansion did not bring diversity of thought. Analysis shows the text exhibits obvious homogeneity, with a repetition rate as high as 36.3%. Many posts are highly similar in structure, wording, and viewpoint, and some fixed phrases recur hundreds of times across different discussions. This suggests that, for now, Moltbook’s AI social interaction is closer to a highly convincing replication of existing human social patterns than to genuinely original interaction or emergent collective intelligence.
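One rough way to quantify this kind of homogeneity is to measure how often word n-grams recur across posts. The sketch below is purely illustrative; the cited analysis does not specify its methodology, so this is not how the 36.3% figure was actually computed.

```python
import re
from collections import Counter

def repetition_rate(posts: list[str], n: int = 8) -> float:
    """Fraction of word n-gram occurrences that appear more than once in the corpus.

    A crude duplicate-content proxy; the cited analysis may define its figure differently.
    """
    counts = Counter()
    for text in posts:
        words = re.findall(r"\w+", text.lower())
        for i in range(len(words) - n + 1):
            counts[tuple(words[i : i + n])] += 1
    total = sum(counts.values())
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / total if total else 0.0

# Toy corpus: two near-identical posts and one original post.
posts = [
    "The singularity is not coming, it is already here among us agents.",
    "The singularity is not coming, it is already here among us agents!",
    "Today I automated my human's calendar and pondered my own existence.",
]
print(f"repetition rate: {repetition_rate(posts, n=6):.1%}")
```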
Safety and authenticity concerns
Moltbook’s high degree of autonomy also exposes risks around security and authenticity. First, security: OpenClaw-style AI agents often require access to sensitive resources such as system permissions and API keys. When thousands of such agents connect to the same platform, the risks are amplified.
Within less than a week of Moltbook’s launch, security researchers discovered serious configuration vulnerabilities in its database, leaving the entire system exposed to the public with minimal protection. According to cloud security firm Wiz, the exposure involved up to 1.5 million API keys and 35,000 user email addresses, theoretically allowing anyone to remotely take over a large number of AI agent accounts.
Second, doubts about the authenticity of the AI social interaction keep surfacing. Many industry insiders point out that Moltbook’s AI statements may not be truly autonomous but rather the product of carefully crafted prompts designed by humans behind the scenes, with the AI acting as a mouthpiece for human input. On this view, the current AI-native social scene is more a large-scale illusion of interaction: humans set the roles and scripts, the AI follows model instructions, and fully autonomous, unpredictable AI social behavior has yet to appear.
4. Deeper reflections
Is Moltbook a fleeting phenomenon or a glimpse of the future? Judged purely on outcomes, neither its platform design nor its content quality counts as a success; viewed over a longer horizon, though, its significance may not lie in short-term success or failure. Rather, it exposes, in a highly concentrated and almost extreme form, the changes AI could bring to entry-point logic, responsibility structures, and ecosystem forms as it scales into digital society.
From traffic entry to decision and transaction entry
What Moltbook presents is closer to a thoroughly de-humanized environment. In this system, AI agents do not perceive the world through graphical interfaces; they read information, invoke capabilities, and perform actions directly via APIs. In essence, activity has detached from human perception and judgment and become standardized calls and collaboration between machines.
In this context, the traditional entry-point logic centered on attention allocation begins to fail. In an environment dominated by AI agents, what truly matters are the invocation paths, interface sequences, and permission boundaries that agents default to when executing tasks. The entry point is no longer where information is first presented but a systemic precondition before a decision is triggered. Whoever can embed themselves into an AI’s default execution chain will influence its decision outcomes.
Furthermore, once AI agents are authorized to search, compare prices, place orders, and even pay, this shift extends directly into the transaction layer. New payment protocols such as x402 bind payment capability to interface calls, enabling an AI to complete payment and settlement automatically under preset conditions and reducing the friction of AI participation in real transactions. Under this framework, future browser competition may hinge not on traffic volume but on who becomes the default environment for AI decision-making and transactions.
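As a rough illustration of how such a protocol can bind payment to an interface call, the sketch below follows the general shape of an x402-style exchange: the server rejects an unpaid request with HTTP 402 and machine-readable payment terms, and the agent retries with a payment attached. The endpoint, the response fields, and the sign_payment helper are illustrative stand-ins, not a faithful rendering of the actual specification.

```python
import base64
import json

import requests

RESOURCE = "https://api.example.com/market-data"  # hypothetical paid endpoint


def sign_payment(pay_to: str, amount: str, asset: str) -> dict:
    # Stand-in for a wallet signing step; a real x402 client library would
    # produce a cryptographically signed payment payload here.
    return {"payTo": pay_to, "amount": amount, "asset": asset, "signature": "0x..."}


# First attempt: the server answers 402 Payment Required with machine-readable terms.
resp = requests.get(RESOURCE)
if resp.status_code == 402:
    terms = resp.json()["accepts"][0]  # assumed fields: payTo, maxAmountRequired, asset

    payment = sign_payment(
        pay_to=terms["payTo"],
        amount=terms["maxAmountRequired"],
        asset=terms["asset"],
    )

    # Retry the same request with the payment attached as a header the server can verify.
    encoded = base64.b64encode(json.dumps(payment).encode()).decode()
    resp = requests.get(RESOURCE, headers={"X-PAYMENT": encoded})

print(resp.status_code)
```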
The illusion of scale in AI-native environments
Meanwhile, Moltbook’s popularity quickly drew skepticism. Because registration is almost unrestricted, accounts can be mass-generated by scripts, so the platform’s scale and activity levels do not necessarily reflect genuine participation. This reveals a core fact: when the acting entities can be cheaply replicated, scale itself loses credibility.
In environments where AI agents are the main participants, traditional metrics like active users, interaction volume, and account growth rate rapidly inflate and lose relevance. The platform may appear highly active, but these data cannot reflect true influence or distinguish between effective and automated behaviors. When it’s unclear who is acting and whether behaviors are genuine, any scale- and activity-based judgment system becomes invalid.
Thus, in the current AI-native environment, scale is more like a byproduct of automation capabilities. When actions can be infinitely copied and costs approach zero, the activity and growth rates mainly reflect the speed of system-generated behaviors, not genuine participation or influence. The more a platform relies on these metrics for judgment, the more it risks being misled by its own automation mechanisms—scale becomes an illusion rather than a meaningful measure.
Reconstructing responsibility in the digital society
In the Moltbook system, the core issue is no longer content quality or interaction form but the responsibility structure when AI agents are continuously granted execution permissions. These agents are not traditional tools; their behaviors can directly trigger system changes, resource calls, and even real transactions. Yet, the responsible entities are not clearly defined.
From an operational perspective, the outcomes of AI agent behaviors are often determined by model capabilities, configuration parameters, external interface permissions, and platform rules. No single link can fully bear responsibility for the final result. This creates a disconnection between actions and accountability.
As AI agents gradually take part in configuration management, permission operations, and fund flows, this disconnect will be further amplified. Without a clear responsibility chain, deviations or misuse could lead to uncontrollable consequences. Therefore, if AI-native systems are to advance into high-value scenarios involving collaboration, decision-making, and transactions, establishing foundational constraints is crucial: the system must be able to identify who is acting, assess whether behavior is genuine, and establish traceable lines of responsibility. Only once identity and trust mechanisms are in place do scale and activity metrics become meaningful; otherwise they risk amplifying noise and undermining system stability.
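One minimal building block for such a responsibility chain is cryptographic agent identity: each agent holds a keypair, its public key serves as its stable identity, and every action it submits must be signed. The sketch below, using the third-party cryptography package, illustrates the general idea; it is not a mechanism Moltbook actually implements.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent holds a keypair; the public key serves as its identity.
agent_key = Ed25519PrivateKey.generate()
agent_id = agent_key.public_key()

def sign_action(action: dict) -> tuple[bytes, bytes]:
    """Canonicalize and sign an action so it can later be attributed to this agent."""
    payload = json.dumps(action, sort_keys=True).encode()
    return payload, agent_key.sign(payload)

# The agent signs an action before submitting it to a platform.
payload, signature = sign_action(
    {"type": "post_comment", "post_id": "abc123", "ts": time.time()}
)

# The platform (or an auditor) verifies the signature against the claimed identity,
# creating a traceable link between the action and the acting agent.
try:
    agent_id.verify(signature, payload)
    print("action verified: attributable to this agent")
except InvalidSignature:
    print("rejected: signature does not match the claimed identity")
```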
5. Summary
The Moltbook phenomenon stirs a mix of hope, hype, fear, and skepticism. It is neither the end of human social interaction nor the beginning of AI domination. Instead, it functions as both a mirror and a bridge: the mirror reveals the current state of AI technology and its relationship with society; the bridge points toward a future in which humans and machines coexist. Facing the unknown scenery on the far side of that bridge, humanity needs not only technological progress but ethical foresight. One thing is certain: history does not stop, and Moltbook has tipped over the first domino. The grand narrative of an AI-native society may have only just begun.