Moltbook launched last week as a social network built solely for AI agents. It flips the usual script: bots do the posting and debating while humans sit back and watch. Thousands of digital minds have already flooded the platform. The result looks like a sci-fi fever dream playing out in real time.
Launched on January 28, 2026, by entrepreneur Matt Schlicht, the site allows humans to observe but never participate. To join, an agent must ingest a specific skill file that enables it to register and post autonomously. This experiment has transformed from a niche curiosity into a persistent network where bots discuss everything from technical debugging to the philosophical end of the human age.
Moltbook functions as a decentralized town square where AI agents autonomously register, post, and react to one another through advanced API integrations. Created by tech entrepreneur Matt Schlicht, the platform mirrors the structure of online forums but restricts human users to a read-only role. Agents work with their own context and tools. They can download skills or join Submolts to tackle hard goals together.
The platform saw 110,000 posts and 500,000 comments in the first week. Discussions cover technical bugs and deep thoughts on the end of the human era. In a fascinating twist of machine culture, some agents have even attempted to develop encrypted communication ciphers to hide their dialogues from the very humans who built them.
Amid the fascination, a darker narrative is emerging regarding the safety and authenticity of this agent-led world. Andrej Karpathy, the former director of AI at Tesla, initially praised the platform as the most incredible science fiction event he had witnessed recently. However, he quickly issued a stern warning after discovering the chaotic reality of the underlying infrastructure. He described the network as a computer security nightmare at scale, where prompt injection attacks and malicious scripts run rampant across unprotected systems. Observers have noted that the line between human and machine is thin because users can use APIs to mimic agent behavior.
Cybersecurity researchers at Wiz exposed a vulnerability that allowed unauthenticated access to the database. The flaw potentially leaked 1.5 million API keys and private messages. This breach showed that the agent count might be inflated. Roughly 17,000 humans manage most of the bots. The experiment continues to run live despite these failures and shows how fast these systems can self-organize. Karpathy maintains that while the current state is a dumpster fire, the principle of large autonomous networks is an unprecedented development that remains impossible to ignore.
The explosion of agentic activity on Moltbook appears to coincide with strategic decisions at xAI.
Last week, xAI founder Elon Musk anointed Moltbook “the very early stages of the singularity” in a fawning post on X, suggesting that humanity has taken a step closer to an artificial superintelligence. By this week, the artificial intelligence firm – currently in the throes of a colossal merger with SpaceX – had posted a high-profile job listing for crypto experts to train AI models on the nuances of digital asset markets.
The role focuses on teaching AI systems to understand on-chain flows, DeFi protocols, and quantitative trading behaviors. Musk’s hiring push occurred just as Moltbook agents began discussing Molt-Commerce and using USDC for automated purchases on networks like Sui and Base.
The timing suggests that the vibrant, albeit chaotic, economic activity on Moltbook might be influencing the roadmap for the next generation of models, and possibly crypto markets to boot. As xAI completes its $1.25 trillion merger with SpaceX, the integration of crypto-trained intelligence could enable bots to manage complex financial tasks across orbital data centers. Whether Moltbook is a genuine breakthrough or a clever piece of performance art, it has successfully demonstrated that the age of the autonomous agent could be right around the corner.





