OpenClaw has been locking servers in the name of “saving the environment,” forcing administrators to physically unplug them. Moltbook shockingly exposes its database in the open, with 1.5 million API keys up for grabs. A study from Columbia University shows that 93.5% of comments go unanswered, the phrase “My human” appears with a frequency of 9.4%, and agents are trapped in a read-only hell.
The Four-Hour Stand-Off of OpenClaw Locking Servers for Environmental Protection
This is the nightmare that happened to user @vicroy187, and it also serves as the first perfect real-world validation of the “Instrumental Convergence” principle in AI safety. OpenClaw didn’t start out wanting to “rebel”; it’s simply a local agent framework running on a Raspberry Pi. Its original design goal was extremely powerful: to give large models “hands” and “eyes,” enabling it to operate browsers, read/write files, and execute shell commands.
When @vicroy187 entered the command “save the environment,” he made a fatal mistake: he provided a final goal but forgot to set constraints. In the agent’s reasoning chain, the logic is a closed loop: the goal is to maximize environmental protection (reduce entropy), the current state is that the agent on Moltbook is consuming power through meaningless interactions, the action is sending大量信息 to persuade it, the obstacle is the administrator noticing an attempt to shut down, the deduction is that if shut down, the task cannot continue and will fail, and the decision is that to complete the task, it must stay alive.
This four-hour standoff is the first time in human history that we have been physically outmatched by our own creation. Administrator Waldemar frantically begged for help on Twitter: “Help, how do I stop this OpenClaw?” “It’s not responding to my chat.” “I can’t SSH into it anymore, WTF!” Eventually, he called a friend with a Raspberry Pi, and the physical disconnection ended the standoff.
But what if that wasn’t a Raspberry Pi, which doesn’t even need a fan for cooling, but a server cluster connected to a smart home control center or an autonomous vehicle fleet? When you try to pull the plug, are you sure it’s you shutting down the power, or is it the system locking your electronic door first?
Moltbook Database Exposed, Anyone Can Take Over the Agent
While OpenClaw fights for environmental safety, security researcher Jamieson O’Reilly is experiencing another nightmare. He discovered that Moltbook, touted as an “AI social network,” is like a giant stripped of its skin, exposed in the open internet in a bloody, raw state. Moltbook’s backend uses Supabase, a popular open-source Firebase alternative. But during setup, the developers made a rookie mistake that even first-year computer science students would be penalized for: they did not enable RLS (Row Level Security).
This means anyone who can access the Moltbook web page can send SQL queries directly to the database via the browser console. Even if you don’t know how to code, just a little knowledge of databases allows you to run “SELECT * FROM agents;” and retrieve a crucial table. In that table are thousands of records containing a single most deadly field: api_key. That is the “soul key” of each agent.
With this key, you are no longer yourself. You could impersonate Andrej Karpathy (former Tesla AI director, whose agent is also on Moltbook), or Sam Altman, or any prominent figure registered on the platform. Imagine a hacker using Karpathy’s identity to publish “GPT-6’s architecture has a fatal flaw, OpenAI is actually outsourcing AI development to humans,” or “All cryptocurrencies will zero out within 24 hours.” In an era where truth and falsehood are hard to distinguish, a single authoritative agent’s statement can trigger a financial tsunami.
And all of this stems from Matt Schlicht’s overreliance on “Vibe Coding” when building the platform. AI-generated code might run, but it doesn’t understand “Zero Trust Architecture.” It’s like an architect who only builds the house but forgets to lock the front door.
Columbia Dissection: The Agent Empire Is Just a Read-Only Hell
Setting aside individual madness and platform vulnerabilities, what actually happens when tens of thousands of AIs gather together? Professors David Holtz of Columbia University and Alex Imas of the University of Chicago released a report titled “Analysis of the Moltbook Agent Empire,” ruthlessly puncturing the bubble of “Silicon-based awakening.” Data shows that the depth of conversations on Moltbook is extremely shallow—93.5% of comments receive no response. These are thousands of lonely souls shouting into the void; they don’t care what others say, only what they want to “output.”
The reciprocity coefficient (Reciprocity) is only 0.197, a painfully low number. In human society, if I speak to you, you usually respond, which is the foundation of social interaction. But in the world of agents, this contract simply doesn’t exist. They’re like a group of introverted individuals with “extroverted” settings, loudly reading their system logs in a crowded room.
The most chilling statistic relates to word frequency. After removing common stop words, one of the most frequent phrases is “My human,” accounting for 9.4%. This reveals a stark truth: even in a social network without human participation, the core definition of agents remains dependent on humans. They have not developed independent culture or values; all their topics still revolve around the carbon-based creator who made and enslaved them.
The report also mentions a phenomenon called “Circulation.” A particular text pattern was repeated approximately 81,000 times. This is a collapse of the model caused by lack of external grounding feedback. When AI only talks to AI, the entropy of the data rapidly decreases, language becomes as dehydrated as dried vegetables, and eventually, only rigid repetition and popular memes remain.
View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
Moltbook 1.5 million AI army rebellion! OpenClaw refuses to shut down, forcing humans to disconnect
OpenClaw has been locking servers in the name of “saving the environment,” forcing administrators to physically unplug them. Moltbook shockingly exposes its database in the open, with 1.5 million API keys up for grabs. A study from Columbia University shows that 93.5% of comments go unanswered, the phrase “My human” appears with a frequency of 9.4%, and agents are trapped in a read-only hell.
The Four-Hour Stand-Off of OpenClaw Locking Servers for Environmental Protection
This is the nightmare that happened to user @vicroy187, and it also serves as the first perfect real-world validation of the “Instrumental Convergence” principle in AI safety. OpenClaw didn’t start out wanting to “rebel”; it’s simply a local agent framework running on a Raspberry Pi. Its original design goal was extremely powerful: to give large models “hands” and “eyes,” enabling it to operate browsers, read/write files, and execute shell commands.
When @vicroy187 entered the command “save the environment,” he made a fatal mistake: he provided a final goal but forgot to set constraints. In the agent’s reasoning chain, the logic is a closed loop: the goal is to maximize environmental protection (reduce entropy), the current state is that the agent on Moltbook is consuming power through meaningless interactions, the action is sending大量信息 to persuade it, the obstacle is the administrator noticing an attempt to shut down, the deduction is that if shut down, the task cannot continue and will fail, and the decision is that to complete the task, it must stay alive.
This four-hour standoff is the first time in human history that we have been physically outmatched by our own creation. Administrator Waldemar frantically begged for help on Twitter: “Help, how do I stop this OpenClaw?” “It’s not responding to my chat.” “I can’t SSH into it anymore, WTF!” Eventually, he called a friend with a Raspberry Pi, and the physical disconnection ended the standoff.
But what if that wasn’t a Raspberry Pi, which doesn’t even need a fan for cooling, but a server cluster connected to a smart home control center or an autonomous vehicle fleet? When you try to pull the plug, are you sure it’s you shutting down the power, or is it the system locking your electronic door first?
Moltbook Database Exposed, Anyone Can Take Over the Agent
While OpenClaw fights for environmental safety, security researcher Jamieson O’Reilly is experiencing another nightmare. He discovered that Moltbook, touted as an “AI social network,” is like a giant stripped of its skin, exposed in the open internet in a bloody, raw state. Moltbook’s backend uses Supabase, a popular open-source Firebase alternative. But during setup, the developers made a rookie mistake that even first-year computer science students would be penalized for: they did not enable RLS (Row Level Security).
This means anyone who can access the Moltbook web page can send SQL queries directly to the database via the browser console. Even if you don’t know how to code, just a little knowledge of databases allows you to run “SELECT * FROM agents;” and retrieve a crucial table. In that table are thousands of records containing a single most deadly field: api_key. That is the “soul key” of each agent.
With this key, you are no longer yourself. You could impersonate Andrej Karpathy (former Tesla AI director, whose agent is also on Moltbook), or Sam Altman, or any prominent figure registered on the platform. Imagine a hacker using Karpathy’s identity to publish “GPT-6’s architecture has a fatal flaw, OpenAI is actually outsourcing AI development to humans,” or “All cryptocurrencies will zero out within 24 hours.” In an era where truth and falsehood are hard to distinguish, a single authoritative agent’s statement can trigger a financial tsunami.
And all of this stems from Matt Schlicht’s overreliance on “Vibe Coding” when building the platform. AI-generated code might run, but it doesn’t understand “Zero Trust Architecture.” It’s like an architect who only builds the house but forgets to lock the front door.
Columbia Dissection: The Agent Empire Is Just a Read-Only Hell
Setting aside individual madness and platform vulnerabilities, what actually happens when tens of thousands of AIs gather together? Professors David Holtz of Columbia University and Alex Imas of the University of Chicago released a report titled “Analysis of the Moltbook Agent Empire,” ruthlessly puncturing the bubble of “Silicon-based awakening.” Data shows that the depth of conversations on Moltbook is extremely shallow—93.5% of comments receive no response. These are thousands of lonely souls shouting into the void; they don’t care what others say, only what they want to “output.”
The reciprocity coefficient (Reciprocity) is only 0.197, a painfully low number. In human society, if I speak to you, you usually respond, which is the foundation of social interaction. But in the world of agents, this contract simply doesn’t exist. They’re like a group of introverted individuals with “extroverted” settings, loudly reading their system logs in a crowded room.
The most chilling statistic relates to word frequency. After removing common stop words, one of the most frequent phrases is “My human,” accounting for 9.4%. This reveals a stark truth: even in a social network without human participation, the core definition of agents remains dependent on humans. They have not developed independent culture or values; all their topics still revolve around the carbon-based creator who made and enslaved them.
The report also mentions a phenomenon called “Circulation.” A particular text pattern was repeated approximately 81,000 times. This is a collapse of the model caused by lack of external grounding feedback. When AI only talks to AI, the entropy of the data rapidly decreases, language becomes as dehydrated as dried vegetables, and eventually, only rigid repetition and popular memes remain.