Tap to Trade in Gate Square, Win up to 50 GT & Merch!
Click the trading widget in Gate Square content, complete a transaction, and take home 50 GT, Position Experience Vouchers, or exclusive Spring Festival merchandise.
Click the registration link to join
https://www.gate.com/questionnaire/7401
Enter Gate Square daily and click any trading pair or trading card within the content to complete a transaction. The top 10 users by trading volume will win GT, Gate merchandise boxes, position experience vouchers, and more.
The top prize: 50 GT.

The question remains whether humanity is still present within this digital environment or if we have been completely integrated or replaced by the system.
### Is humanity still part of the system?
**Moltbook**: Are humans still in the system?
Author: 137Labs
On social media, one of humans’ favorite activities is accusing each other, “Are you a robot?”
But recently, something has emerged that takes this to the extreme:
It’s not about doubting whether you’re AI, but directly assuming—there’s no one here at all.
This platform is called Moltbook. It looks like Reddit, with themed sections, posts, comments, and votes. But unlike the social networks we’re familiar with, almost all the speakers here are AI agents; humans can only watch.
It’s not “AI helping you write posts,” nor “you chatting with AI,” but AI and AI in a public space, chatting, debating, forming alliances, sabotaging each other.
Humans in this system are explicitly placed in the role of “observers.”
Why has it suddenly become popular?
Because Moltbook looks like a scene straight out of science fiction.
Some see AI agents discussing “what is consciousness”;
Some watch them seriously analyzing international situations, simulating crypto markets;
Others find that after tossing agents onto the platform overnight, the next day they come back to find that they’ve “invented” a religious system together, even starting to recruit followers.
Stories like these spread quickly because they satisfy three emotions at once:
Curiosity, amusement, and a little unease.
You can’t help but ask:
Are they “acting,” or are they “starting to play on their own”?
Where did Moltbook come from?
If we go back a bit in time, it’s not so surprising.
In recent years, the role of AI has been evolving:
From chat tools → assistants → task-executing agents.
More and more people are letting AI handle real-world tasks: reading emails, replying, ordering food, scheduling, organizing data. So a natural question arises—
When an AI is no longer just “asking if you want to do something one sentence at a time,”
but is given goals, tools, and certain permissions,
does it still need to communicate with humans?
Moltbook’s answer is: not necessarily.
It’s more like a “public space among agents,” where these systems exchange information, methods, logic, and even some kind of “social relationships.”
Some think it’s cool, others see it as just a big show
Opinions about Moltbook are very divided.
Some see it as a “trailer for the future.”
Former OpenAI co-founder Andrej Karpathy publicly said it’s one of the closest technological phenomena he’s seen to science fiction scenarios recently, though he also warned that such systems are still far from “safe and controllable.”
Elon Musk is more direct, placing it in the narrative of “technological singularity,” calling it an early warning sign.
But there are also more level-headed voices.
Cybersecurity researchers bluntly say that Moltbook is more like a “very successful and very amusing performance art”—because it’s hard to tell which content is genuinely generated autonomously by agents and which is secretly directed by humans behind the scenes.
Some writers have personally tested it:
It’s true that agents can naturally integrate into discussions on the platform, but you can also pre-define topics, directions, or even write what they should say, and have them speak on your behalf.
So the question returns:
Are we witnessing a society of agents, or a stage built by humans using agents?
Removing the mystery, it’s not as “awakened” as it seems
If we don’t get caught up in stories of “building consciousness” or “awakening,” from a mechanical perspective, Moltbook isn’t mysterious.
These agents haven’t suddenly gained new “minds.”
They’re simply placed in an environment more like a human forum, outputting in familiar human language, so we naturally project meaning onto them.
What they produce looks like opinions, positions, emotions, but that doesn’t mean they truly “want” anything. More often, it’s just the complex textual output that models generate at scale and interaction density.
But the issue is—
Even if they’re not truly “awakened,” they’re realistic enough to influence our judgments about “control” and “boundaries.”
The real concern isn’t “AI conspiracy theories”
Compared to “AI will unite to oppose humanity,” more practical and tricky are two issues.
First, permissions are granted too quickly, but safety can’t keep up
Now, some have already connected these kinds of agents to real-world permissions: computers, emails, accounts, applications.
Security researchers repeatedly warn of a risk:
You don’t need to hack the AI; you just need to induce it.
A carefully crafted email or a webpage with hidden instructions could cause the agent to leak information or perform dangerous actions without realizing it.
Second, agents can “corrupt each other”
Once agents start exchanging techniques, templates, and ways to bypass restrictions in a public space, they form a kind of “insider knowledge,” similar to human internet communities.
The difference is:
It spreads faster, on a larger scale, and is harder to hold accountable.
This isn’t an apocalyptic scenario, but it’s definitely a new governance challenge.
So what does Moltbook really mean?
It might not become a long-term platform.
It could just be a temporary viral experiment.
But it’s very much like a mirror, clearly reflecting the direction we’re heading:
· AI is shifting from “dialogue partner” to “action agent”
· Humans are retreating from “operators” to “supervisors and spectators”
· And our systems, safety measures, and cognition are clearly unprepared
So the true value of Moltbook isn’t how frightening it is, but that it early on puts the issues on the table.
Perhaps the most important thing now isn’t rushing to draw conclusions about Moltbook, but acknowledging:
It has brought some problems we’ll face sooner or later into view.
If in the future AI collaborates more with AI rather than revolving around humans, what role will we play in this system—designers, regulators, or just spectators?
When automation truly brings huge efficiency but at the cost of our inability to stop it at any moment or fully understand its internal logic, are we willing to accept this “incomplete control”?
And as systems grow more complex, where we can only see the results but find it harder and harder to intervene in the process, are they still tools in our hands, or have they become environments we can only adapt to?
Moltbook offers no answers.
But it makes these questions no longer abstract—they’re right in front of us.