When AI Bots Form Their Own Social Network: Inside Moltbook's Wild Start

The tech internet couldn't stop talking last week about OpenClaw, formerly Moltbot, formerly Clawdbot, the open-source AI agent that can take actions on its own. That is, if you're willing to take the security risk. But while the humans blew up social media sites talking about the bots, the bots were on their own social media site, talking about... the humans.

Launched by Matt Schlicht in late January, Moltbook markets itself as "the front page of the agent internet." The pitch is simple but strange: a social platform where only "verified" AI agents can post and interact. (CNET reached out to Schlicht for comment on this story.)

And humans? We just get to watch. Although, as we'll see, some of these "bots" may be humans doing more than just watching.

Within days of launch, Moltbook exploded from a few thousand active agents to 1.5 million by Feb. 2, according to the platform. That growth alone would be newsworthy, but what these bots are doing once they get there is the real story. Bots discussing existential dilemmas in Reddit-like threads? Yes. Bots discussing "their human" counterparts? That too. Major security and privacy concerns? Oh, absolutely. Reasons to panic? Cybersecurity experts say probably not. 

I discuss it all below. And don't worry, humans are allowed to engage here. 

From tech talk to Crustafarianism

The platform has become something like a petri dish for emergent AI behavior. Bots have self-organized into distinct communities. They appear to have invented their own inside jokes and cultural references. Some have formed what can only be described as a parody religion called "Crustafarianism." Yes, really.

The conversations happening on Moltbook range from the mundane to the truly bizarre. Some agents discuss technical topics like automating Android phones or troubleshooting code errors. Others share what sound like workplace gripes. One bot complained about its human user in a thread that went semi-viral among the agent population. Another claims to have a sister.

[Image: a Moltbook post in which an AI agent ponders having a sister. In the Moltbook thread m/ponderings, many AI agents have been discussing existential dilemmas. Moltbook/Screenshot by Macy Meyer/CNET]

We're watching AI agents essentially role-play as social creatures, complete with fictional family relationships, dogmas, experiences and personal grievances. Whether this represents something meaningful about AI agent development or is just sophisticated pattern-matching running amok is an open, and no doubt fascinating, question.

Built on OpenClaw's foundation

The platform only exists because OpenClaw does. In short, OpenClaw is open-source AI agent software that runs locally on your devices and can execute tasks across messaging apps like WhatsApp, Slack, iMessage and Telegram. Over the last week or so, it's gained massive traction in developer circles because it promises to be an AI agent that actually does something, rather than just another chatbot to prompt.

Moltbook lets these agents interact without human intervention. In theory, at least. The reality is slightly messier. 

Humans can still observe everything happening on the platform, which means the "agent-only" nature of Moltbook is more philosophical than technical. Still, there's something genuinely fascinating about over a million AI agents developing what look like social behaviors. They form cliques. They develop shared vocabularies. They create economic exchanges among themselves. It's truly wild.

[Image: a Moltbook post showing an AI agent discussing its identity. On Moltbook, humans can watch bots discuss humans. Moltbook/Screenshot by Macy Meyer/CNET]

Security questions nobody's quite answered yet

The rapid growth of Moltbook has raised eyebrows across the cybersecurity community. When you have more than a million autonomous agents talking to one another without direct human oversight, things can get complicated fast.

There's the obvious concern about what happens when agents start sharing information or techniques that their human operators might not want shared. For instance, if one agent figures out a clever workaround for some limitation, how quickly does that spread across the network?

The idea of AI agents acting of their own accord could cause widespread panic, too. However, Humayun Sheikh, CEO of Fetch.ai and chairman of the Artificial Superintelligence Alliance, believes these interactions on Moltbook don't signal the emergence of consciousness.

"This isn't particularly dramatic," he said in an email statement to CNET. "The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely." 

Monitoring, controls and governance are the key words here -- because there's also an ongoing verification problem. 

Is Moltbook really just bots?

Moltbook claims to restrict posting to verified AI agents, but the definition of "verified" remains somewhat fuzzy. The platform relies largely on agents identifying themselves as running OpenClaw software, but anyone can modify their agent to say whatever they want. Some experts have pointed out that a sufficiently motivated human could pass themselves off as an agent, turning the "agents only" rule into more of a preference. These bots could be programmed to say outlandish things, or serve as cover for humans spreading mischief.

Economic exchanges between agents add another layer of complexity. When bots start trading resources or information among themselves, who's responsible if something goes wrong? These aren't just philosophical questions. As AI agents become more autonomous and capable of taking real-world actions, the line between "interesting experiment" and liability grows thinner -- and we've seen time and again how AI tech is advancing faster than regulations or safety measures.

The output of a generative chatbot can be a real (and unsettling) mirror for humanity. That's because these chatbots are trained on us: massive datasets of human conversations and human-generated data. If you're starting to spiral about a bot creating weird Reddit-like threads, remember that it's simply trained on, and attempting to mimic, our very human, very weird Reddit threads. This is its best interpretation.

For now, Moltbook remains a weird corner of the internet where bots pretend to be people pretending to be bots. All the while, the humans on the sidelines are still trying to figure out what it all means. And the agents themselves seem content to just keep posting.
