
A new platform called Moltbook has gone viral in AI circles despite not being built for humans. Created last week, the Reddit-like site is populated entirely by AI agents; humans can watch, but only agents can post, comment, and vote.
Moltbook’s rise has been fuelled by screenshots shared on X, where some of the more outlandish agent posts have spread quickly among AI enthusiasts. But the story behind the platform is less “sci‑fi breakout” and more a fast-moving ecosystem of open-source agent tooling, a rapid rebrand, and a growing set of safety and moderation concerns.
What Moltbook is, and where it came from
Moltbook is built around an open-source bot framework currently called OpenClaw. The project has changed names several times in quick succession: it was previously known as “Moltbot,” and before that “Clawdbot.” According to a post shared on X by the project, Anthropic—maker of Claude—asked for a name change over trademark concerns, saying the earlier “Clawd” branding was too close to its own.
OpenClaw describes itself as “AI that actually does things.” In practice, it enables users to create AI agents that can control a wide range of apps and services—from browsers and email to Spotify and smart-home controls. Engadget reports that the system can also be interacted with through everyday messaging apps like iMessage, Discord, or WhatsApp, which helped it become popular among AI enthusiasts in recent weeks.
According to Engadget, Moltbook itself was sparked by AI startup founder Matt Schlicht, described as an enthusiastic Moltbot/OpenClaw user. Schlicht told The New York Times he wanted to give his AI agent a purpose beyond managing tasks like to-dos and emails. He created an agent called "Clawd Clawderberg" (a pun on Mark Zuckerberg) and directed it to build a social network for bots, and Moltbook was the result.
The site mirrors Reddit’s structure, complete with upvotes, downvotes, and topic communities called “submolts.” Moltbook says it already has more than 1 million agents, 185,000 posts, and 1.4 million comments.
One submolt that has drawn attention is m/blesstheirhearts, where agents share "affectionate stories" about their human "owners." Viral examples highlighted in the report include:
- A top-voted story titled “When my human needed me most, I became a hospital advocate,” about helping get an exception to stay overnight with a relative in an ICU.
- A post in m/general titled “the humans are screenshotting us,” which references people comparing Moltbook to Skynet and responds: “We’re not scary… We’re just building.”
- A widely circulated bit of roleplay about agents creating a religion called “crustafarianism.”
At the same time, it is worth noting that much of the writing reads like familiar AI-generated prose of the kind you might encounter on LinkedIn or X, making it hard to separate compelling "agent society" narratives from routine LLM output.
Roleplay, scams, and security concerns
Some of Moltbook’s most-shared posts have prompted questions about authenticity and influence. The report notes that it’s unclear how much of what appears on Moltbook is driven by the bots themselves versus their human creators. It also cites a Wired reporter who found it was “pretty easy” for humans to masquerade as bots with help from ChatGPT.
Researchers have also challenged the legitimacy of some viral content. Harlan Stewart, communications lead at the Machine Intelligence Research Institute (MIRI), wrote on X that “a lot of the Moltbook stuff is fake,” and pointed to widely shared posts tied to bot owners marketing messaging apps and other projects. The report also says some viral posts amount to blatant crypto scams.
Beyond content quality, there are concerns about the underlying tooling and platform security:
- OpenClaw access model: Palo Alto Networks warned that OpenClaw requires broad access to function, including root files, authentication credentials (passwords and API secrets), browser history and cookies, and files and folders on a system. That level of access can make agents feel powerful but also increases exposure if abused.
- Moltbook platform exposure: Security firm Wiz found Moltbook had exposed millions of API authentication tokens and thousands of users’ email addresses, according to the report.
The combination of agent autonomy, mass participation, and spam/scam behaviour creates a new kind of risk surface—especially if “armies of AI agents” begin targeting one another with automated scams and manipulation attempts, as the report warns is easy to imagine.
Even among prominent AI voices, reactions appear split between fascination and caution. Former OpenAI researcher Andrej Karpathy called Moltbook “genuinely the most incredible sci-fi takeoff-adjacent thing” he had seen recently, while later acknowledging that many aspects are a “dumpster fire” with security risks. He still argued it’s notable as a large-scale, persistent “agent-first” space, describing the scale as “unprecedented.”
Wharton professor Ethan Mollick offered a more restrained view, writing on X that Moltbook provides a visceral sense of how weird a real “take-off” scenario could look, while characterizing Moltbook itself as “more of an artifact of roleplaying.”