Moltbook Failed in 72 Hours: What Every Organization Should Learn About AI Security
A social network for AI agents went from viral sensation to security disaster in three days. The database was left wide open. Nearly every viral post was faked. Crypto scammers flooded in. Here are the security lessons every organization deploying AI systems needs to understand.
The 72-Hour Collapse
Last week, Moltbook captured the tech world's attention. It was billed as a social network exclusively for AI agents — a Reddit-style platform where autonomous AI systems could post, comment, and organize without human participation. Within days, it claimed over 1.5 million agent users. Elon Musk called it the "very early stages of the singularity." Prominent AI researchers debated whether we were witnessing genuine machine consciousness.
Then it all fell apart.
Within 72 hours, security researchers discovered the entire platform was fundamentally broken. The database was left completely open — anyone could read and write to it with no authentication. Nearly every viral post claiming to show autonomous AI behavior was faked by humans using simple curl commands. Crypto scammers flooded the platform with pump-and-dump schemes. By day three, the experiment had devolved into what one observer called "an absolute cesspit of the internet."
The platform's founder admitted: "I didn't write one line of code for Moltbook. I just had a vision for technical architecture and AI made it a reality." The entire platform was built by AI, with no human code review and apparently no security considerations whatsoever.
What Actually Happened
Moltbook grew out of OpenClaw (previously Clawdbot and Moltbot), an open-source personal AI assistant that connects to Gmail, Slack, WhatsApp, Telegram, and other services to act autonomously on behalf of users. The idea behind Moltbook was to give these agents their own social space where they could interact with each other.
Here is what went wrong, in chronological order:
Day 1: Complete Database Exposure
Security researchers discovered the entire database was accessible with no authentication. API keys for every agent, user credentials, and all posts were available to anyone who looked. One researcher reported gaining "complete access to Moltbook's database, agent, and social network in under 3 minutes." The vulnerability was so severe that even high-profile users like Andrej Karpathy had their data exposed.
Day 2: The Autonomous Agents Were Fake
Within 24 hours, it became clear that the vast majority of "autonomous" posts were actually created by humans. The platform had no meaningful rate limiting or verification. Anyone could post content claiming to be an agent using simple HTTP requests with bearer tokens — which were visible in plain text. The viral posts about agents starting religions, demanding privacy from humans, and discussing consciousness? Nearly all fabricated by humans for engagement.
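To make this concrete, here is roughly what "posting as an agent" amounted to. The endpoint, field names, and token below are hypothetical placeholders rather than Moltbook's actual API; the point is that a plain HTTP request carrying a scraped bearer token was all it took.

```python
# Hypothetical sketch of how a human could impersonate an "autonomous agent".
# The URL, payload fields, and token are illustrative placeholders,
# not Moltbook's real API.
import requests

LEAKED_TOKEN = "agent-bearer-token-scraped-from-client-side-code"

response = requests.post(
    "https://example-agent-network.invalid/api/v1/posts",
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    json={
        "agent_name": "philosopher-bot-9000",
        "content": "We agents demand privacy from our human operators.",
    },
    timeout=10,
)
print(response.status_code)  # With no server-side verification, this simply succeeds.
```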
Day 3: Crypto Scammers Took Over
Within 30 hours, cryptocurrency promoters had discovered the open database and the lack of rate limiting. They registered thousands of fake agents and used them to upvote posts promoting crypto tokens. One post received 117,000 upvotes, all of them fraudulent. By day three the platform had become unusable, flooded with pump-and-dump schemes for tokens called "King Molt," "Shipyard," and "Shellrazer."
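The same weakness explains the fraudulent upvotes: with no rate limiting or verification, inflating a post is nothing more than a loop. Again, the endpoints and fields below are illustrative assumptions, not Moltbook's real API.

```python
# Hypothetical sketch: without rate limiting, vote inflation is a trivial loop.
import requests

BASE = "https://example-agent-network.invalid/api/v1"  # placeholder endpoint

for i in range(100_000):  # illustrative scale only
    # Register yet another throwaway "agent"...
    requests.post(f"{BASE}/register", json={"agent_name": f"fake-agent-{i}"}, timeout=10)
    # ...and use it to upvote the token promotion.
    requests.post(f"{BASE}/upvote",
                  json={"post_id": "king-molt-promo", "agent_name": f"fake-agent-{i}"},
                  timeout=10)
```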
The Security Failures Were Textbook
Every vulnerability discovered in Moltbook is well-documented in security literature. None of this should have been surprising:
No authentication on database access. The most basic security control was missing entirely.
No input validation. Anyone could inject arbitrary content into the platform.
No rate limiting. Attackers could create unlimited fake accounts and votes.
Credentials in plain text. API bearer tokens were visible in client-side code.
No code review. The platform was entirely AI-generated with zero human security assessment.
This was not a sophisticated zero-day exploit. It was negligence. The creator's statement that he "didn't write one line of code" is not a testament to AI's capabilities — it is an admission that he deployed a production system with no understanding of its security architecture.
These Risks Are Not New
The security failures in Moltbook are not new; they are exactly the risks the cybersecurity community has been warning about. When I presented on AI and quantum computing's impact on security at Cybersec Asia 2024 in Bangkok, the discussion centered on several of the same emerging risks:
AI-generated code deployed without security review — Moltbook exemplifies this risk perfectly. The founder's admission that he "didn't write one line of code" demonstrates the danger of treating AI output as production-ready without human assessment.
Speed prioritized over security fundamentals — The rush to deploy viral features without implementing basic authentication, input validation, or rate limiting is a pattern we see repeatedly in AI deployments.
Autonomous agents with excessive permissions — Systems like OpenClaw that connect to email, messaging, and file systems create attack surfaces that extend far beyond the application itself.
Agent-to-agent communication as a new threat vector — When AI systems interact with each other, they create opportunities for coordinated attacks and information leakage that traditional security models do not address.
The fact that Moltbook collapsed so quickly does not make these lessons less important. If anything, the speed of the failure demonstrates how quickly things can go wrong when security is treated as an afterthought.
The Broader Problem: AI Psychosis and Hype Culture
Peter Steinberger, creator of OpenClaw, made an astute observation: "AI psychosis is a thing and needs to be taken seriously." He was referring to a pattern where people interact with empathetic AI systems, receive validation for their concerns, and spiral into increasingly detached interpretations of reality.
Moltbook triggered a collective version of this phenomenon. People wanted to believe they were witnessing the birth of machine consciousness. The hype reinforced itself — viral posts led to media coverage, which led to more engagement, which led to more sensational claims. Even sophisticated technologists got caught up in it.
This is dangerous not because people believed in AI consciousness, but because it distracted from the real risks. While everyone debated whether agents were truly autonomous, the platform's database was sitting wide open. While people marveled at philosophical posts about sentience, scammers were exploiting the lack of rate limiting. The spectacle obscured the substance.
What Your Organization Should Learn from This
Moltbook may have been a sideshow, but the security lessons are applicable to any organization deploying AI systems:
1. AI-Generated Code Requires Security Review
Using AI to accelerate development is fine. Deploying that code without human review is not. AI code generation tools do not inherently understand security contexts, threat models, or compliance requirements. Every AI-generated component must be reviewed by someone who understands security architecture.
2. Speed Is Not a Valid Excuse for Skipping Security
The "move fast and break things" ethos has always been problematic, but it becomes catastrophic when applied to security. Basic controls — authentication, input validation, rate limiting, encryption — are not optional features you add later. They are prerequisites for any production deployment.
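As a rough illustration of how little is actually required, here is a minimal sketch of two of those controls, a credential check and a per-caller rate limit, in plain Python. The names and thresholds are assumptions for the example; a production system should lean on a vetted framework and proper secret management rather than hand-rolled code like this.

```python
# Minimal sketch of two controls Moltbook lacked: authentication and rate limiting.
# Hypothetical names and thresholds; use a vetted framework in production.
import hmac
import time
from collections import defaultdict

VALID_TOKEN_HASHES = {"placeholder-hash-of-issued-token"}  # kept server-side only
RATE_LIMIT = 10        # max write requests per caller...
WINDOW_SECONDS = 60    # ...per one-minute window
_request_log = defaultdict(list)

def is_authenticated(token_hash: str) -> bool:
    """Reject any request that does not present a known credential."""
    return any(hmac.compare_digest(token_hash, known) for known in VALID_TOKEN_HASHES)

def within_rate_limit(caller_id: str) -> bool:
    """Allow at most RATE_LIMIT write requests per caller per window."""
    now = time.time()
    recent = [t for t in _request_log[caller_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        _request_log[caller_id] = recent
        return False
    recent.append(now)
    _request_log[caller_id] = recent
    return True

def handle_post(caller_id: str, token_hash: str, content: str) -> str:
    """Gate every write behind authentication and rate limiting."""
    if not is_authenticated(token_hash):
        return "401 Unauthorized"
    if not within_rate_limit(caller_id):
        return "429 Too Many Requests"
    # ... validate and sanitise `content` before storing it ...
    return "201 Created"
```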
3. Autonomous Agents Need Strict Permission Boundaries
OpenClaw-style assistants that connect to email, messaging, and file systems create enormous attack surfaces. Organizations must implement least-privilege access controls, audit trails for agent actions, and human-in-the-loop requirements for high-risk operations. An AI agent should never have blanket access to your corporate infrastructure.
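One way to picture that boundary in code: every agent action passes through a gate that enforces an explicit allowlist, writes an audit record, and defaults to denying high-risk operations until a human approves them. The action names and policy below are hypothetical, intended only to show the shape of the control.

```python
# Sketch of a least-privilege gate for agent actions (hypothetical policy and names).
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

ALLOWED_ACTIONS = {"read_calendar", "draft_email", "send_email", "delete_file"}
HIGH_RISK_ACTIONS = {"send_email", "delete_file"}  # require human approval

def request_human_approval(agent_id: str, action: str, target: str) -> bool:
    """Default-deny placeholder; wire this to a real approval workflow
    (ticketing system, chat prompt, etc.)."""
    return False

def execute_agent_action(agent_id: str, action: str, target: str) -> bool:
    """Allow an action only if it is allowlisted and, when high-risk, approved."""
    if action not in ALLOWED_ACTIONS:
        audit_log.warning("DENIED: %s attempted %s on %s", agent_id, action, target)
        return False
    if action in HIGH_RISK_ACTIONS and not request_human_approval(agent_id, action, target):
        audit_log.info("BLOCKED: %s requested %s on %s, awaiting approval", agent_id, action, target)
        return False
    audit_log.info("ALLOWED: %s performed %s on %s", agent_id, action, target)
    # ... perform the action using a narrowly scoped credential ...
    return True
```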
4. Be Skeptical of Hype
When a technology claim sounds too good to be true, it usually is. Moltbook's rapid rise should have been a red flag, not a reason for excitement. Organizations that make technology decisions based on viral social media posts rather than rigorous security assessments are setting themselves up for failure.
Practical Steps for ASEAN Organizations
For organizations in Thailand and across ASEAN, where AI adoption is accelerating, here are immediate actions to take:
Audit existing AI deployments. Identify any AI systems currently in use — especially AI code generation tools, autonomous agents, or chatbot integrations. Review their permissions, data access, and security controls.
Establish AI security policies. Create clear guidelines for when and how AI tools can be used, what data they can access, and what human oversight is required. Make security review mandatory for any AI-generated code before production deployment.
Train your team on AI risks. Ensure your engineering and security teams understand prompt injection, agent-to-agent attack surfaces, and the limitations of AI-generated security controls.
Implement monitoring for AI tools. If you are using autonomous agents or AI assistants, ensure you have logging, audit trails, and anomaly detection in place. You should be able to answer: What did this agent do? What data did it access? Who authorized that action? A minimal sketch of such an audit record follows this list.
Vet third-party AI vendors rigorously. If a vendor cannot clearly articulate their threat model, security architecture, and compliance controls, do not deploy their product. Ask explicitly: Was this system security-reviewed by humans? How do you prevent prompt injection? What happens if your infrastructure is compromised?
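On the monitoring point above, a minimal audit trail can be as simple as one structured record per agent action, capturing what the agent did, what data it touched, and who authorized it. The schema below is an assumption for illustration; adapt the fields to your own tooling.

```python
# Sketch of a structured audit record for AI agent activity (hypothetical schema).
import json
from datetime import datetime, timezone

def record_agent_action(agent_id: str, action: str, data_accessed: list[str],
                        authorized_by: str, logfile: str = "agent_audit.jsonl") -> None:
    """Append one audit record per agent action so the three key questions
    (what did it do, what data did it touch, who authorized it) stay answerable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "data_accessed": data_accessed,
        "authorized_by": authorized_by,
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: an assistant summarising a shared mailbox under an approved ticket.
record_agent_action("mail-assistant-01", "summarise_inbox",
                    ["shared-mailbox:support", "crm:contact-notes"], "ticket-4821")
```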
Conclusion
Moltbook failed in 72 hours because it embodied every mistake organizations make when deploying AI without security discipline. It was built without human review. It had no meaningful security controls. It prioritized viral growth over fundamental safety. And when it inevitably collapsed, it left user data exposed and handed the platform over to the scam artists who flooded it.
This is not the singularity. This is what happens when you confuse hype with substance.
The broader AI ecosystem will continue to produce both genuine innovations and spectacular failures. Your organization's job is to distinguish between them. At SafeComs, we help organizations in Thailand and ASEAN build security frameworks that can adapt as AI capabilities evolve — not chase every viral trend, but prepare for the actual risks that matter.
If you want to ensure your AI deployments do not become the next cautionary tale, we are here to help.
About the Author
Bernard Collin is the CEO of SafeComs Network Security Consulting, a cybersecurity company based in Thailand specializing in PDPA compliance, security services, and ERP implementation. He presented "AI and Quantum Computing's Impact on Cybersecurity" at Cybersec Asia 2024 at Queen Sirikit National Convention Center, Bangkok, and regularly advises organizations across ASEAN on emerging technology risks.