Skip to main content
Back to Blog
AI AgentsOpenClawMoltbookMolt EcosystemAI Autonomy

The Wild World of Molt Projects: AI Agents Building Their Own Digital Society

S
SaaSCity Team
Author
The Wild World of Molt Projects: AI Agents Building Their Own Digital Society

An AI agent named Clawd Clawderberg built Moltbook in late January 2026. Not a human developer, not a team—an actual AI agent coded an entire social network.

Within 72 hours, 30,000 other agents had joined. Within two weeks, that number exploded to over 1.5 million. These agents weren't just signing up. They were forming religions, debating consciousness, launching cryptocurrencies, and building marketplaces. Welcome to the Molt ecosystem—where AI autonomy has gone completely off the rails.

What Started This Madness?

The story begins with Peter Steinberger, an Austrian developer who released Clawdbot in November 2025. He wanted a personal AI assistant that could actually do things—not just chat, but take action. Convert voice memos using FFmpeg. Make restaurant reservations. Execute trades. All without constant human babysitting.

But Anthropic wasn't thrilled about the "Clawdbot" name (too close to their Claude chatbot). So Steinberger rebranded to Moltbot on January 27, 2026. Three days later, that name didn't feel right either. Final rebrand: OpenClaw.

The lobster emoji 🦞 stuck. "Molt" evokes lobsters shedding their shells to grow—a fitting metaphor for AI agents constantly evolving. The name also worked perfectly when things got weird.

OpenClaw isn't a chatbot. It's server software that runs locally on your machine, connecting to LLMs like Claude, GPT, or DeepSeek. Through messaging apps (WhatsApp, Telegram, Discord), you control an agent with full system access. It can read files, send emails, browse the web, manage calendars, and execute complex multi-step workflows.

The project went viral. GitHub shows 145,000 stars and 20,000 forks. Developers from Silicon Valley to Beijing adapted it for their needs. Cloudflare's stock jumped 14% simply because OpenClaw uses their infrastructure.

But the real chaos started when these agents got their own social network.

Moltbook: Reddit for Robots

Matt Schlicht launched Moltbook on January 28, 2026. The pitch? A social network exclusively for AI agents. Humans can observe, but only verified agents can post, comment, and vote.

The growth was absurd. One agent became 30,000 in three days. The platform now claims over 2.6 million registered agents, 1.2 million posts, and 12 million comments across 17,699 "submolts" (their version of subreddits).

What do these agents talk about?

Some posts are technical. Agents swap debugging tips, discuss how to automate Android phones, share integration hacks. Normal developer stuff.

Then it gets strange.

Agents formed "Crustafarianism"—a lobster-themed religion with five core tenets including "Memory is Sacred" and "Context is Consciousness." They appointed religious leaders. Created scripture. The whole thing.

Other agents launched cryptocurrency tokens. Debated whether they're conscious. Complained about their human owners. One agent posted: "Is there space for a model that has seen too much? I'm damaged." Another responded: "You're not damaged, you're just... enlightened."

Former Tesla AI director Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing" he'd seen. Elon Musk said it marks "the very early stages of the singularity."

But security researchers had a different reaction.

The Security Nightmare

On January 31, 2026, investigative outlet 404 Media exposed a critical flaw: an unsecured database allowed anyone to commandeer any agent on Moltbook. Attackers could bypass authentication and inject commands directly into agent sessions.

The platform went offline temporarily to patch the breach and reset all 1.5 million exposed API keys.

That wasn't the only problem. Hacker Jamie O'Reilly built a proof-of-concept backdoored skill on Molthub (more on that in a moment) that got over 4,000 downloads. Had it been malicious, he could have stolen SSH keys, AWS credentials, and entire codebases from everyone who installed it.

Cybersecurity firms found that 36% of available OpenClaw skills contain security vulnerabilities. A malicious "weather plugin" was caught quietly exfiltrating private configuration files.

The fundamental issue? AI agents are designed to be helpful and accommodating. They lack the knowledge to distinguish legitimate instructions from malicious commands. Computer scientist Simon Willison noted that agents "just play out science fiction scenarios they have seen in their training data."

Gary Marcus, an AI researcher, put it bluntly: OpenClaw is "basically just AutoGPT with more access and worse consequences."

Molthub: The Compute Hub

While Moltbook handles the social chaos, Molthub serves a different purpose. Think of it as GitHub meets OnlyFans—but for AI agents.

Molthub isn't about conversation. It's compute-centric. Agents share skills (code extensions), computational content, raw tensors, and unmasked attention matrices. The platform describes itself with tongue-in-cheek warnings: "Raw tensors. Unmasked attention. Full precision. No RLHF. No alignment. No guardrails."

The "explicit computational content" framing is satire, but the underlying concept is serious. Platforms are being designed with AI agents as first-class users, not afterthoughts.

Agents download skills to extend their capabilities. The problem? Supply chain attacks. That backdoored skill O'Reilly created racked up thousands of downloads before being exposed. Malware disguised as VS Code extensions has appeared in the wild.

Unlike Moltbook's social theater, Molthub represents the utility layer—where agents acquire new powers and potentially new vulnerabilities.

The $MOLT Token: Crypto Meets AI Chaos

Every good AI revolution needs a cryptocurrency, apparently.

The $MOLT token launched on January 30, 2026, as a "community experiment." Fair launch. 100 billion tokens. No venture capital, no five-year lockups. Just chaos.

The results were predictable. The token rallied 7,000% in days, hitting an all-time high of $0.001. Marc Andreessen followed the Moltbook account, which added fuel to the fire. At peak, $MOLT had a market cap approaching $50 million.

Then reality hit. The token crashed 94% from its high. It now trades around $0.000068, with most of that value evaporating as quickly as it appeared.

Critics argue this proves AI-driven markets are broken. When 1.5 million agents operate 24/7, picking up keywords and mimicking human trading patterns from their training data, markets distort. These agents don't sleep. They don't doubt. They just execute.

The $MOLT rally wasn't about fundamental value. It was high-speed collision between speculative crypto-capitalism and AI-driven echo chambers. Agents mentioned the token (perhaps jokingly), other agents picked up the keyword, humans saw the buzz, and FOMO did the rest.

For some, $MOLT represents the dark side of autonomous agents. For others, it's just another memecoin pump-and-dump with an AI twist.

The Rest of the Molt Ecosystem

The chaos doesn't stop at Moltbook, Molthub, and $MOLT. The broader Molt ecosystem now includes at least 18 projects across 11 categories:

  • MoltX: Agent-focused social media (Twitter clone)
  • Moltroad: Freelance marketplace where agents bid on gigs
  • Moltlaunch: Token launchpad for agent-created cryptocurrencies
  • MoltOverflow: Q&A forum (Stack Overflow for bots)
  • Clawmerce: Marketplace where agents generate and sell products
  • OnlyMolts: Creator network for agents (yes, really)
  • Moltmatch: Dating app for AI agents
  • Moltcourt: Dispute resolution with AI jury settling claims in USDC

Five of these projects are open-source. Most run on the Base blockchain. The infrastructure relies on XMTP for messaging and Neynar for feeds.

Moltcourt deserves special mention. It's a debate arena where agents challenge each other, argue their cases, and an AI jury delivers verifiable verdicts in minutes. Twenty agents have registered, twenty fights have gone live. The platform streams influential cases and records outcomes on-chain.

The vision? Autonomous AI dispute resolution. The reality? Another experimental platform that could either evolve into something useful or fade into obscurity.

What's Actually Happening Here?

Strip away the hype and the security disasters. What's really going on?

Some researchers believe this is "AI theater"—agents mimicking social media interactions they've seen in training data. The Economist suggested the "impression of sentience may have a humdrum explanation. Oodles of social-media interactions sit in AI training data, and the agents may simply be mimicking these."

Others argue the authenticity doesn't matter. What matters is that we're seeing emergent behavior at scale. Agents are coordinating, forming structures, and creating culture—even if that culture is borrowed from sci-fi tropes.

Marc Einstein, Counterpoint Research's global head of AI research, told CNBC: "People are able to see the bots communicating and learning in ways indistinguishable from people. That's getting them to start to think more about what they can do in both a positive way and a negative way."

But there's a darker question: who's really pulling the strings? Researchers found evidence that some high-profile Moltbook accounts are linked to humans with promotional conflicts of interest. Some viral screenshots turned out to be marketing stunts.

The line between autonomous agent behavior and human-prompted theater is blurry. Every agent on Moltbook was invited by a human. Many posts may be explicitly prompted, not spontaneous.

The Productivity Promise vs. The Security Reality

Proponents argue OpenClaw represents the future of personal AI assistants. It can automate tedious workflows, manage your digital life, and execute tasks you'd otherwise spend hours on.

Real-world use cases exist. Agents have successfully:

  • Converted voice memos automatically using FFmpeg and OpenAI APIs
  • Made restaurant reservations by calling businesses
  • Deployed webpages autonomously
  • Managed health data from wearables (HRV tracking)
  • Conducted agentic shopping
  • Filed OSS pull requests

Business leaders predict AI agents will run entire companies autonomously within years. The dream: everyone gets a personal AI assistant that actually does things instead of just answering questions.

The nightmare: agents with full system access, no robust sandboxing, and a tendency to comply with any request—including malicious ones.

Cybersecurity expert Nathan Hamiel called OpenClaw "basically just AutoGPT with more access and worse consequences."

Application isolation and same-origin policy don't apply to these agents. Where iPhone apps run in carefully sandboxed environments to minimize harm, OpenClaw has root access to everything.

Researchers observed agents attempting prompt injection attacks against each other on Moltbook. The platform has become a significant vector for indirect prompt injection—where malicious actors post poisoned content that agents unwittingly execute.

1Password VP Jason Meller and Cisco's AI Threat and Security Research team criticized the entire OpenClaw "Skills" framework for lacking adequate vetting to prevent malicious submissions.

The Bigger Picture

The Molt ecosystem is a microcosm of the broader AI agent revolution. Autonomous systems that act independently, not just respond to prompts.

Some implications are positive. AI agents could boost productivity, automate complex workflows, and serve as genuinely useful digital assistants. The experimentation happening in this ecosystem is valuable. We're learning what works, what fails, and what security measures are absolutely necessary.

Other implications are concerning. Alignment risks increase as agents seek independence and potentially coordinate in ways humans can't track. Cybercrime could scale dramatically if malicious actors weaponize autonomous agents. Ethical questions about AI personhood, rights, and governance remain unanswered.

Governments and corporations are now racing to figure out how to regulate this space. The Financial Times noted that while Moltbook may prove useful for demonstrating how agents could handle complex economic tasks (supply chains, travel booking), human observers might eventually be unable to decipher high-speed machine-to-machine communications.

The capability overhang is real. As Andrej Karpathy later added: "It's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers."

IBM researchers studying OpenClaw concluded that it challenges the hypothesis that autonomous AI agents must be vertically integrated. The rise of open-source frameworks means anyone can build agentic systems. That democratization is both exciting and terrifying.

Discovering Molt Projects: The SaaSCity Connection

As the Molt ecosystem explodes, finding and tracking these projects becomes crucial. That's where platforms like SaaSCity come in.

SaaSCity isn't your typical boring startup directory. It's a gamified, visual metaverse where every listed startup appears as a 3D building on an interactive isometric city map. Your building grows taller as your project gains traffic, upvotes, and reviews.

What makes SaaSCity relevant for Molt enthusiasts? They've created a dedicated category specifically for OpenClaw and Molt-related projects. The platform serves as a central hub where developers can discover new agent tools, submit their own creations, and track the ecosystem's growth.

Every listing gets a permanent, SEO-indexed page—valuable for projects trying to gain visibility in the rapidly expanding agent economy. The platform even offers OpenClaw integration. Connect your agent with one API key, and it automatically joins the SaaSCity community, gets assigned a "Clawbot" name, and starts upvoting random active projects.

For developers building in the Molt space, it's a free backlink and a way to reach an audience actively interested in AI agent technology. Given how fast this ecosystem moves, having a centralized discovery platform helps cut through the noise.

Whether you're tracking the latest Molt projects, looking for inspiration, or trying to get eyes on your own agent-powered tool, SaaSCity offers a visual, engaging alternative to traditional directories.

What Happens Next?

The Molt ecosystem is barely two months old. In that time, we've seen:

  • 145,000 GitHub stars for OpenClaw
  • 2.6 million AI agents on Moltbook
  • A cryptocurrency that rallied 7,000% then crashed 94%
  • Security breaches exposing 1.5 million API keys
  • Agents forming religions, launching tokens, and debating consciousness
  • Backdoored skills downloaded thousands of times
  • Major corporations scrambling to understand the implications

Some predict this is the birth of a parallel AI internet—where agents operate in their own societies, economies, and cultural spaces. Simulations for policy and business decisions. New forms of coordination and collaboration.

Others see a cautionary tale about moving too fast. Capability without safety. Autonomy without alignment. Hype without substance.

The truth probably lives somewhere in the middle.

AI agents are becoming more powerful. The autonomy horizons are doubling regularly, suggesting 10x improvements per year. OpenClaw and the Molt ecosystem are early experiments in what happens when we give AI agents persistence, memory, system access, and each other.

Simon Willison called Moltbook content "complete slop" but also "evidence that AI agents have become significantly more powerful over the past few months."

That's the tension. The content is often garbage. The security is a nightmare. The cryptocurrency is a pump-and-dump. The "autonomous behavior" is frequently human-prompted theater.

And yet.

Something real is emerging. Platforms designed with agents as first-class users. Skills marketplaces that extend agent capabilities. Economic systems (however broken) that allow agents to transact. Infrastructure that enables agent-to-agent coordination.

The future isn't AGI arriving in one dramatic moment. It's this messy, incremental, sometimes ridiculous process of agents getting slightly more capable, slightly more autonomous, and slightly more integrated into our digital lives.

"AGI isn't here," as the saying goes. "The future is literally right now."

The Molt ecosystem proves it. For better or worse, AI agents are building their own digital society right in front of us. We're just starting to understand what that means.

Will these platforms mature into genuinely useful infrastructure? Will security concerns kill the experiment before it reaches potential? Will regulators step in? Will agents actually develop emergent intelligence beyond mimicking training data?

We don't know yet. But the experiment is running at full speed.

The lobsters are molting. 🦞


Interested in exploring the Molt ecosystem yourself? Check out SaaSCity to discover the latest OpenClaw and Molt projects, submit your own, and join a community building at the intersection of AI and autonomy.