The Lobster That Broke the Internet (And What It Tells Us About the Future of Computing) + my harm reduction guide if you're planning to run it

Nate's Notebook • February 02, 2026 • Solo Episode



Description

This is a free preview of a paid episode. To hear more, visit natesnewsletter.substack.com.

Somewhere in the Valley right now, a developer is buying a Mac Mini specifically to give an AI agent root access to their digital life. They're not alone. Developers are snapping up Mac Minis for dedicated always-on hardware. Cloudflare's stock jumped over 20% in two days last week. And Peter Steinberger, an Austrian developer who built a personal AI assistant as a hobby project three months ago, is now fielding harassment from crypto scammers, fixing critical security vulnerabilities, and watching his creation get called "infostealer malware in disguise" by Google's VP of Security Engineering.

The project is called Moltbot. Until last Monday, it was called Clawdbot. The name change wasn't voluntary; Anthropic's legal team saw to that. What followed was 72 hours of chaos: a ten-second window that let crypto scammers hijack the project's identity, a $16 million rugpull token, security researchers finding over a thousand exposed instances with plaintext credentials, and over 100,000 stars that made it one of the fastest-growing open-source projects on GitHub.

This is either the future of personal computing or a collective hallucination. Possibly both.

Here's what's inside:

* **What Moltbot actually is.** The architecture, the capabilities, and why "AI that actually does things" is both the value proposition and the risk.
* **Why Wall Street noticed.** How a GitHub repo moved Cloudflare's stock 20% and what that signals about the next phase of the AI trade.
* **The 72 hours that changed everything.** Trademark disputes, account hijacking, security disclosures, and lessons in operational security that will be studied for years.
* **The security problem that has no solution.** Why the vulnerabilities aren't bugs; they're intrinsic to what agentic AI requires.
* **Should you run it?** An honest assessment based on who you are and what you're willing to risk.
Let me show you how a lobster broke the internet, and why the most interesting question isn't whether you should run Moltbot, but whether agentic AI can ever be made safe at all.

*A note: I recorded the video for this piece late last week, when the project was still called Moltbot. In the days since, it rebranded again, to OpenClaw. I have a Part 2 coming that will cover what's emerged. Consider this the origin story. Subscribers get all posts like these!*
