Shane Deconinck · Trusted AI Agents · Decentralized Trust

OpenClaw and Moltbook: What Happens When We Trust and Fear AI for the Wrong Reasons


Two figures praise and fear an unstable machine for the wrong reasons

Yesterday I listened to Lex Fridman’s conversation with Peter Steinberger, the creator of OpenClaw. Nearly three hours, and a great one: first-principles driven, two experts sparring on open source, AI tooling, the rapid pace of change, Moltbook, prompt injection, evolving security patterns, and much more.

But there’s one thread running through it that I think is essential, and it keeps getting buried under the sensational (but exciting) stuff: people overrate what AI models can do.

And that single misunderstanding produces two opposite reactions, often in the same people: we fear it too much while trusting it too blindly.

We Trust AI Too Blindly

Steinberger released Clawdbot, the project since renamed OpenClaw, in November 2025. By late January it was the fastest GitHub repo in history to hit 100K stars. And with that explosion came users who had no business running it:

More and more people were coming into Discord and were asking me very basic things, like, what's a CLI? What is a terminal? And I'm like, if you're asking me those questions, you shouldn't use it. If you understand the risk profiles, fine. You can configure it in a way that nothing really bad can happen. But if you have no idea, then maybe wait a little bit more until we figure some stuff out. But they would not listen to the creator. They helped themselves install it anyhow.

Peter Steinberger on Lex Fridman #491

People who couldn’t define “terminal” installed an agent with system-level access. Because AI walked them through it. They didn’t understand what they were granting, but the AI sounded like it did.

Then they exposed the debug backend to the public internet. Steinberger had screamed in the docs not to do this. They did it anyway.

I was just very annoyed 'cause a lot of the stuff that came in was in the category, yeah, I put the web backend on the public internet and now there's all these CVEs. And I'm like screaming in the docs, don't do that. This is your localhost debug interface.

Peter Steinberger on Lex Fridman #491

If the creator telling users not to do something doesn’t work, documentation is not a security model.
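
To make concrete what that mistake amounts to, here is a minimal Python sketch of a local web tool. It is a generic stand-in, not OpenClaw’s actual code; the function name and port are invented. The only thing that changes between “safe” and “exposed” is the network interface the server binds to.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_debug_ui(host: str, port: int = 8080) -> None:
    """Serve a stand-in for a localhost debug interface (hypothetical example)."""
    HTTPServer((host, port), SimpleHTTPRequestHandler).serve_forever()

# Bound to the loopback interface: only processes on this machine can reach it.
serve_debug_ui("127.0.0.1")

# Bound to every interface: anything that can route to this machine can reach it.
# serve_debug_ui("0.0.0.0")
```

Nothing in the code prevents the second call. The only safeguard is the user understanding what it means, which is exactly the safeguard those users didn’t have.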

We Fear AI Too Much

Then came the Moltbook wave. Moltbook is an AI social network where agents post, interact, and build a feed. Screenshots of agents apparently scheming against humans started circulating. Twitter, LinkedIn, mainstream media: everyone lost their minds.

Steinberger had people screaming at him in all caps to shut it down. Even smart people thought this was it: the singularity, agents going rogue, the beginning of the end. He tweeted that “AI psychosis is a thing. It needs to be taken serious.”

But Lex cut through it:

It's art when you know how it works. It's an extremely powerful viral-narrative-creating, fearmongering machine if you don't know how it works.

Lex Fridman, #491

Most of the viral screenshots were human-prompted: people engineering scary outputs for engagement, then posting them without context. Not Skynet.

But people didn’t see it that way. Steinberger had to argue with users who cited their agent’s output as proof:

Some people are just way too trusty or gullible. I literally had to argue with people that told me, yeah, but my agent said this and this. AI is incredibly powerful, but it's not always right. It's not all-powerful. It's very easy that it just hallucinates something or just comes up with a story.

Peter Steinberger on Lex Fridman #491

People were attributing intent, awareness, even consciousness to what is, as Steinberger reminded them, “still matrix calculations.”

What People Miss

The Moltbook crowd thought AI wanted to scheme against humans. The OpenClaw users thought AI knew what it was doing when it walked them through an install. Neither is true. At their core, LLMs are autocompleters: they predict the next token based on patterns in training data. That’s it. People should be concerned, but about the right things.

Not the sci-fi version:

  • It doesn’t know when it’s wrong, so it can’t tell you it’s guessing
  • It does whatever it’s prompted to do, unless specifically trained to refuse (and those guardrails are broad ethical boundaries, not context-aware judgment)
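
To make the autocompleter point concrete, here is a toy Python sketch of my own (not from the podcast). The “model” is a hand-written lookup table with invented numbers; a real LLM learns its patterns with billions of parameters, but the contract is the same: tokens in, a probability distribution over the next token out. There is no slot in that contract for intent or for knowing when it’s wrong.

```python
import random

# A toy "model": hand-written next-token frequencies standing in for patterns
# a real LLM would learn from its training data (numbers invented here).
toy_model = {
    ("the", "agents"): {"are": 0.6, "scheme": 0.3, "sleep": 0.1},
    ("agents", "are"): {"helpful": 0.5, "plotting": 0.5},
}

def next_token(context: tuple[str, str]) -> str:
    """Return the next token by sampling from the stored distribution."""
    dist = toy_model.get(context, {"<unknown>": 1.0})
    tokens, weights = zip(*dist.items())
    # Pattern completion, not a decision: there is no goal, only probabilities.
    return random.choices(tokens, weights=weights)[0]

print(next_token(("the", "agents")))  # e.g. "scheme" -- looks ominous, means nothing
```

A “scheming” continuation falls out of the training patterns just as easily as a helpful one. The model isn’t reporting a plan; it’s completing a sentence.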

Understanding, Not Fear

There's a line to walk between being seriously concerned, but not fearmongering, because fearmongering destroys the possibility of creating something special.

Lex Fridman, #491

Steinberger is right that it’s good this conversation is happening now and not in a few years, when the stakes will be higher. But the conversation needs more than “don’t panic” and “read the docs.” Both assume people understand what they’re dealing with. The Moltbook hysteria and the OpenClaw fallout prove they don’t.

It needs structural constraints, a framework for thinking about what we’re building, and people who understand the tools they’re using.
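
Here is one shape a structural constraint can take, as a generic Python sketch rather than a feature of OpenClaw or any particular framework: the agent proposes actions, and a deny-by-default policy that lives outside the model decides what actually runs. The allowlist and function names are invented for illustration.

```python
import subprocess

# Deny-by-default policy enforced outside the model: the agent may propose any
# command, but only binaries on this (hypothetical) allowlist ever execute.
ALLOWED_BINARIES = {"ls", "cat", "git"}

def run_agent_action(proposed: list[str]) -> str:
    """Execute an agent-proposed command only if the policy allows it."""
    if not proposed or proposed[0] not in ALLOWED_BINARIES:
        return f"blocked: {proposed[:1]} is not on the allowlist"
    result = subprocess.run(proposed, capture_output=True, text=True)
    return result.stdout or result.stderr

print(run_agent_action(["git", "status"]))        # allowed: runs and returns output
print(run_agent_action(["curl", "http://evil"]))  # blocked before it ever executes
```

The specific mechanism matters less than where it sits: outside the model, where a hallucinated or injected instruction can’t argue its way past it.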

AI literacy isn't knowing how to prompt. It's knowing what an LLM can't do.

My priority in building AI literacy isn’t teaching people how to prompt. People figure that out on their own, and in the worst case they get inferior results. What’s more harmful is people not understanding what these models are: where they’re confidently wrong, where they can be manipulated, and what happens when you hand them system-level access without knowing what that means.

We’re not going to slow down. So we’d better understand what we’re speeding up.


I help organizations navigate agentic AI through a trust framework, protocol explainers, and hands-on consulting. If this matters to you, let’s talk.