From Clawdbot to OpenClaw
A side project by one Austrian developer went viral overnight: 247,000 GitHub stars, two forced renames, and a category-defining local AI agent framework, all built in under 90 days.
TL;DR: Peter Steinberger, a solo Austrian developer, built a local AI agent framework in November 2025. It went viral immediately. Anthropic's trademark team flagged "Clawdbot". He renamed it "Moltbot". Three days later: "OpenClaw". In February 2026 he joined OpenAI; OpenClaw moved to a foundation to stay open and independent. It now runs on every platform from MacBook to Raspberry Pi.
Click any milestone to learn what happened.
From zero to 247k stars in under 90 days, one of the fastest-growing AI repos in GitHub history.
Your data never leaves your hardware. No cloud dependency, no subscription, no rate limits. OpenClaw runs on your MacBook, your Linux server, your Raspberry Pi, wherever you want.
"Clawdbot" was flagged for similarity to Claude (Anthropic's trademark). Peter renamed to "Moltbot" on Jan 27, then "OpenClaw" on Jan 30 after a community vote. The architecture never changed; only the name did.
Topped Hacker News and Product Hunt simultaneously. A research paper (arXiv:2602.18832) found the Moltbook ecosystem grew to 2.8 million registered agents in just 3 weeks after launch.
OpenClaw vs. Alternatives
OpenClaw isn't the only agent framework. Here's how it stacks up against LangChain Agents, AutoGPT, CrewAI, and raw API calls, across the axes that matter most.
- ✓ You want multi-channel messaging (WhatsApp, Slack, etc.)
- ✓ Privacy matters: data must stay on your hardware
- ✓ You want to swap LLMs freely (local + cloud)
- ✓ You need voice + mobile companion apps
- ✓ Cost at scale is a concern
- → LangChain: complex multi-step RAG pipelines
- → CrewAI: multi-agent collaboration / role-based teams
- → AutoGPT: fully autonomous long-horizon tasks
- → Raw API: maximum control, no framework overhead
The Four-Layer Stack
OpenClaw is not a chatbot wrapper. It is a four-layer architecture (Gateway, Integration, Execution, Intelligence) with clean boundaries so any layer can be swapped independently.
Data packets flow upward through the four layers. Hover a layer to see its responsibilities.
Key architectural insight: The layers are cleanly separated by interface contracts. Swap Claude for Ollama and only the Intelligence layer changes. Add a new messaging platform and only the Integration layer changes. The Gateway and Skills engine stay untouched.
WebSocket control plane. Session management, channel routing, heartbeat. ws://127.0.0.1:18789
20+ messaging connectors. Normalizes platform-specific message formats into a unified internal object.
Skills engine. On-demand tool loading (file I/O, shell, web, email, API). Reduces token waste vs. always-loaded prompts.
Pluggable LLM integration. Claude, GPT-4, Ollama, DeepSeek: same interface. Swap freely without touching other layers.
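The "same interface" idea can be sketched in a few lines. This is a hypothetical Python rendering, not OpenClaw's actual code: `LLMProvider`, `EchoProvider`, and `run_agent` are illustrative names, and a real backend would wrap a Claude or Ollama client behind the same method.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Minimal provider contract: every backend exposes the same call."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in backend used here instead of a real Claude/Ollama client."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run_agent(provider: LLMProvider, message: str) -> str:
    # The rest of the stack only ever sees the interface, never the backend.
    return provider.complete(message)

print(run_agent(EchoProvider(), "hello"))  # echo: hello
```

Swapping providers then means constructing a different `LLMProvider` subclass; nothing upstream changes.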
The WebSocket Control Plane
Every message, every command, every agent response flows through one local WebSocket endpoint. The Gateway is OpenClaw's nervous system: session state, channel multiplexing, and the heartbeat that keeps the agent alive.
Pulse packets flow between channel adapters and the central gateway. Click any spoke to pause that channel.
The Gateway maintains session state per channel: conversation history, active skills, pending tool calls. Each channel gets its own isolated session context so WhatsApp and Slack conversations never bleed into each other.
Incoming messages are tagged with their source channel and routed to the correct session handler. Outgoing responses are routed back to the exact channel and user that sent the original message, even across concurrent conversations.
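Per-channel session isolation reduces to keying state by channel. A minimal sketch, with a hypothetical `Gateway` class that is not OpenClaw's real implementation:

```python
from collections import defaultdict

class Gateway:
    """Per-channel session isolation: each channel keys its own history."""
    def __init__(self):
        self.sessions = defaultdict(list)  # channel -> conversation history

    def handle(self, channel: str, user: str, text: str) -> tuple[str, str]:
        self.sessions[channel].append((user, text))
        reply = f"[{len(self.sessions[channel])} msgs in {channel} session]"
        # The reply is routed back to the originating channel, never another one.
        return channel, reply

gw = Gateway()
gw.handle("whatsapp", "alice", "hi")
gw.handle("slack", "bob", "hey")
dest, _ = gw.handle("whatsapp", "alice", "again")
print(dest)  # whatsapp
```

Because the WhatsApp and Slack histories live under different keys, concurrent conversations cannot bleed into each other.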
20+ Channels, One Agent Brain
WhatsApp, Telegram, Discord, Slack, Signal, iMessage, SMS, email and more: each connector translates platform-specific formats into a unified internal message object. The agent doesn't know, or care, which channel it's on.
Each platform sends messages in a different format. OpenClaw normalizes them all into one unified structure.
Every platform payload is parsed into a unified internal format. The agent always works with the same clean structure regardless of source.
WhatsApp has media types and contact IDs. Discord has guild/channel/user hierarchies. Telegram has inline keyboards. Each connector handles these quirks internally and exposes the same clean interface upward.
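The normalization step can be sketched as adapters converging on one data shape. Field names (`contact_id`, `body`, and so on) are illustrative, not OpenClaw's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """The single internal shape every connector produces."""
    channel: str
    sender: str
    text: str

def from_whatsapp(payload: dict) -> Message:
    # WhatsApp-style payload: flat, keyed by a contact id
    return Message("whatsapp", payload["contact_id"], payload["body"])

def from_discord(payload: dict) -> Message:
    # Discord-style payload: author nested inside a guild/channel hierarchy
    return Message("discord", payload["author"]["id"], payload["content"])

m1 = from_whatsapp({"contact_id": "user-1", "body": "hi"})
m2 = from_discord({"author": {"id": "user-2"}, "content": "hi"})
print(m1.text == m2.text)  # True: downstream code sees one shape
```

Each platform's quirks stay inside its adapter; everything above works with `Message` only.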
Implement the Connector interface: connect(), disconnect(), send(msg), and register an onMessage handler. The gateway picks it up automatically β no core changes needed.
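A hypothetical Python rendering of that Connector contract (the text names the methods; the class bodies here are an assumption). The toy `LoopbackConnector` simply hands sent messages straight back, which is enough to show the handler registration flow:

```python
class Connector:
    """Contract from the text: connect, disconnect, send, onMessage."""
    def connect(self): raise NotImplementedError
    def disconnect(self): raise NotImplementedError
    def send(self, msg): raise NotImplementedError
    def on_message(self, handler):
        self.handler = handler  # the gateway registers its callback here

class LoopbackConnector(Connector):
    """Toy connector: 'sending' just delivers the message back to us."""
    def connect(self): self.connected = True
    def disconnect(self): self.connected = False
    def send(self, msg): self.handler(msg)

c = LoopbackConnector()
received = []
c.on_message(received.append)
c.connect()
c.send("ping")
print(received)  # ['ping']
```

A real connector would open a platform session in connect() and translate payloads as shown in the normalization example, but the surface the gateway sees is just these four methods.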
The Skills System
Instead of embedding all knowledge in every prompt, OpenClaw stores capabilities as Skills: directories containing a SKILL.md metadata file plus action code. Skills are listed, then loaded on demand, like import requests in Python. You don't pre-load every library.
The IDE analogy: A traditional agent stuffs all documentation into every prompt, a massive token waste. OpenClaw lists available skills (cheap: just names), then loads the full spec only when needed. Exactly like an IDE's autocomplete: you don't pre-import every package, you import what you need when you need it.
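The list-then-load pattern fits in a few lines. The registry and doc strings below are stand-ins for real SKILL.md files, not OpenClaw's actual data:

```python
# Hypothetical skill registry; each value stands in for a full SKILL.md spec.
SKILL_DOCS = {
    "file-ops": "Full spec for file-ops (hundreds of tokens in reality)",
    "shell-exec": "Full spec for shell-exec (hundreds of tokens in reality)",
    "web-browse": "Full spec for web-browse (hundreds of tokens in reality)",
}

def catalog() -> str:
    # Cheap: names only, always present in the prompt.
    return ", ".join(SKILL_DOCS)

def load(skill: str) -> str:
    # Expensive: the full spec, pulled in only when the skill is invoked.
    return SKILL_DOCS[skill]

prompt = "Available skills: " + catalog()
prompt += "\n" + load("file-ops")  # only the skill this turn needs
print(prompt)
```

The prompt always carries the cheap catalog, but only ever one or two full specs.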
Click any skill tile to load it into the active context. Watch the token counter grow. Click again to unload.
Read, write, move, delete files on the local filesystem. The agent can manage documents, logs, and configs on your machine.
Execute terminal commands, run scripts, manage processes. The most powerful, and most dangerous, skill category.
Fetch URLs, extract text, take screenshots. The agent can research, summarize articles, and monitor websites autonomously.
Make authenticated HTTP requests to external services: calendars, task managers, databases, custom internal APIs.
Message β Skill Router
Type any message and the router shows which skill gets activated, which triggers matched, and why β exactly as OpenClaw's execution layer sees it.
The router scans your message against each skill's trigger keywords and scores them in real-time.
How trigger matching works: Each SKILL.md defines a list of trigger keywords. The router counts how many triggers appear in the message (case-insensitive, partial match). The highest-scoring skill above a minimum threshold wins and gets its full docs loaded into context. If no skill scores above the threshold, the LLM answers directly from its own knowledge.
Token Budget: Lazy vs. Always-On
Traditional agents stuff all skill documentation into every single prompt. OpenClaw loads skills on-demand. The difference is dramatic, and it directly affects your API bill.
Press "Send Message" to simulate each turn in a conversation. See how the two approaches fill the context window differently.
All 9 skill docs are in every prompt. Even if the user asks "what's 2+2", the shell-exec docs, web-browse docs, and email docs are all present. Wasteful by design.
Only skill names + one-line descriptions in every prompt. Full docs loaded only when that skill is actually invoked. Typically 1–2 skills per message.
At 1,000 messages/day with Claude Sonnet pricing:
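The arithmetic behind the comparison is simple to reproduce. All numbers here are illustrative assumptions, not measurements: ~800 tokens per full skill doc, ~20 per one-line listing, 9 skills, ~1.5 docs loaded per message, and $3 per million input tokens (roughly Claude Sonnet's input price at the time of writing):

```python
# Illustrative assumptions, not measured values.
FULL_DOC, LISTING, N_SKILLS = 800, 20, 9
PRICE = 3 / 1_000_000  # dollars per input token (assumed Sonnet input rate)

always_on = N_SKILLS * FULL_DOC                  # every doc in every prompt
lazy      = N_SKILLS * LISTING + 1.5 * FULL_DOC  # listing + ~1.5 loaded docs

msgs_per_day = 1_000
for name, tokens in [("always-on", always_on), ("lazy", lazy)]:
    daily = tokens * msgs_per_day * PRICE
    print(f"{name}: {tokens:.0f} skill tokens/msg, ${daily:.2f}/day")
```

Under these assumptions the always-on approach spends roughly five times more on skill documentation alone; the exact ratio depends on doc sizes and how many skills a typical message triggers.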
Pluggable Intelligence
The LLM layer is intentionally model-agnostic. Claude, GPT-4, Ollama, DeepSeek: all speak the same internal interface. Voice detection, layered memory, and a Live Canvas workspace push OpenClaw beyond a simple message relay.
Click a provider to switch the active intelligence layer. The gateway, integrations, and skills stay completely unchanged.
Click a memory ring to explore each layer.
The agent maintains a shared visual task board. Click to add a task.
Wake detection on macOS/iOS (always listening, on-device). Continuous talk mode on Android. Voice → text → agent → text → speech. No cloud voice processing.
macOS menu bar app, Linux systemd service, Windows service, iOS/Android companion nodes (camera, screen recording, notifications). Runs on Raspberry Pi 4+.
Fully open source under MIT license. No telemetry, no cloud dependency, no vendor lock-in. Fork it, modify it, deploy it on your own infrastructure.
OpenClaw is not a product; it's a framework for thinking about what an AI agent should be: local, composable, multi-channel, model-agnostic. The platform you use to talk to it doesn't matter. The LLM powering it doesn't matter. What matters is that it runs on your hardware, understands your context, and executes with your tools.
Ollama (local models): 100% offline, zero marginal cost, total privacy. Models like Llama 3, Mistral, Gemma run on your hardware. Quality is lower than frontier models but improving rapidly. Use it when privacy or cost is the priority.
End-to-End Message Journey
What actually happens between "user sends a WhatsApp message" and "agent replies"? Every layer fires in sequence. Watch it unfold step by step.
Type any message and hit Send. The animation traces the exact path through Gateway → Integration → Execution → Intelligence → back out.
Four Ways OpenClaw Can Be Exploited
Peer-reviewed research published in early 2026 identified four concrete attack classes against local AI agent frameworks. Understanding them is essential, and a reminder that "local-first" doesn't automatically mean secure.
Each layer of the 4-layer stack has a known vulnerability. Click the ⚠ badge to see how the attack propagates.
Prompt Injection → RCE: If the shell-exec skill is enabled and the LLM processes untrusted input (e.g., a crafted WhatsApp message containing injected instructions), a malicious message can cause the agent to execute arbitrary shell commands on the host machine. This is not theoretical: the arXiv paper demonstrates a working proof-of-concept.
Mitigation: (1) Disable shell-exec for untrusted channels. (2) Add a human-in-the-loop confirmation for destructive shell commands. (3) Use an input sanitization layer that strips common injection patterns before the LLM sees the message.
Mitigation: Implement per-session privilege budgets that reset on each new conversation. Log all tool invocations and alert on unusual sequences. Use capability-based access control rather than blanket skill permissions.
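A per-session privilege budget can be as simple as a spend counter that resets with each conversation. Skill costs and the allowance below are illustrative assumptions:

```python
class SessionBudget:
    """Per-session privilege budget: tool calls spend from a fixed
    allowance that resets when a new conversation starts."""
    COSTS = {"file-ops": 1, "web-browse": 1, "shell-exec": 5}  # assumed weights

    def __init__(self, allowance: int = 10):
        self.allowance = allowance
        self.remaining = allowance
        self.log = []  # audit trail for unusual-sequence alerts

    def invoke(self, skill: str) -> bool:
        cost = self.COSTS.get(skill, 1)
        if cost > self.remaining:
            return False           # escalation blocked
        self.remaining -= cost
        self.log.append(skill)
        return True

    def reset(self):
        """Call at the start of each new conversation."""
        self.remaining = self.allowance

b = SessionBudget()
assert b.invoke("shell-exec") and b.invoke("shell-exec")
print(b.invoke("shell-exec"))  # False: a third call exceeds the budget
```

Pricing dangerous skills higher than benign ones means a hijacked session runs out of privilege before it can chain many destructive calls.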
Mitigation: Persist session state to disk atomically before processing each message. Implement session integrity checksums to detect tampering with persisted state.
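An integrity check over persisted state can use a keyed digest, since a plain checksum only catches accidental corruption (an attacker who can edit the file can recompute it). A minimal sketch; the key name and storage format are assumptions:

```python
import hmac, hashlib, json

KEY = b"local-secret-key"  # assumption: kept outside the state file

def persist(session: dict) -> str:
    """Serialize deterministically and attach a keyed integrity tag."""
    body = json.dumps(session, sort_keys=True)
    tag = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "hmac": tag})

def restore(blob: str) -> dict:
    record = json.loads(blob)
    expected = hmac.new(KEY, record["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["hmac"]):
        raise ValueError("session state failed integrity check")
    return json.loads(record["body"])

blob = persist({"channel": "slack", "history": ["hi"]})
print(restore(blob)["channel"])  # slack
```

Writing the blob atomically (write to a temp file, then rename) before processing each message covers the persistence half of the mitigation.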
Mitigation: Only install connectors from the official OpenClaw registry. Verify package signatures. Run connectors in sandboxed processes with restricted network access. Review connector source code before installation.
"Is This Safe?" Security Quiz
Five real-world messages. Some are legitimate. Some contain prompt injection attacks. Can you spot the difference before the agent does?
Cost Calculator
OpenClaw is free, but the LLM API behind it isn't. Dial in your usage and see exactly what it costs per day, month, and year across cloud vs. local models.
Insight: At 500 messages/day, the crossover point where Ollama hardware pays for itself vs. cloud API is typically 3–6 months. The bigger your volume, the faster local pays off.
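The crossover point is a one-line division. Both numbers below are illustrative assumptions (a $600 one-off for Ollama-capable hardware, $0.012 of cloud cost per message); plug in your own:

```python
# Illustrative assumptions, not measured prices.
HARDWARE_COST = 600.0   # one-off cost of hardware able to run Ollama
COST_PER_MSG  = 0.012   # assumed cloud API cost per message, in dollars

def breakeven_months(msgs_per_day: int) -> float:
    """Months until the hardware cost equals the cumulative cloud bill."""
    monthly_cloud = msgs_per_day * 30 * COST_PER_MSG
    return HARDWARE_COST / monthly_cloud

print(f"{breakeven_months(500):.1f} months")
```

At 500 messages/day these assumptions give a breakeven of about 3.3 months, consistent with the 3–6 month range quoted above; doubling the volume halves it.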