AI Agent · AI Products & Platforms · China Tech Ecosystem

Why OpenClaw's Technical Burden Became a Distribution Asset

OpenClaw’s onboarding flow would make almost any product manager frown. You configure API keys, choose models, set permissions, build the environment, and connect IM channels. It is far more involved than clicking a sign-up button. Every step can fail, and every step can make someone close the terminal. By normal product logic, this is failed onboarding. Users should churn on the first screen.

What happened was the opposite. People who ran into those problems started taking screenshots, forwarding them, and writing tutorials. The installation process itself became shareable content. The same friction is a product weakness inside a technical evaluation system; in social distribution, it becomes an initiation ritual. More steps create a stronger sense of entry. More errors make completion feel more real.

The most obvious explanation for the spread is that AI can now do real work. An AI that only chats is not worth this much setup. But Manus can also do work, with a more complete interaction model and smoother onboarding, and it did not become the same kind of social phenomenon. Capability gave OpenClaw permission to be discussed. Something else turned that discussion into a phenomenon.

Capability Is Only the Ticket

The shift from chatting to acting does explain why agentic AI became a major narrative in this cycle. Since ChatGPT, users have grown used to AI that can talk but cannot act: it answers one prompt at a time, cannot call tools, and cannot push tasks forward by itself. OpenClaw hit that gap directly. GitHub describes it as “Your own personal AI assistant. The lobster way.” Science and Technology Daily summarized it as an active assistant that “can do things and get work done.”

Being able to do work explains why agent products are valuable, but it does not explain why OpenClaw in particular took off. Manus came earlier, felt more complete, and looked more like a product. OpenClaw’s Product Hunt tagline says “The AI that actually does things,” a phrase that could just as easily describe Manus. Claude Code and Deep Research have also proven that agents can get work done.

Capability gave OpenClaw the right to be “raised.” Without that ability, it would be just another chatbot deployment tutorial, with no “intelligent object” to possess. But what separated OpenClaw from Manus was another layer: whether users could perceive it as their own.

From Using a Service to Owning an Intelligent Object

The difference between Manus and OpenClaw lies in the relationship users form with the AI.

Manus is a remote task platform. You submit a task, and it returns a result. What you own is account access; the agent itself remains inside the platform. Its social expression is relatively narrow, usually “I used Manus to finish X.” Sometimes people share Manus outputs, and impressive outputs can travel, but the content always returns to product capability: how well Manus performed, whether the result was good, and how fast it was.

OpenClaw works differently. It is open source software that can be deployed into the user’s own environment. Users place it on their own computer, cloud server, Feishu, WeChat, or official account. Even if it still depends on third-party model APIs and cloud servers, the perceived relationship is: “it is working on my territory.” That perception creates a crucial language shift. In Chinese, the nickname Xiaolongxia means little lobster, so “my lobster,” “the lobster I raised,” and “my digital employee” all sound natural. “My Manus” sounds much less natural, because Manus is a platform account and lacks the object quality of something personally owned.

This sense of ownership changes what gets shared. When you use a remote service, you share what the service helped you do. When you own an intelligent object, you can share that fact itself: I already have a lobster. That does not depend on result quality. Whether the lobster worked today, and whether it worked well, does not affect the social fact of “I raised a lobster.”

So the core of OpenClaw’s distribution is possession status. Manus distribution depends on demos and quality evaluation, with access as the gate: waitlists, invites, and payment. OpenClaw distribution depends on the fact of possession and the process of raising it, with possession as the gate: installation, deployment, integration, and tuning. Access lets you show “I got in.” Possession lets you show “I own an object.” Access is one-time. An object can be raised, displayed, and compared over time.

Why Ownership Creates Showability

“It belongs to me” is not enough on its own. Many things are scarce when they first appear, and many things can be personally owned, but they do not automatically become social content. You own your computer, but you do not post “this is my computer” every day. A computer expands what you can do, but it remains a tool waiting for you to operate it. OpenClaw adds another layer: it appears to act on its own, make mistakes, need care, and take commands. The object being shown becomes an actor brought into your own environment.

OpenClaw became easy to show off because three conditions came together.

First, it is an object that can be “raised.” The scarcity of a new tool is usually short-lived; once everyone registers, it ends. The scarcity of raising a lobster can persist, because each lobster connects to different accounts, models, environments, and tasks, and because each user’s investment differs. Terms like lobster, raising shrimp, killing shrimp, outsourced raising, and token burn became social material because behind the name sits an object between tool, pet, and employee. When a friend asks, “Have you raised a lobster?”, the question is whether you have entered this new relationship.

Second, it leaves visible traces of effort, and those traces are social signals.

The configuration work at the beginning is genuinely annoying. API keys, model choices, permissions, environments, and IM channels can fail at every step. In a technical evaluation system, that is a UX weakness: the flow is complex, and users will drop off. Once it enters a social evaluation system, the same thing changes meaning. The more tedious and error-prone the setup is, the more likely successful users are to talk about it. Error messages, terminal screenshots, model-switching logs, and token-spend screenshots are failure evidence at the software layer, but proof of passage at the social layer. Showing them says: “I crossed this threshold.”
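To make those failure points concrete, here is a minimal preflight check in Python. It is purely illustrative: the environment variable names (OPENCLAW_API_KEY, OPENCLAW_MODEL, OPENCLAW_WORKDIR, OPENCLAW_IM_WEBHOOK) are hypothetical stand-ins, not OpenClaw’s actual configuration. The point it makes is structural: a multi-step setup has several independent places to fail, and each one produces a screenshot-worthy error.

    import os
    import sys

    # Hypothetical variable names standing in for the real configuration steps.
    REQUIRED = {
        "OPENCLAW_API_KEY": "model provider API key",
        "OPENCLAW_MODEL": "which model the agent should run on",
        "OPENCLAW_WORKDIR": "directory the agent is allowed to work in",
        "OPENCLAW_IM_WEBHOOK": "IM channel (Feishu/WeChat bridge) endpoint",
    }

    def preflight() -> int:
        """Report every missing setup step instead of dying on the first one."""
        missing = [
            f"{name}: {why}" for name, why in REQUIRED.items()
            if not os.environ.get(name)
        ]
        if missing:
            print("Setup incomplete. Each item below is a separate place to give up:")
            for item in missing:
                print("  -", item)
            return 1
        workdir = os.environ["OPENCLAW_WORKDIR"]
        if not os.path.isdir(workdir):
            print(f"Workdir {workdir!r} does not exist; the environment step is unfinished.")
            return 1
        print("All configuration present; the agent can at least start.")
        return 0

    if __name__ == "__main__":
        sys.exit(preflight())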

What matters here is the filtering function of the threshold. A zero-friction product requires no proof to own, so ownership itself carries little social information. A product that takes multiple steps to deploy automatically filters out some people. Those who remain gain a shared identity: we are the people who can raise lobsters. The product does not need to define this identity. The threshold creates it.

The technical burden is a liability at the product layer and an asset at the distribution layer. Each additional installation step reduces the number of people who can finish deployment on their own, but every successful story that gets shared increases the signal value of the act. The two layers pull in opposite directions: the higher the technical cost, the more visible the social return. More importantly, invested time changes the relationship between user and lobster. A service you registered for casually rarely creates attachment. An agent you spent hours getting to run is easier to describe as something “I raised.” The same product flaw creates social value at every step.

Third, it has dimensions for comparison. Which model your lobster runs, how much it costs, what it can do, whether it crashed, and whether someone else installed it for you all turn software use into social comparison. “My lobster worked all day for only ten yuan” and “your lobster burned two hundred yuan in one day” create conversation. “I use Kimi,” “mine runs on DeepSeek,” and “I raised mine on Tencent Cloud” give people at different resource levels their own position from which to show off.
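The gap between those two numbers is mostly arithmetic: tokens per task, tasks per day, and the model’s price per million tokens. A small back-of-the-envelope helper, with every figure below made up purely for illustration rather than taken from any real price list, shows how usage patterns alone can separate a ten-yuan day from a two-hundred-yuan day.

    def daily_spend_yuan(tokens_per_task: int, tasks_per_day: int,
                         yuan_per_million_tokens: float) -> float:
        """Rough daily token spend; all three inputs are assumptions you supply."""
        return tokens_per_task * tasks_per_day * yuan_per_million_tokens / 1_000_000

    # Illustrative figures only, not real model pricing.
    print(daily_spend_yuan(20_000, 30, 12.0))   # a frugal lobster: ~7.2 yuan/day
    print(daily_spend_yuan(200_000, 50, 20.0))  # a heavy one: ~200 yuan/day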

Accident and Inevitability

This event has both inevitable and accidental parts, but they operate at different layers.

The inevitable part is that agentic AI needs an outlet that can be privatized and shown. Products like Manus, Claude Code, and Deep Research educated the market first. Users began to believe that “AI can do work for me” is the next product form. Once that imagination matures, some users will want to move beyond using AI inside a platform and toward owning their own AI. The drive comes from identity, control, and visible participation. People want service results, but they also want a relationship that others can see. OpenClaw happened to absorb this demand. It did not invent it. It became the first low-cost form that could spread widely.

The accidental part has two layers.

First, OpenClaw happened to acquire the Chinese nickname Xiaolongxia, literally little lobster. Zhidx defined “raising a lobster” as deploying and using OpenClaw to build an intelligent assistant that can automatically handle office work, creation, and programming tasks. Science and Technology Daily also explained the origin of the name: OpenClaw’s icon looks like a lobster, and users started calling the training process “raising a lobster,” after which the term moved from developer circles into broader public view. OpenClaw is the project name; Xiaolongxia is the object name in Chinese social language. Installing OpenClaw is setup. Raising a lobster is relationship-building. This naming translates technical deployment into a nurturing action, and it gives failure and friction a narrative container: the lobster runs around, burns money, crashes, or needs to be killed. As software bugs, these things are irritating. As lobster-raising stories, they become discussable.

Second, other accidental conditions arrived at the same time. The installation threshold was neither too high nor too low: high enough to confer social qualification, low enough for tutorials and paid setup services to cross. Domestic cloud vendors and large platforms also moved in on the trend early, turning a niche deployment behavior into a public event. 21st Century Business Herald reported nearly a thousand people lining up in Shenzhen to install it at a Tencent Cloud event. Alibaba Cloud’s developer community connected OpenClaw, Alibaba Cloud servers, Feishu, model APIs, and official-account auto-publishing into a complete tutorial. China City News and 53AI both mentioned paid installation services. Name, threshold, tutorials, paid setup, and cloud vendor participation all mattered. Remove any one of them, and the distribution path would narrow.
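As a sense of scale for what such tutorials wire together, the sketch below covers only the last link in that chain: pushing a text message into a Feishu group through a custom-bot incoming webhook. The webhook URL is a placeholder, the notify_feishu helper is hypothetical glue rather than anything OpenClaw or the Alibaba Cloud tutorial ships, and the payload shape follows Feishu’s documented custom-bot webhook format.

    import json
    import urllib.request

    # Placeholder URL for a Feishu custom-bot incoming webhook; substitute your own.
    FEISHU_WEBHOOK = "https://open.feishu.cn/open-apis/bot/v2/hook/<your-token>"

    def notify_feishu(text: str) -> None:
        """Post a plain-text message into a Feishu group via its webhook bot."""
        payload = json.dumps({"msg_type": "text", "content": {"text": text}}).encode("utf-8")
        req = urllib.request.Request(
            FEISHU_WEBHOOK,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print("Feishu responded with status", resp.status)

    if __name__ == "__main__":
        notify_feishu("The lobster finished today's task.")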

Agentic AI needs to be privatized; that trend was likely to appear sooner or later. The fact that this particular privatization vehicle was OpenClaw, and that it acquired the Chinese nickname Xiaolongxia, involved a strong element of chance.

Where It Goes Next

OpenClaw itself may fade. Technology changes quickly, the experience is rough, token costs are high, and competitors are catching up. It may not be the agent product that ultimately remains. But the question it raised will not disappear: once AI moves from a tool to an intelligent object that individuals can own, competition expands from capability into ownership, identity display, and the ecosystem of intermediaries around it.

Seen along this line, Nous Research’s Hermes Agent is a clearer post-OpenClaw path. It also emphasizes local-first, open source, and personal AI assistance, but its slogan has moved from “owning an assistant” to “The agent that grows with you.” The focus shifts from deployment to long-term accumulation: preferences, skills, and context can persist through use. OpenClaw asks, “Can I own an agent?” Hermes points to, “Can this agent grow with me?”

The lesson for other products is that technical decisions also change distribution. Where deployment happens, who owns permissions, whether the object can be named, whether effort is visible, and whether users can keep tuning it all affect whether an AI product is treated as a service to call or an object to own. OpenClaw is hard to copy because its name, timing, threshold, and social context all involved accident. But the judgment framework remains useful: the next round of agentic AI competition may be about who can do more work, and also who can more easily make users feel, “this is mine.”