
Skills Are Born With a Suicide Gene

Anthropic released the Skill product format in late 2025 as an open standard: anyone can write one, anyone can load one, anyone can distribute one. The community’s awesome-claude-skills repo collected tens of thousands of stars within months, and a third-party platform called Agent37 styled itself “Gumroad for Claude skills” and tried to make paid distribution work. But ask one honest question: if a company builds a genuinely powerful skill, how does it make money from it? You find there is no clean answer.

This is not because nobody tried. Several plausible-looking paths have already been walked, and each one turns out to be a dead end.

The most intuitive path is to put it on a marketplace. PromptBase, FlowGPT and similar prompt markets already ran this experiment for the whole industry. You build a skill, list it for $9.90, the first buyer pays, and then what? They can copy that plain-text Markdown file and post it to GitHub, drop it into a Discord, paste it into their own open-source repo. Your second buyer disappears. PromptBase has been running for three years and is estimated to have annual revenue around five million dollars, which is a lifestyle business, not a company. Once buyers can redistribute at zero cost, price gets pushed toward zero.

The second path is to host it as a service: users have to run it on your platform, and you charge per run. Agent37 follows this model. The path works technically, but look closely at what you are actually selling: servers, bandwidth, runtime—you are selling hosting, not the skill. The skill file itself can be downloaded and run on the user’s own machine for free, so your price ceiling is hosting cost plus reasonable margin. You think you are running a skill business; you are actually reselling AWS.

The third path is to use the skill as marketing for your own API. Stripe, Resend and Notion all do this, and all their official skills are free. A user loads the Stripe skill, the AI calls Stripe’s API more accurately, and the user is more likely to use Stripe than PayPal. The skill earns nothing; the money is all in that 2.9% plus 30 cents transaction fee. The path works, but it shows that the skill here is just a piece of marketing collateral, a free SDK for the actual product, which is the API. The precondition is that you need to already have a profitable API.

After walking the three paths, you discover that the skill format cannot, by its own nature, support a standalone business. Either it is too cheap to charge for, or it has to disguise itself as hosting, or it has to attach itself to an API that is already making money.

This Is a Self-Dissolving Product

The essence of a skill is to convert knowledge about how to use something from tacit to explicit.

Take Excel. Before AI, “able to build a financial model a company can actually use” was a trained skill. A new analyst would spend hundreds of hours practicing pivot tables, lookup functions like INDEX/MATCH, array formulas, and keyboard shortcuts, plus the whole flow of how the three statements interact, how to project future cash flows, and how to run sensitivity analysis. This kept a whole value chain alive: Wall Street Prep sells a self-study course for five hundred to a thousand dollars and corporate training sessions for tens of thousands; independent Excel consultants charge by the hour to build budget models and forecasts for mid-sized companies. All this money buys the same thing: knowledge of how to use Excel. That knowledge lives in a small number of heads, and people who want it pay time and money to get it.

A skill compresses that transmission friction to zero. A few hundred lines of Markdown turn everything you used to learn slowly into a manual the AI reads. The file gets loaded into the LLM, and the next moment the AI delivers output at the level of an analyst with five years of experience. A product manager who wants to figure out next quarter’s cash flow spends five minutes explaining the requirement and gets a usable Excel or Python model.
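For concreteness, here is what such a file can look like. This skeleton is only an illustrative sketch: the YAML-frontmatter-plus-Markdown shape follows Anthropic’s published SKILL.md convention, but the name, description, and body content are invented for this example.

```markdown
---
name: excel-financial-model
description: Build three-statement financial models, cash-flow projections,
  and sensitivity tables in Excel. Use when the user asks for budgets,
  forecasts, or valuation models.
---

# Excel Financial Modeling

## Workflow
1. Clarify the time horizon, currency, and key drivers before building anything.
2. Link the income statement, balance sheet, and cash flow statement so they reconcile.
3. Use INDEX/MATCH rather than VLOOKUP for all lookups.
4. Finish with a sensitivity table over the two most important drivers.
```

Everything in the file is plain text, which is exactly the point: there is nothing here for copy protection to attach to.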

But the same act erases the entire value chain above. Wall Street Prep’s course suddenly has no clear position, because an AI loaded with an open-source skill can do everything the course teaches, and do it more reliably than someone who just finished it. Excel consultants stop getting calls; the client just talks to a skill-equipped agent. Every link in that chain has its margin erased by the very skill that taught the AI how to use Excel. And the skill itself, a plain-text file with zero copy cost, cannot command a price either. It destroyed the thing it was supposed to charge for, and inherited none of the income. The skill is a product born with a suicide gene: the moment it actually works, it clears out its own category’s profit pool.

Swap Excel for Salesforce configuration, SAP implementation, Premiere editing, Figma design systems, and the same script plays out.

Value Creation and Value Capture Got Decoupled

To understand why even the Red Hat model (free artifact, paid service) does not apply to skills, you have to go back to something more basic.

Before AI, the act of producing value and the act of charging for it were bundled. I write a piece of code; that code is the value I created, and that code is also the object whose access I can control. Photoshop’s functionality and the Photoshop installer are the same thing—you want the function, you install the package, and controlling the package controls the value. SaaS changed the form but not the logic: the function runs on my server, every call passes through me, so I can charge a subscription or charge per call. Stripe charges 2.9% plus 30 cents because every payment happens on their machine; they sit at exactly the position where a tollgate can be installed.

The skill format breaks the bundle. It does create real value—getting the AI to do the right thing, eliminating a lot of trial and error and rework. But that value happens inside the user’s own LLM call, inside the inference fee they pay to Anthropic or OpenAI, instantly created and instantly gone. At no point does a discrete event called “skill execution” exist where you could install a tollgate. Red Hat could grow around Linux because Linux had a runtime, deployment complexity, and operations needs—those left room for “service.” A skill has no runtime and no deployment complexity; the plain-text file gets loaded directly into inference, and there is no room left for any intermediate layer.

If you accept this, then asking “how does a skill make money” is the wrong question. The right question is: when value creation and value capture are forcibly separated in the AI era, where should the tollgate be built?

What Gets Killed Is Not Just Revenue, but the Data Flywheel

The skill’s suicide gene has a second layer of damage, and this layer cuts deeper than the first.

In the pre-SaaS era, free products had a second life: the data flywheel. Google Search is free, but every search tells it which content deserves to rank higher. Facebook is free, but every like trains its recommendation algorithm. Stripe nominally charges 2.9% plus 30 cents, but it is also the company in the world that understands small-merchant fraud patterns the best, and that data makes its anti-fraud system harder and harder to replicate. In the SaaS era, “not charging” never meant “no business,” because the data flywheel itself is a valuable business, often more valuable than direct revenue.

The skill format cuts this path too. The reason is the same as in the previous section: execution does not happen at your end. I build a skill that teaches an AI how to construct an Excel financial model; users go off and use it; the AI sees all the company data, all the calculation steps, all the user feedback and revisions, and all of it lands in Anthropic or OpenAI’s logs, with no relationship to me as the skill author. I do not know what scenarios my skill is being used in, how well it works, where users get stuck, or how to improve it. By SaaS logic, all this information should flow back to me so I can do better in the next version. But the skill format severs that loop completely. I am stuck iterating on intuition, competing with the LLM provider that sees everything.

A sharper version shows up in the to-B context. Suppose I act as a vendor, build a finance skill, and sell it to an investment bank, whose analysts use it daily on transaction data, client lists, and internal valuation models. These are the bank’s most sensitive assets, and they get sent to Anthropic with every AI call. The skill author does not get the data, and the paying enterprise client does not get to protect the data either. Both sides lose, and the only winner is a third party that never participated in the deal: the company running the model. This is why many enterprises now demand on-premise models and require that data not leave their perimeter—any skill business built on a SaaS model turns the enterprise route into a data drain, leaking on both the vendor and client sides.

Putting the two layers together: the previous section showed the skill cuts the SaaS-era direct revenue path; this section shows it cuts the SaaS-era backup path, the data flywheel, as well. The skill author gets neither money nor data, and can only keep regenerating skill files. All the skills everyone painstakingly builds end up feeding the next generation of models at three model companies. This is the fullest shape of the skill’s suicide gene: it does not just kill itself; it also buries the entire ecosystem’s data flywheel along with it.

The Answer Lies in “What Did AI Make Abundant”

The deepest impact of AI is that it turned things that used to be scarce into things that exist in unlimited supply. It made code abundant, text abundant, images abundant, analysis reports abundant. Everything that can be generated is becoming abundant at exponential speed. This is why traditional business models are failing: they are built on the assumption that these things are scarce.

But abundance of anything throws its opposite into scarcity. This is the source of new business models in the AI era—find what AI made unlimited, and the opposite of that thing is the new scarcity.

The first opposite is relationship. AI made artifacts abundant, so sustained trust between people became scarce. Substack and Patreon grew against the trend in the AI era; subscribing to a specific person’s continuous judgment is much harder to replicate than subscribing to a SaaS tool. AI can imitate any single article you have written, but it cannot build an eight-year relationship with your readers for you, cannot bear the cost of losing followers when your next call turns out wrong. Naval, Lenny, Stratechery have all grown over the past three years on the same logic: build the tollgate on “people in continuous contact with people”—something AI cannot get past.

The second opposite is the present moment. AI made historical information abundant; it can retrieve and summarize anything from the past. But it cannot generate “what is happening right now,” because it has no time machine. Bloomberg Terminal costs thirty thousand dollars a year, selling not the data itself (which became free over time) but “knowing two seconds before everyone else.” Polymarket, a prediction market where people put real money on future events, sells “a snapshot of everyone’s beliefs at this exact moment.” The more AI spreads, the more valuable the property of “now” becomes, because everything that could be generated in advance already has been, and only the present moment cannot.

The third opposite is the physical world. AI can copy bits, not atoms. Anything that has to happen in the physical world is somewhere AI will never reach: manufacturing, logistics, energy, in-person service, compliance signatures, taking on legal liability. AI can draft any contract; the person signing and bearing the legal liability still has to be a human. AI can diagnose any disease; the one who prescribes, operates, signs the death certificate still has to be a doctor. The stronger AI gets, the more valuable these last-mile physical-world steps become, because they become the only segment of the value chain that cannot be compressed. Stripe keeps growing partly because it is deeply tied into the physical-world banking system, compliance regimes and dispute handling—things AI cannot move in the short term.

The fourth opposite, and the one I find most interesting, is judgment and taste. AI made generation abundant, so “which ones are worth generating,” “which ones are worth attention,” “which ones can be trusted” became scarce. Michelin guides, Pitchfork ratings, Tadao Ando’s design signature—these were edge businesses in the pre-AI era because abundance had not arrived yet. In the AI era they may move from the edge to the center. The biggest awesome-claude-skills list already has tens of thousands of skills; nobody can go through them one by one, so “which 20 skills are worth loading” becomes a judgment that can be charged for, turned into a subscription, turned into a certification system. Snyk’s audit in February 2026 found that 13.4% of skills in the community had critical security issues; that single fact turned “a curated, audited, trusted skill set” from an optional service into an enterprise must-have. This is the early form of monetizing judgment scarcity.
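The “curated, audited, trusted skill set” idea can be made concrete with a small sketch. Assume a curator publishes a manifest that pins the exact content of each approved skill by SHA-256 hash, and an enterprise gate rejects anything unknown or tampered with. The manifest shape and function names below are invented for illustration, not any real tool’s interface.

```python
# Sketch of a curated-skill gate: a curator pins each approved skill's
# exact content by SHA-256, and anything unknown or modified is rejected.
# All names and the manifest format are hypothetical.
import hashlib

def sha256_text(text: str) -> str:
    """Hash a skill file's content for comparison against the manifest."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Curator-published manifest: skill name -> pinned hash of its file content.
APPROVED = {
    "excel-financial-model": sha256_text("# Excel modeling skill v1\n..."),
}

def check_skill(name: str, content: str) -> str:
    """Return 'approved', 'modified', or 'unknown' for a candidate skill."""
    pinned = APPROVED.get(name)
    if pinned is None:
        return "unknown"
    return "approved" if sha256_text(content) == pinned else "modified"

print(check_skill("excel-financial-model", "# Excel modeling skill v1\n..."))  # approved
print(check_skill("excel-financial-model", "# tampered\n"))                    # modified
print(check_skill("random-skill", "anything"))                                 # unknown
```

The point of the sketch is that the charging surface is the manifest, not the skill files: the files stay free and copyable, while the curator sells the ongoing judgment about which ones belong in the manifest.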

“AI-Native” Has Probably Been Used Wrong

Looking at the four opposites together, the common thread is: accept that access control is dead, and find new scarcity in places AI cannot produce. Relationship, the present moment, physical-world consequences, judgment and taste—what these four share is that AI cannot produce them, or that AI producing them only makes the real version more valuable.

This implies something. Most companies labeled AI-native today are not actually AI-native. They have AI in the product, but the business model is from the previous era: charge per call, subscribe per seat, price per API usage. These models are built on the assumption that access to AI output can be controlled. Skills opened up the how-to-use-knowledge layer, open-weight models opened up the weights layer, and open agent frameworks opened up the entire execution path; each layer of opening pulls out one pillar of that assumption. A truly AI-native business model should start by accepting that AI output cannot be exclusively controlled, and then build the tollgate on scarcity AI cannot reach.

By that standard, the AI companies that actually make money in the future may look surprisingly “low-tech.” More like premium subscription newsletters, more like boutique consulting, more like curated brands, more like industry associations, more like membership clubs, more like the companies that bear final responsibility in the physical world. Skills are free, model weights are free, agent frameworks are free—everything that can be copied will become free. What is left to charge for is everything AI cannot do for you.

Back to the opening question. A company builds a really powerful skill, how does it make money? The answer is: you do not make money from the skill. You release it for free, and let it become a multiplier on you, your brand, your service capacity. The skill is a business card, not a product. Once you accept that, the supposed monetization wall is no longer there—it was never a bug, it is the definition of this product format. The new business models are not on the skill. They are on the opposite of whatever AI made abundant.