
OpenAI and Cursor both turned to Plugins. This is what they were really after.

In the same April-May 2026 window, OpenAI and Cursor both shifted focus from Skills to Plugins. OpenAI removed its flagship frontend skill from the official list and repackaged it as a Codex Plugin. Cursor launched a plugin-builder on its marketplace, encouraging developers to write plugins for the Cursor ecosystem. Coming alongside the GPT-5.5 Instant release, the moves looked like a coordinated product update.

But the surface action (moving from Skill to Plugin) was driven by entirely different forces at each company. The problems they need to solve, and the survival threats they face, barely belong in the same framework.

The same diagnosis

If you have ever written or installed an AI skill, you may have felt it intuitively: this thing is hard to charge money for. The April 24 article “Skills carry a suicide gene” broke that intuition into three layers.

First, a skill is a plaintext file. Once a buyer pays, they can paste it on GitHub and the second buyer never comes. Zero reproduction cost means zero pricing power. Second, try hosting it as a managed service? You end up reselling AWS, because users can always download the file and run it locally for free. Third, the data flywheel doesn’t work either – execution happens inside the user’s LLM call, data goes into the provider’s logs, and the skill author gets neither money nor feedback to improve.

The result: community marketplaces like Agensi and skills.sh tried to distribute skills but all hit the same wall – once the buyer has the file, they no longer need the platform. The problem isn’t execution quality. The product form itself can’t sustain a business model.

Plugins are the prescription OpenAI and Cursor each wrote for this diagnosis.

What Plugins add that Skills don’t

OpenAI Codex Plugins and Cursor Plugins agree on what needs fixing.

A Skill is a .md file. A Plugin adds three things.

First, a runtime environment. A Plugin doesn’t just instruct the AI on what to do – it declares what runtime it needs. Codex Plugins use agents/openai.yaml to specify UI and dependencies; Cursor Plugins use MCP server configs to declare external tools. The user doesn’t get a file; they get a capability that runs on the platform. This gives the platform a reason to charge: “I host the runtime for you.”

Second, credentials. Plugin authentication runs on the platform side – OAuth tokens, API keys, database connection strings are all managed inside the Plugin’s hosted environment. The platform’s revenue point shifts from “selling a file” to “selling a service chain.” This is essentially the “hosted route” the suicide gene article analyzed, but now with distribution and discovery as added value.

Third, distribution. Skills rely on GitHub and community lists (awesome-claude-skills, etc.), where the author’s job ends at publication. A Plugin catalog provides ongoing distribution – users search, install, and update from within the tool. If a Skill is a business card, a Plugin catalog is LinkedIn: your capability is always on the shelf, and someone might walk by at any time.

Together, these three things turn knowledge into a service. Knowledge, once transmitted, cannot be taken back. A service is ongoing – it needs runtime, credential maintenance, and updates. That’s where the revenue point lives.
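A sketch can make the first of these – the runtime declaration – concrete. This is purely illustrative: the actual agents/openai.yaml schema is not documented in this article’s sources, and every field name below is invented:

```yaml
# HYPOTHETICAL plugin manifest. Every field name is invented for
# illustration; this is not the actual agents/openai.yaml schema.
name: db-schema-reviewer
runtime:
  container: python:3.12      # environment the platform must provision
  dependencies:
    - sqlglot                 # tools installed into the hosted runtime
ui:
  panel: schema-diff-view     # UI surface rendered by the platform
credentials:
  - DATABASE_URL              # held platform-side, never shipped in a file
```

The structural point survives the invented details: none of this is useful as a standalone file. The declarations only mean something to a platform that provisions the runtime and holds the credentials – which is exactly what restores pricing power.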

OpenAI’s real motive: the execution layer defense

To understand why OpenAI built Plugins, you need to see the threat it faces in 2026.

By late 2025, model commoditization was unmistakable. Benchmark scores were converging, going from double-digit gaps to a few percentage points. Enterprises started adopting multi-model strategies, choosing vendors based on cost, speed, and integration rather than brand loyalty (The Economist). Meanwhile OpenAI’s costs were growing with revenue – frontier training runs cost hundreds of millions to billions, and inference scales directly with usage.

The risk: if models become a commodity, OpenAI slides from “platform definer” to “one of many AI providers” – influential, but squeezed between costs and competition. A Forbes piece captured OpenAI’s response in one line: “Interface-layer dominance drives usage and subscription revenue. Execution-layer dominance shapes operational dependency and long-term lock-in.”

In plain terms: selling intelligence at the model layer earns you revenue. Running workflows at the execution layer earns you lock-in. Codex and its Plugin system are OpenAI’s execution-layer bet.

The numbers show the scale. Codex went from 30,000 weekly active developers in late 2025 to 3 million in April 2026 – a 100x increase. Nearly half of Codex usage is now non-coding tasks. It is moving from a developer tool to a general-purpose work execution environment. The Plugin catalog is the app store for this environment: the more plugins a user installs, the deeper their workflow dependency on Codex, even if the underlying model changes.

OpenAI isn’t building Plugins to make direct money from them. It wants to build an execution layer above the model layer and bind users to workflows rather than to models. Plugins are just one piece of that strategy – making users feel “I use Codex’s ecosystem,” not “I use GPT-5.5’s intelligence.”

Cursor’s real motive: escaping the supplier trap

Cursor faces a different but equally existential threat.

Think of a simple business problem: what happens when your wholesaler opens their own store? Cursor builds an AI editor and also trains its own models (the Composer series) – but its training starts from an open-source base model, not full pre-training from scratch. Composer 2 uses Kimi K2.5 as its foundation, with continued pretraining and RL on top (Cursor Composer 2 analysis). This means Cursor depends on third-party models from Anthropic and OpenAI, while simultaneously competing with Anthropic’s Claude Code. TechCrunch called Claude Code Cursor’s biggest competitor.

The more urgent number: until late 2025, Cursor was losing money on every sale. The cost of calling third-party models exceeded what it could charge users, so gross margins were negative. Only after launching its own Composer model in November 2025 and adding low-cost alternatives like Kimi did it achieve marginally positive gross margins – and only on enterprise accounts; individual developer accounts still lose money.
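The squeeze can be made concrete with a toy per-user calculation. All numbers below are hypothetical, chosen for round arithmetic – none come from Cursor’s actual prices or costs:

```python
# Toy gross-margin model for an AI editor that resells model calls.
# All figures are hypothetical illustrations, not Cursor's financials.

def gross_margin(subscription: float, cost_per_mtok: float,
                 mtok_used: float) -> float:
    """Monthly gross margin per user, as a fraction of revenue."""
    inference_cost = cost_per_mtok * mtok_used
    return (subscription - inference_cost) / subscription

# Heavy user on a frontier third-party model: $30 of inference
# against a $20 subscription.
before = gross_margin(subscription=20.0, cost_per_mtok=15.0, mtok_used=2.0)

# Same usage routed to a cheaper in-house / open-weight model.
after = gross_margin(subscription=20.0, cost_per_mtok=3.0, mtok_used=2.0)

print(f"frontier model: {before:.0%}, in-house model: {after:.0%}")
# → frontier model: -50%, in-house model: 70%
```

The only lever in this sketch is cost_per_mtok – which is why, on the article’s account, the margin flip showed up only after Cursor brought its own Composer model online.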

Put together, Cursor’s motive for Plugins becomes clear. It needs a differentiation layer above the model layer – a part of the developer workflow that is uniquely Cursor’s and cannot be migrated seamlessly to Claude Code or Codex. This differentiation layer is the ecosystem built from Plugins + .cursorrules + MCP integrations. If a user’s code review pipeline, database schema management, and deployment workflow are all wired through Cursor plugins, switching to Claude Code is no longer a one-click decision.

This explains why Cursor’s Plugin system emphasizes MCP integration and .mdc rule files – these are deeply bound to the editor, unlike SKILL.md, which can be carried across platforms.

Same answer form, different exam questions

OpenAI’s threat is model commoditization. Its response is an execution layer above the model layer. Plugins are part of that execution layer – making Codex the place where work actually happens.

Cursor’s threat is supplier replacement. Its response is an ecosystem layer above the model layer. Plugins are part of that ecosystem layer – making Cursor’s editor experience non-replicable.

Neither company expects Plugins to generate significant direct revenue. Plugins are a strategic tool, not a product. This follows familiar patterns: browser extension stores don’t make money directly; they make the browser the gateway to the desktop. The WordPress plugin ecosystem doesn’t take a cut; it makes WordPress dominate the CMS market.

The difference: OpenAI has full control from model to execution layer, so its Plugins can reach deeper into runtime capabilities (like background computer use). Cursor’s control ends at the editor layer, so its Plugins rely more on cross-platform protocols like SKILL.md and MCP to fill capability gaps. This drives their Plugin systems in different directions: OpenAI can bind more capabilities to Codex’s runtime, while Cursor must maintain compatibility across multiple models and tools.

The answer: OpenAI and Cursor have different objectives, but their Plugin strategies are compatible for now. OpenAI builds an execution layer; Cursor builds a differentiation layer. They use overlapping tools (MCP, SKILL.md, Plugin), but from different starting points. This compatibility means developers don’t need to pick sides in the short term, but the pressure to diverge is real.

A structural guess: will Plugins converge?

Looking two years ahead, the two Plugin strategies could diverge or converge.

Divergence is natural: Codex Plugins become increasingly tied to Codex’s runtime (background computer use, cross-session automations, chat-specific UI), while Cursor Plugins become tied to editor-specific features (.cursorrules injection, Composer interaction patterns, MCP GUI management). Each platform develops advanced Plugins that run best on its own turf.

Convergence is also possible: SKILL.md and MCP are still evolving rapidly. If SKILL.md extends to cover Plugin metadata (dependency declarations, auth config, UI specs), and MCP covers the runtime binding that Plugins need, the difference between Plugin and Skill shrinks to “who handles distribution” – a layer that could standardize, with platforms differentiating only on discovery and payment.
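One way to picture that convergence is SKILL.md frontmatter growing plugin-style fields. This is entirely speculative – no published spec defines the keys below; they exist only to illustrate the scenario:

```yaml
# SPECULATIVE extended SKILL.md frontmatter. No published spec
# defines these keys; they illustrate the convergence scenario only.
---
name: deploy-checklist
description: Pre-deploy review steps for a web service
# Plugin-style extensions the convergence scenario imagines:
runtime:
  dependencies: [docker]      # what a hosting platform would provision
auth:
  required: [REGISTRY_TOKEN]  # credentials held platform-side
ui:
  surface: checklist-panel    # optional platform-rendered UI
---
```

If something like this standardized, a platform’s remaining edge really would shrink to discovery and payment.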

Which path wins depends on cash-flow pressure. The major platforms are all in investment mode – Anthropic’s Enterprise Marketplace charges 0% commission, the GPT Store never delivered on its promised revenue share, and Codex Plugins have no public split either. The longer the investment period lasts, the less incentive platforms have to wall off their Plugin ecosystems, because community sharing grows the ecosystem faster. But if one player discovers Plugins as an independent revenue engine, the incentive to standardize weakens.

No one knows how long that window will stay open. But writing a skill file that works across 30 tools (Noqta) looks like a safe bet for the rest of 2026.

Research date: 2026-05-07. Key sources: The Economist, “OpenAI faces make-or-break year”; Forbes, “Codex Agents Running Data Platform”; TechCrunch, “Cursor $50B valuation”; Noqta, “SKILL.md Adoption”; “Skills carry a suicide gene”; “OpenAI Codex Update”; Digital Applied, “Codex Plugin Analysis”.