AI Agent · AI Programming

Shopify Opened Its Entire Backend to AI: Why This Matters Through the Lens of the Generative Kernel

What Happened at Shopify

In early 2026, Shopify made a remarkably aggressive move in the e-commerce industry: it opened full read-write access to its entire backend to all AI Agents through a standardized protocol. Product information, order data, inventory status, SEO settings, image assets, checkout flows — AI can now directly operate on all of them. In one demo case, a merchant said “optimize the SEO for all my products,” and Claude automatically updated 32 product listings, rewrote image descriptions, set metadata, and verified each change one by one.

This means different things to different people. For Shopify’s 4.8 million merchants, plugins alone used to cost $200 to $500 per month, a single SEO audit ran at least $2,000, and hiring an assistant cost $50 per hour. Now all these operations can, in principle, collapse into a single instruction. For developers and entrepreneurs, the more meaningful signal lies in Shopify’s strategic choice.

Three Approaches

Shopify’s AI strategy can be summed up in one sentence: instead of building its own AI troupe, it built the stage and invited every AI to perform. By contrast, Salesforce built its own AI troupe in-house, while WooCommerce and BigCommerce focused on making sure their stores are visible to passing AI.

Specifically, the three companies took three different paths.

Shopify chose to build an open protocol layer. ChatGPT, Google Gemini, Microsoft Copilot, Perplexity — all these AI assistants can access a merchant’s products and services through the same protocol. Shopify itself doesn’t build a shopping assistant. Its bet is: AI platforms handle discovery and comparison for consumers, while Shopify controls the final transaction. OpenAI once tried to complete purchases directly within ChatGPT but has since pulled back to a recommendation model. This suggests that the current industry consensus is converging on a separation of discovery and transaction. Shopify’s bet lands precisely on the favorable side of this division of labor.

To support this strategy, Shopify moved remarkably fast in under a year. AI access endpoints went live in summer 2025. In December 2025, it released over 150 updates, opening the ecosystem-wide product search API to all developers. In January 2026, it co-released a universal commerce protocol with Google, with over 20 retailers and payment platforms joining. In March 2026, AI storefronts were enabled by default for all U.S. merchants.

Salesforce took a different path. It embedded AI Agents directly into its proprietary CRM system, where Agents can read and write customer data, trigger pricing strategies, and orchestrate logistics workflows. You give an Agent a business objective — say, “increase customer renewal rate by 5% this quarter” — and it autonomously allocates resources to achieve it. Salesforce’s Agent is more like an internal employee with autonomous decision-making authority, while Shopify’s Agent is more like an external shopping advisor. These two positioning choices correspond to different trust models: Salesforce requires enterprises to hand over more operational decision-making power to AI, whereas Shopify only needs merchants to open up read access and basic operational permissions on product data.

WooCommerce and BigCommerce took a third path: rather than leading any protocol development, they focused on ensuring they can be discovered and accessed by various AI systems. WooCommerce exposes core operations directly through WordPress’s capability layer and has moved fastest in the open-source camp. BigCommerce relies more on payment infrastructure like Stripe to indirectly gain AI accessibility. The advantage of this path is that it requires little upfront investment; the disadvantage is having no say in the interaction experience between AI and merchants.

Each of the three strategies has its use cases, but Shopify’s approach matters most to me for a different reason: it is validating, almost point by point, a framework I proposed half a year ago.

IKEA Furniture and the Generative Kernel

In November 2025, I proposed a thesis in Beyond DRY: Thoughts on AI-Native Software Engineering. The core argument was that in an era where AI helps users write code and customize software, the deliverable of a software company is undergoing a fundamental transformation.

An analogy: we used to deliver finished furniture — a chair with fixed features, ready to use out of the box, but the user couldn’t change its height or color. Now what we deliver is more like IKEA furniture: a kit containing a few critical core components (the seat and legs that users can’t fabricate at home), a comprehensive instruction manual (not for humans — for the assembly robot), and a hex wrench (the robot could use its arm to drive screws, but the wrench makes it faster and more precise).

I call this kit the Generative Kernel, composed of three parts.

First is the core kit, corresponding to those components in the IKEA package that users can’t manufacture themselves. For Shopify, this means payment processing, logistics tracking, inventory management, and the order system. This is the fundamental reason 4.8 million merchants choose Shopify, and the part AI cannot replace. AI can write your product descriptions, but it can’t settle accounts with the bank for you.

Second is guiding knowledge, corresponding to that 100-page IKEA instruction manual. But the reader of this manual is AI, so it can be extraordinarily detailed: design philosophy, best practices, common pitfalls — content that would take a human days to read, AI digests in seconds. Shopify packaged its developer documentation, API references, and practice guides into interfaces directly consumable by AI. Once Claude Code connects, a single command retrieves the complete Shopify development knowledge base, which it then uses to write code, call APIs, and operate stores. Ask Phill’s analysis described this as infrastructure rather than a chatbot: AI can look up how interfaces work, verify code correctness, and complete all operations within a single conversation.
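The idea of guiding knowledge as a machine-readable interface can be pictured as a tiny topic-indexed lookup service that an agent queries before acting. Everything below is a hypothetical illustration, not Shopify’s actual documentation API; the topics and guidance text are invented:

```python
# Hypothetical sketch: guiding knowledge exposed as a queryable index,
# so an agent retrieves best practices before it writes code or calls APIs.
# Topics and text are illustrative, not Shopify's real documentation.

KNOWLEDGE_BASE = {
    "seo/metadata": "Keep titles under 60 characters; give every product a unique meta description.",
    "checkout/errors": "Retry idempotent calls with backoff; never retry a completed payment.",
}

def lookup_guidance(topic: str) -> str:
    """Return the guidance note for a topic, or a miss marker the agent can branch on."""
    return KNOWLEDGE_BASE.get(topic, "NO_GUIDANCE_FOUND")
```

The point of the sketch is the access pattern: the manual is not a document a human reads end to end, but an index the agent hits mid-task, in the same conversation where it acts.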

Third is the leverage toolset, corresponding to the hex wrench in the IKEA package. AI conceptually understands how to search products or complete payments, but actually executing these operations is extremely complex, involving cross-system data queries, security validation, and multi-party protocol coordination. Shopify packaged these operations into high-level tools: ecosystem-wide product search lets AI search across the entire Shopify ecosystem’s product data with a single query; checkout tools let AI securely push a shopping cart all the way through to payment completion, supporting multiple mainstream payment methods and Visa and Mastercard’s Agent payment protocols. These tools turn operations that AI conceptually understands but is extremely error-prone at implementing into deterministic single-step calls.
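The value of a leverage tool is collapsing a multi-step, error-prone operation into one deterministic call. A minimal sketch of that shape, with an in-memory catalog standing in for the real search backend (all names and data are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Product:
    id: str
    title: str
    price_cents: int
    in_stock: bool

# In-memory stand-in for an ecosystem-wide product catalog.
CATALOG = [
    Product("p1", "Walnut desk", 42000, True),
    Product("p2", "Oak chair", 9900, False),
    Product("p3", "Standing desk", 68000, True),
]

def search_products(query: str, in_stock_only: bool = True) -> list[Product]:
    """One deterministic call that hides matching, filtering, and ranking.
    An agent invokes this instead of stitching the steps together itself."""
    q = query.lower()
    hits = [p for p in CATALOG if q in p.title.lower()]
    if in_stock_only:
        hits = [p for p in hits if p.in_stock]
    return sorted(hits, key=lambda p: p.price_cents)
```

An AI conceptually understands each step inside the function, but exposing the whole thing as a single call is what makes the operation reliable rather than error-prone.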

The Generative Kernel framework was distilled from personal development experience. When I wrote that article half a year ago, it was still a theoretical derivation, and I used Stripe as the example. Shopify, with its $378 billion in annual transaction volume, has provided a much larger-scale empirical validation.

Moreover, Shopify’s platform strategy itself is a manifestation of the Generative Kernel philosophy at the platform level: abandon the attempt to exhaustively develop features for every possible user need, and instead maximize the potential for external AI to create on top of your capabilities. Reviewing the three platform strategies through the Generative Kernel framework, the differences become clearer: the three companies are broadly similar at the core kit level — each has irreplaceable commercial capabilities. But in the openness of guiding knowledge and leverage toolsets, Shopify has gone the furthest. Salesforce’s guiding knowledge primarily serves Agents within its own ecosystem, and WooCommerce’s leverage toolset relies on third parties to build.

Half a year ago I wrote that we are shifting from building software to building the potential for software. Shopify is the first large-scale empirical validation of that thesis.

Problems at the Protocol Layer

Shopify validated that the Generative Kernel concept is correct, but the protocol layer carrying this concept has significant problems.

The protocol Shopify uses is called MCP (Model Context Protocol), a standard introduced by Anthropic for AI-to-external-tool communication. In March 2025, I analyzed in The Temptation of a Unified Tool Protocol that MCP’s popularity stems primarily from Anthropic’s commercial push (a complete development ecosystem, an open narrative, brand trust), and that it occupies a technically middling position.

Half a year later, the problems have become more apparent. In Why OpenAI Apps SDK’s Support for MCP Is Actually MCP’s Crisis, I analyzed a critical signal. MCP’s core design philosophy dictates that all information must flow through AI’s comprehension scope — AI has complete visibility over all operations. This assumption is elegant, but it was produced in a lab. OpenAI, in actual engineering, encountered UI rendering requirements and found that MCP’s architecture couldn’t handle them, so it punched a hole in the protocol, bypassing this limitation through a proprietary extension. This fundamentally violates MCP’s design philosophy.
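MCP’s everything-flows-through-the-model philosophy is visible in its wire format: a tool call is a JSON-RPC 2.0 request, and the result comes back as content blocks the model reads directly. The message shapes below follow the published MCP specification; the tool name and arguments are made up for illustration:

```python
import json

# A tools/call request as an MCP client would send it (JSON-RPC 2.0).
# The tool "update_product_seo" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "update_product_seo",
        "arguments": {"product_id": "p1", "meta_description": "Solid walnut desk"},
    },
}

# The server's result: content blocks that land in the model's context.
# This is the crux of the design -- the AI sees every tool result in full,
# which is exactly the assumption UI rendering had to punch through.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Updated SEO metadata for p1."}],
        "isError": False,
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```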

Shopify’s own approach also confirms that a single protocol is insufficient. It didn’t use MCP alone but simultaneously advanced the universal commerce protocol co-developed with Google, the payment protocol co-developed with Stripe, Google’s payment authorization layer, and an inter-Agent coordination protocol. A single merchant’s store is simultaneously exposed under four or five different protocols, each solving problems at a different layer. This is precisely the scenario I predicted in my MCP analysis: MCP is closer to expressive protocols like SQL and CSS, which inherently tend to fragment into different dialects, fundamentally different from plug-and-play pipe protocols like HTTP and USB.
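One store speaking four or five protocols at once is easiest to picture as thin adapters over a single core operation, each reshaping the same answer for a different dialect. A hedged sketch; none of these shapes is a real wire format:

```python
# Sketch: one core operation, multiple protocol-specific adapters.
# Each adapter reshapes the same result for a different protocol dialect --
# the fragmentation described above, made concrete. All shapes are invented.

def core_get_inventory(sku: str) -> dict:
    """The single source of truth every protocol adapter wraps."""
    stock = {"desk-01": 12, "chair-02": 0}
    return {"sku": sku, "available": stock.get(sku, 0)}

def as_mcp_content(sku: str) -> dict:
    """MCP-style: result rendered as model-readable text content."""
    inv = core_get_inventory(sku)
    return {"content": [{"type": "text", "text": f"{inv['sku']}: {inv['available']} in stock"}]}

def as_commerce_feed(sku: str) -> dict:
    """Commerce-protocol-style: structured fields for machine consumption."""
    inv = core_get_inventory(sku)
    return {"sku": inv["sku"], "availability": "in_stock" if inv["available"] else "out_of_stock"}
```

The core stays singular; the dialects multiply at the edge, which is what expressive protocols like SQL and CSS have always done.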

Engineering-level feedback points in the same direction. Community forums show merchants reporting intermittent errors; community-maintained tools break when Shopify updates its interfaces; security reports reveal data leak risks.

So the assessment operates on two levels. The direction of platforms transforming from feature providers into AI-operable infrastructure is certain and irreversible. But the protocol layer remains in an early-stage melee. For developers, understanding the Generative Kernel as a thinking framework — what the core kit, guiding knowledge, and leverage toolset each correspond to and how to design them — has more lasting value than binding to any specific protocol implementation.