The US Now Views China's Open Source AI as an Independent Competitive Path

Date: 2026-03-23


On March 23, 2026, the U.S.-China Economic and Security Review Commission (USCC) released a research report titled Two Loops: How China’s Open AI Strategy Reinforces Its Industrial Dominance. Reuters’ coverage that day used a striking phrase: China’s open-source AI is creating a “self-reinforcing competitive advantage.”

If you only look at the headline, it’s easy to misinterpret this as another round of the “China’s AI is about to overtake the US” narrative. But the report’s focus lies elsewhere. It acknowledges that U.S. closed-source models remain “slightly ahead” in most evaluation dimensions. What truly unnerves U.S. policy circles is a different issue: China is using an entirely different path to compete for global AI influence, and existing U.S. policy tools—primarily chip export controls—have almost no leverage over this path.

For Chinese AI practitioners, this assessment deserves serious attention. It means the U.S. has officially begun evaluating open-source model distribution, API pricing, framework compatibility, and industrial deployment as an independent dimension of competition. This dimension goes beyond benchmark races and points toward a contest over who becomes the default choice for global developers.

What Happened

First, let’s clarify the nature of this report. The USCC is a permanent advisory body established by the U.S. Congress to track economic and security issues between the U.S. and China. However, Two Loops is a Research Working Group Paper, representing the analysis of the research team rather than the Commission’s formal annual report or policy recommendations. This distinction is important: it has analytical value but is not equivalent to the official stance of the U.S. government.

The report proposes a “two loops” framework to explain the expansion mechanism of China’s open-source AI.

The first is the digital loop: after an open-source model is released, global developers download, fine-tune, and build derivative models on top of it. Usage feedback then drives model iterations, attracting even more developers. This is a classic network effect story. The second is the physical loop: AI models are deployed in industrial scenarios like manufacturing, logistics, and robotics, generating real-world interaction data. This data feeds back into model improvements, which then allow the models to enter even more scenarios.

The USCC argues that these two loops are mutually reinforcing. Furthermore, it explicitly points out a policy awkwardness:

U.S. export controls primarily target the digital loop—restricting access to advanced chips used for frontier model training—but are not well suited to addressing the physical loop of deployment-driven data creation and accumulation across China’s manufacturing base.

In other words, chip controls target the training phase, but the global proliferation of Chinese models and the accumulation of industrial data happen during the deployment phase. Deployment typically uses distilled or quantized small models, which rely far less on frontier chips than pre-training does. Export controls have almost no impact on this path.
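To see why deployment is so much less chip-hungry than pre-training, a back-of-the-envelope memory calculation helps. The sketch below is illustrative only: the 7-billion-parameter model size and the bit widths are assumptions for the example, not figures from the USCC report.

```python
# Rough weight-memory math for serving a distilled/quantized model.
# Parameter count and bit widths are illustrative assumptions.

def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

fp16 = weight_memory_gb(7, 16)  # ~14 GB: roughly a data-center GPU's worth
int4 = weight_memory_gb(7, 4)   # ~3.5 GB: fits on a consumer GPU or edge device

print(f"fp16: {fp16:.1f} GB, int4: {int4:.1f} GB")
```

A quantized 7B model needs only a few gigabytes of memory for its weights, which is why inference at the edge can run on hardware far below the export-control thresholds aimed at frontier training clusters.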

Qwen: The Most Compelling Case

Among all the Chinese models named in the USCC report, Alibaba’s Qwen series is the most well-documented case.

Qwen has released open weights across an entire product line, from lightweight models with 600 million parameters to large models with hundreds of billions. Global developers can fine-tune, distill, and build on top of Qwen rather than merely call an API—this is the core difference in competitive logic from closed-source models.

On HuggingFace, Qwen’s proliferation data is quite remarkable. According to Xinhua News Agency, citing HuggingFace data, as of January 2026, the Qwen family has surpassed 700 million cumulative downloads. In December 2025 alone, its monthly downloads exceeded the sum of the models ranked second through ninth, overtaking Meta’s Llama to become the most downloaded open-source model series on HuggingFace. Regarding derivative models, the USCC report cites a figure of over 100,000 by the end of 2025, while subsequent third-party sources have updated this to over 180,000. Statistics from Nathan Lambert, founder of the ATOM project, show that Qwen derivatives now account for over 40% of new language model derivatives on HuggingFace, while Meta’s Llama has dropped to about 15% (MIT Technology Review, 2026-02-12).

Equally important is that Qwen is natively supported by mainstream inference and fine-tuning frameworks such as vLLM, HuggingFace Transformers, TensorRT-LLM, and LlamaFactory. This means there is almost no additional technical integration cost for developers using Qwen; it is already a first-class citizen in the developer toolchain.

Of course, download counts and the number of derivative models represent different things. The former reflects developer interest and trial behavior (including repeated downloads and automated scripts), while the latter reflects the depth of the technical ecosystem. Neither directly equates to market share in enterprise-grade production deployment. As the USCC report notes, “measuring use of AI models in applications is difficult compared to tracking model downloads and derivative uploads.” This point should be kept in mind when interpreting all subsequent data.

Price and Distribution: Supplementary Signals from Kimi and MiniMax

Beyond Qwen, the evidence of global expansion for Moonshot AI's Kimi K2.5 and MiniMax's M2.5 is less extensive, but the two models provide meaningful signals along two dimensions: price competitiveness and third-party platform accessibility.

On the pricing front, Kimi K2.5's input tokens are priced at approximately $0.60 per million, while Anthropic's Claude Opus 4.6 charges $5 per million—a difference of roughly eightfold (nxcode.io). MiniMax M2.5 is even cheaper, with input at $0.30/M and output at $1.20/M; in independent benchmarks, its cost per run is about one-third that of Claude Sonnet (LLM Benchmark 2026). It's worth noting that API pricing comparisons depend heavily on the specific use case and the distribution of token lengths; these figures come from public price lists and third-party tests, and actual differences may be larger or smaller in practice.
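The input-price gap above can be made concrete with a small cost sketch. Only the input-side list prices cited in this article are used; output pricing and the real input/output token mix of a workload will change these ratios, so treat this as a rough comparison, not a billing estimate.

```python
# Input-token cost comparison using the public list prices cited above,
# in USD per million input tokens. A real workload's blended cost also
# depends on output tokens, which are not modeled here.

input_price = {
    "Kimi K2.5": 0.60,
    "MiniMax M2.5": 0.30,
    "Claude Opus 4.6": 5.00,
}

def input_cost(model: str, million_tokens: float) -> float:
    """Cost in USD of processing `million_tokens` million input tokens."""
    return input_price[model] * million_tokens

# Example: a workload consuming 10M input tokens per day.
for model in input_price:
    print(f"{model}: ${input_cost(model, 10):.2f}/day")

ratio = input_price["Claude Opus 4.6"] / input_price["Kimi K2.5"]
print(f"Opus/Kimi input price ratio: {ratio:.1f}x")  # ~8.3x
```

This is the arithmetic behind the "about eightfold" figure: $5.00 / $0.60 ≈ 8.3.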

In terms of distribution, third-party inference platforms like Together AI and Fireworks AI have already included Qwen, Kimi K2.5, and MiniMax M2.5 in their standard model libraries. On the OpenRouter platform, API calls for Chinese models surpassed those for U.S. models for the first time in February 2026 (CGTN, 2026-02-28). In the third week of February, four of the top five models by call volume on OpenRouter were from Chinese vendors, collectively accounting for 85.7% of call volume (36Kr).

However, OpenRouter data should be interpreted with caution. It is an API aggregation platform for developers, and its user base leans toward price-sensitive individual developers and small teams. It does not represent the full picture of enterprise-grade deployment, nor does it reflect usage distribution on major cloud platforms like AWS, GCP, or Azure.

Why the US is Getting Nervous

With the evidence above, the reason the USCC report has garnered attention in U.S. policy circles becomes clearer.

In recent years, the main narrative of U.S.-China AI competition has revolved around two axes: whose model scores highest on the toughest tasks (frontier benchmarks) and who can mobilize the most compute to train the largest models (training compute). The U.S. still leads in both dimensions. The logic of export controls is also built on this framework: by restricting China’s access to the most advanced chips, its training capabilities can be limited.

The impact of the USCC report lies in its “two loops” analysis, which points out that competition may simultaneously occur on two other levels: whose model is most widely adopted by global developers and who can accumulate the most real-world data through industrial deployment. In these two areas, the leverage of chip controls is weak. As the report summarizes:

Open model proliferation creates alternative pathways to AI leadership.

The implication is that even if the U.S. maintains its lead in frontier capabilities, that technical leadership will not automatically translate into industrial dominance or geopolitical influence should Chinese models occupy the default position in the global developer ecosystem through open-source distribution.

Moreover, this concern is not isolated. As early as March 4, the USCC's China Bulletin specifically analyzed the inference cost advantage of Qwen 3.5. The Office of the Director of National Intelligence (ODNI) also noted in its March 19 global threat assessment that China is "driving AI adoption domestically and internationally at scale" (Defense One, 2026-03-19). Vigilance regarding China's AI expansion has moved from the realm of economic security into the scope of intelligence and defense assessments.

Industry Consensus or Independent USCC Judgment?

Looking only at the March 23 report, a reader might think this is a brand-new argument from U.S. policy circles. However, looking back at public statements from the industry over the past few months, the picture is different. A more accurate assessment is that the industry has long understood AI competition through dimensions like deployment, inference, ecosystem, and industrial integration. The USCC has now translated this language into policy and security analysis.

Jensen Huang is a key reference point. Over the past year, NVIDIA has repeatedly shifted the focus of AI competition from the models themselves to inference, system deployment, and physical AI. After GTC 2026, eWeek summarized Huang’s entire keynote as “AI is moving from model spectacle to systems economics.” This judgment is very close to the USCC’s “two loops”: the former says AI value is beginning to reside in continuous inference, agent software layers, and physical AI workloads, while the latter says China may form two self-reinforcing loops through open-source distribution and industrial deployment. While the phrasing differs, the underlying observation is consistent: everyone is shifting the focus of competition from “who trains the strongest model” to “who controls deployment and usage.”

Microsoft’s Satya Nadella has used similar language. In early 2026, he summarized the stage of enterprise AI as moving from “discovery” to “diffusion.” This word is crucial. It’s no longer about what capabilities a model first demonstrates, but whether those capabilities can enter real organizations, real processes, and real workloads. Amazon CTO Werner Vogels was even more direct: “AI capability is sufficient; the constraint is human-centered scaling.” This means model capability itself is no longer the sole bottleneck; what truly determines competition is who can integrate AI into human workflows, corporate systems, and environments that can run at scale.

Taking this perspective a step further, the actions of companies like AWS, OpenAI, and Google all point the same way. Andy Jassy has repeatedly emphasized in recent public statements that inference costs will account for the lion's share of future AI spending. Sam Altman has made enterprise sales the top priority for 2026, and OpenAI has begun talking more about enterprise workflows, structured usage, and platform entry points. Google, meanwhile, is embedding Gemini as deeply as possible into Search, Workspace, Chrome, and Android. Their commercial paths differ, but none of them view competition as a simple benchmark race. The real battle is over default entry points, depth of usage, and deployment location.

Research institutions are also converging on the same conclusion. Gartner talks about task-specific agents and domain-specific models; McKinsey discusses the chasm from pilot to scale; Deloitte focuses on governance capabilities in the deployment phase. The language of these reports is more oriented toward corporate management, but the underlying meaning is very close to the USCC’s: the stage of demonstrating model capabilities has passed, and the next stage is about who can turn AI into a reproducible, distributable, and operable system.

In this context, it’s easier to locate the significance of the USCC report. It didn’t suddenly invent a brand-new competitive framework. Rather, the USCC translated a shift that the industry has been discussing for some time into the language of policy and national competition. The industry talks about inference economics, enterprise diffusion, ecosystem control, and physical AI. The USCC talks about the digital loop, the physical loop, and the blind spots of export controls. Different languages, same observation.

This is why the report deserves the attention of Chinese AI practitioners. It shows that U.S. policy circles are catching up to a judgment that previously existed mainly in the industry: in the coming years, AI competition will not just be about whose model is stronger, but also whose model is cheaper, easier to access, more widely adopted by global developers, and who can deploy models into larger-scale real-world scenarios. The USCC’s focus on China’s open-source AI isn’t just because it saw some platform data, but because this mode of competition has been proven by the industry to have real-world significance.

Of course, this doesn’t mean the industry and policy circles have reached completely identical conclusions. The industry cares more about ROI, deployment efficiency, and ecosystem positioning, while policy circles care more about whether this shift will weaken existing technical and geopolitical advantages. But if we ask whether this is a lone voice or a mainstream perspective, the answer is clearly the latter.

The Changing Logic of Competition

Returning to the original question: what does this mean for Chinese AI practitioners?

The core change revealed by the USCC report is that U.S. policy circles have begun using a new analytical framework to evaluate U.S.-China AI competition. In this framework, the dimension of competition has expanded from “who trains the strongest model” to “whose model is most widely adopted by global developers and industrial scenarios.”

This has two direct implications for Chinese AI practitioners.

First, the presence of Chinese open-source models in the global developer ecosystem is being taken seriously, but it is also being scrutinized as a policy issue. Qwen’s ecosystem status on HuggingFace, the penetration of Chinese models on inference cloud platforms, and API pricing far lower than U.S. closed-source models—these were originally normal results of market competition, but they are now being brought into the analytical lens of geopolitical security. Policy discussions targeting the distribution of open-source models may follow, and this warrants continuous monitoring.

Second, the report also serves as a reminder that adoption and frontier capability are two different things. Chinese models indeed have advantages in distribution speed, price, and developer entry barriers, but there are still significant gaps in enterprise trust, compliance auditing, hyperscale cloud distribution, and frontier task capabilities. Equating progress in adoption with overall leadership is the same mistake as equating benchmark leads with industrial dominance—both simplify competition into a single dimension.

The reality is that the dimensions of competition are increasing, and the respective advantages of the U.S. and China are concentrated in different areas. U.S. policy circles have only just begun to seriously face this reality, and that in itself is a change worth noting.


References

Primary Sources

  1. USCC, Two Loops: How China’s Open AI Strategy Reinforces Its Industrial Dominance, 2026-03-23. PDF | Report Page
  2. USCC, China Bulletin: March 4, 2026. Link

Media Coverage

  1. Reuters, “China’s open-source dominance threatens US AI lead, US advisory body warns,” 2026-03-23. Link
  2. Defense One, “US intelligence elevates AI as a top global threat in new report,” 2026-03-19. Link

Adoption and Technical Analysis

  1. MIT Technology Review, “What’s next for Chinese open-source AI,” 2026-02-12. Link
  2. Xinhua News Agency, “Alibaba’s Qwen leads global open-source AI community with 700M downloads,” 2026-01-13. Link
  3. AICerts, “Alibaba Qwen Model Downloads: Metrics and Enterprise Impact.” Link
  4. CGTN, “Chinese AI models overtake U.S. rivals in global token usage,” 2026-02-28. Link
  5. 36Kr, “February Sees Surge in AI Usage: China’s AI Call Volume Overtakes US,” 2026-02. Link

Pricing and Benchmarks

  1. nxcode.io, “Kimi K2.5 Pricing 2026.” Link
  2. LLM Benchmark 2026, “38 Actual Tasks, 15 Models.” Link
  3. morphllm.com, “Best AI for Coding (2026).” Link

鸭哥每日手记

Daily in-depth AI news and analysis