March 22, 2026
If you develop software in China, starting March 15, 2026, applying for software copyright registration (commonly known as “ruanzhu”) requires you to hand-copy a pledge on the application form: you did not use AI to write code, draft documentation, or generate application materials. Sign your name, attach your national ID number. If the declaration is later found to be false, you will be placed on a copyright registration dishonesty list and the record will be entered into your personal credit file.
This article attempts to answer the following questions: what this rule actually changes, what specific problem it is trying to govern, why it conflicts with using Copilot to write code, and how the situation might evolve from here.
If you have never filed for ruanzhu, here are a few things you need to know.
Software copyright registration is an administrative service provided by the Copyright Protection Center of China (CPCC). It is not the copyright itself (copyright arises automatically upon creation) but rather an official certificate proving that you own the copyright to a particular piece of software. This certificate has real commercial value in several scenarios.
The first scenario is listing Android apps. In China, major Android app stores—Huawei, Xiaomi, OPPO, vivo—all require developers to submit a ruanzhu certificate before an app can be listed. Xiaomi’s developer platform explicitly states that “the software copyright certificate must match the name of the published application.” The iOS App Store does not require ruanzhu, but Chinese Android markets account for over 70% of mobile distribution, and bypassing Android is usually not commercially viable.
The second scenario is fundraising and due diligence. In investment transactions, the ruanzhu certificate is part of a company’s IP asset inventory. Investors’ legal teams will verify whether your software copyright ownership is clean. Global Law Office, in an analysis of AI company investment due diligence, noted that “clarity of IP ownership, technological independence, and potential dispute risks directly affect a company’s compliance and going-concern capability.”
The third scenario is High and New Technology Enterprise (HNTE) certification. This is a tax incentive program: certified enterprises can reduce their corporate income tax rate from 25% to 15%. One of the certification requirements is that the enterprise must hold a certain number of intellectual property rights, and 10 ruanzhu certificates can satisfy this requirement. This condition has created a steady market demand: some companies file for ruanzhu not because they have software to protect, but to accumulate 10 certificates for the tax benefit.
These three points explain why ruanzhu registration volumes keep growing year after year. In 2025, nationwide ruanzhu registrations reached 3.18 million, a year-on-year increase of 12.58%. This growth is not entirely driven by actual software development activity.
On March 15, 2026, the Copyright Protection Center of China released a new version of the software copyright registration application form and simultaneously announced the establishment of a copyright registration integrity system. According to comparisons between the old and new forms compiled by the Shenzhen and Anhui Software Industry Associations, the new form has two key changes.
The first is a hand-copied pledge added to the signature page. The original text reads: “This software was indeed independently developed by a human being, without the use of AI to develop or write code, draft documentation, or generate registration application materials.” Immediately following: “In the event of any misrepresentation or fraud, I voluntarily accept placement on the copyright registration dishonesty list and entry into my personal credit record, and accept all legal liabilities and consequences arising therefrom.” The handler must sign and attach their national ID number.
The second change is that the software’s main function description has been expanded from a maximum of 200 characters to a required range of 500 to 1,300 characters.
These two changes point toward the same objective: making templated, mass-produced applications harder to submit. The hand-copied pledge adds friction to automated filing, the ID binding pushes legal liability from the organization down to the individual, and the substantially longer function description requires the applicant to demonstrate specific knowledge of the software.
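Of the two changes, only the length requirement is mechanically checkable before filing. A minimal sketch of such a pre-filing check follows; the 500–1,300 range comes from the new form, while counting by Unicode code points (Python's `len` on a `str`, which matches per-character counting for Chinese text) is an assumption about how the CPCC counts, not something the announcement specifies:

```python
def check_function_description(text: str, lo: int = 500, hi: int = 1300) -> tuple[bool, int]:
    """Check whether a main-function description meets the new form's
    length requirement.

    Length is counted in Unicode code points (len of a Python str),
    which matches per-character counting for Chinese prose. Whether the
    CPCC counts the same way is an assumption, not confirmed.
    Returns (within_range, character_count).
    """
    n = len(text.strip())
    return lo <= n <= hi, n
```

For example, `check_function_description("软件" * 300)` returns `(True, 600)`, while a description trimmed to the old 200-character limit would now fail the check.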
One information limitation should be noted: as of the writing of this article, the full text of CPCC’s original announcement does not have a stable, publicly accessible link. The information above comes from a repost by the Hubei Provincial Copyright Protection Center, reporting by The Paper (Pengpai News), and interpretations by the Shenzhen and Anhui Software Industry Associations. The content from these multiple independent sources is highly consistent, lending it reasonable credibility.
Understanding this rule requires looking at the timeline and the causal chain behind it.
In July 2025, the National Copyright Administration issued the “Opinions on Accelerating High-Quality Development of the Copyright Sector,” calling for “unified registration standards, standardized registration processes, and improved registration quality.” On February 14, 2026, the Copyright Protection Center held a special meeting to study and deploy governance of “abnormal software copyright registration applications.” This news was made public on February 25. The new application form took effect on March 15. From deployment to implementation took roughly one month. Within the searchable record, no public comment period for the new application form has been found. By comparison, the governance of “abnormal applications” in the patent field began deployment in 2021 and took approximately three years before formal implementation in 2024.
Multiple sources point clearly to the same immediate trigger: agencies using AI tools to mass-generate ruanzhu application materials. A repost from the Tianjin Software Industry Association stated that “some agencies, using ‘low price’ and ‘100% certificate guarantee’ as selling points, use AI to mass-generate templated materials, causing a flood of ruanzhu applications that disrupts normal review procedures.” The North America Intellectual Property Report listed four categories of targeted behavior: bulk padding applications, abuse of AI to generate fabricated materials, template-based ghostwriting and source code plagiarism, and repeat registrations with malicious stockpiling to extract subsidies.
The deeper logic works as follows. Software copyright registration has long operated under "formal review," meaning the examiner only checks whether your materials are complete and properly formatted, without verifying whether your software actually exists or whether the code was really written by you. This review approach functioned reasonably well in the past because fabricating a realistic-looking set of ruanzhu materials required a certain investment of labor and time. But after the widespread adoption of AI coding and documentation tools in 2024–2025, that cost was driven to an extremely low level. An agency could use ChatGPT or DeepSeek to mass-generate code and documentation in a very short time, with each set looking properly formatted and substantively complete, leaving formal review with no practical way to tell fabricated filings from genuine ones.
Meanwhile, the HNTE certification arbitrage demand mentioned earlier has persisted: companies need 10 ruanzhu certificates to meet the certification requirements, and some choose to mass-file through agencies rather than base their applications on actual development activity. AI tools have made this pipeline faster and cheaper. The first batch of HNTE certification applications for 2026 falls in June, which also explains why the tightening needed to land by March.
The Copyright Protection Center faces the following situation: the formal review system itself cannot be upgraded to substantive review (which would require an entirely different level of staffing and process investment), but inaction would allow false applications to continue flooding in. Under this constraint, the chosen approach is to require applicants to make a personal pledge, shifting verification responsibility from examiners to applicants themselves, and raising the expected cost of false declarations through dishonesty penalties.
This logic is understandable. But the problem lies in how the pledge text is worded.
The new signature page requires applicants to confirm two things simultaneously: that the software was “independently developed by a human being” and that the applicant “did not use AI to develop or write code, draft documentation, or generate registration application materials.” This wording ties the two together, as if using AI means it was not independently developed.
But at the legal level, these two judgments point to entirely different questions.
“Independent development” is a declaration about authorship and ownership. It means: this software was developed by you (or your team) as the principal creator, rather than being plagiarized from someone else or ghostwritten by a third party and claimed as your own. This declaration is concerned with “who is the author.”
“Did not use AI” is a factual statement about development tools and process. It is concerned with “whether you used a particular category of tool during development.”
A developer can perfectly well develop software independently while using Copilot for code completion and ChatGPT to draft initial documentation, then substantially modifying, selecting from, and integrating all of the AI output. In this scenario, independent development holds (the developer genuinely led the software to completion), and AI usage is also a fact. The two do not contradict each other.
Chinese courts have already taken a clear stance on this. Since November 2023, the Beijing Internet Court and other courts have confirmed in their rulings that content generated with the assistance of AI can still constitute a copyrightable work under copyright law, provided it reflects the human user’s “original intellectual contribution.” Zhonglun Law Firm’s analysis of rulings from multiple jurisdictions noted that judicial decisions have established a relatively permissive standard for “intellectual contribution.” A theoretical article by the Supreme People’s Procuratorate in April 2025 also confirmed this position: “Current adjudication practice generally holds that where generated content reflects the user’s substantial intellectual contribution and possesses original expression, it may constitute a work protected under copyright law.”
In other words, courts consider AI to be a tool. Using a tool does not equate to losing authorship status; what matters is whether the human exercised sufficient control over and contribution to the final expression. But the application form’s text takes a different direction: it does not ask how much you contributed or how you used AI—it only requires you to make a binary factual determination: you used it or you didn’t.
This transforms the question facing applicants from “does my software meet originality requirements” (a judgment about quality and contribution) to “did I use any AI tool at any stage” (a factual true-or-false declaration). The latter carries much higher risk, because it does not consider proportion or contribution, and a false declaration is directly linked to dishonesty penalties.
In 2026, when AI programming tools have become everyday infrastructure, the “did not use AI” standard creates a very wide gray zone.
Consider the following scenarios: you have Copilot’s autocomplete enabled in VS Code, it suggests a line of code, you look at it, decide it’s correct, and keep it. Does that count as “using AI to develop or write code”? You use ChatGPT to generate an initial draft of technical documentation, then spend two hours rewriting it into the final version. Does that count as “using AI to draft documentation”? You use an AI tool to generate test cases, but that test code does not appear in the final submitted identification materials. Does that count?
Strictly reading the literal text of the declaration, all of the above scenarios could be interpreted as “having used AI.” But what the rule actually aims to crack down on is agencies using AI to generate entire sets of application materials from scratch. The problem is that the declaration text itself does not draw this distinction.
Data from the China Software Industry Association in 2025 shows that AIGC technology adoption in areas such as code generation has exceeded 60%. GitHub Copilot has over 1.8 million users. Given this industry reality, requiring all applicants to declare that they “did not use AI” means that a large number of developers who legitimately use AI-assisted tools must make a choice: sign a pledge that does not match the facts, or give up software copyright registration.
Currently, the Copyright Protection Center has not published any standard for distinguishing between “AI-assisted” and “AI-generated,” nor has it provided an alternative pathway on the signature page for applicants who honestly disclose their AI usage. Prior to this, in 2025, the practice of requiring an “AI compliance statement” for AI-involved software had already emerged, and some agency platforms had provided corresponding templates. But there is a directional tension between this earlier practice and the new signature page’s “did not use AI” pledge, and no official guidance on how the two connect.
The downstream impact of this rule can be examined through several specific scenarios.
The registration process. The most direct consequence is increased compliance difficulty. In 2026, the supplementary correction rate for ruanzhu reviews has exceeded 60%, rejection rates have risen over 40% year-on-year, and the application cycle has stretched from approximately 3 months to 4–5 months. The new application form’s AI pledge clause will further amplify this trend. For developers who have used AI assistance, in the absence of a clear alternative pathway, the registration process is in a state of uncertainty.
Android app listing. Because Android app stores have a hard dependency on ruanzhu certificates, if a newly developed application that used AI assistance cannot complete ruanzhu registration without making a false pledge, its listing process on the Chinese Android market will be blocked. Developers may be forced to distribute only through iOS or overseas markets, but that means abandoning China’s largest mobile app distribution channel.
Fundraising due diligence. After the new rules take effect, investors’ legal teams will flag the consistency between the AI declaration at registration and the actual development process as a key verification item. If a company is found to have declared “did not use AI” while actually making extensive use of AI assistance in development, this constitutes a material risk exposure in due diligence and could lead to closing condition disputes or valuation adjustments. Investment agreements may also include new clauses requiring the target company to make specific warranties regarding the truthfulness of its ruanzhu declarations.
Commercial licensing and infringement disputes. The pledge on the signature page constitutes a written self-admission by the applicant to the national copyright registration authority. If a competitor can prove in litigation that you actually used AI but declared otherwise, your ruanzhu registration may face the risk of being revoked, and you may bear additional legal liability for misrepresentation. There is no case law on this yet, but the rule design does create this attack surface. The new rules also introduce a temporal asymmetry: ruanzhu registrations completed before March 15, 2026, are not subject to the new declaration requirements, while those completed afterward are. Older registrations may be challenged on their development process in litigation but carry no declaration constraint, while newer registrations bear the burden of proving the truthfulness of their declarations.
Developer behavior. After the rule takes effect, three choice paths emerge: sign a pledge that does not match the facts to obtain registration, bearing the potential risk of dishonesty penalties; refuse to sign, giving up ruanzhu registration and its associated commercial benefits; or seek an as-yet-uncertain alternative compliance pathway. When enforcement intensity is unclear, some will choose the first path. This creates a perverse incentive: honest people exit or worry, while those willing to take risks learn how to erase traces of AI usage (the industry calls this “human-wash”), ultimately undermining the credibility of the declaration system itself.
Compliance cost allocation. Maintaining complete development records (Git history, version logs, AI usage records), preparing human authorship proportion statements, and handling supplementary corrections—these costs are manageable for large enterprises with legal teams but impose a real burden on independent developers and small teams. If the rule is strictly enforced, it may widen the efficiency gap between the two types of entities: large companies continue using AI and manage the risk through compliance infrastructure, while small teams reduce their AI tool usage out of compliance anxiety.
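None of this record keeping is standardized yet, but the kind of artifact a small team might archive is easy to sketch. The structure below is illustrative only: the field names and the self-reported `ai_assisted` flag are assumptions of mine, not any official or industry format, and the flag can only ever be self-reported, since no tool can reconstruct after the fact whether a given change came from a human or an assistant:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CommitRecord:
    sha: str
    author: str
    date: str
    ai_assisted: bool  # self-reported; cannot be inferred from the diff itself

def build_manifest(project: str, records: list[CommitRecord]) -> str:
    """Summarize a project's development history, including a self-reported
    count of AI-assisted commits, as a JSON document suitable for archiving
    alongside Git history and version logs."""
    ai = sum(r.ai_assisted for r in records)
    manifest = {
        "project": project,
        "total_commits": len(records),
        "ai_assisted_commits": ai,
        "human_only_commits": len(records) - ai,
        "commits": [asdict(r) for r in records],
    }
    return json.dumps(manifest, ensure_ascii=False, indent=2)
```

The design point is the cost asymmetry described above: populating this manifest is a trivial script on top of CI for a company with a legal team, and an ongoing manual chore for a solo developer.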
On the question of copyright ownership for AI-assisted creations, there is a strong consensus across major jurisdictions worldwide: purely AI-generated content without substantial human contribution does not constitute a copyrightable work. Chinese courts share this position. The divergence lies in the different implementation paths each country has chosen.
The United States follows a “disclose and disclaim” path. Applicants must disclose AI usage when filing for copyright registration, describe the human-authored portions on the form, label the AI-generated parts, and disclaim copyright over those portions. The Copyright Office will not refuse registration simply because you used AI, as long as the human-contributed portion meets originality requirements. If, after receiving registration, undisclosed AI content is discovered, a supplementary registration correction is required; intentional concealment may lead to cancellation of the registration. This is a mechanism based on information transparency and post-hoc correction.
The European Union follows a path that “places the burden on AI providers.” The AI Act, which took effect in August 2024, requires providers of general-purpose AI models to publish summaries of their training data, and AI-generated content must be marked in a machine-readable manner. The EU does not have a unified copyright registration system (copyright arises automatically), nor does it require creators to declare whether they used AI. Transparency obligations primarily bind the providers of AI systems, not the developers who use AI tools.
The United Kingdom has a unique institutional legacy. Section 9(3) of the Copyright, Designs and Patents Act 1988 provides that for “literary, dramatic, musical or artistic work which is computer-generated,” the author is deemed to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” This means UK law reserved the possibility of protecting purely computer-generated works as early as 1988. The UK has no mandatory copyright registration system and no AI usage declaration requirement.
Japan has adopted the most permissive stance globally on training data. Article 30-4 of the Copyright Act permits the reproduction of copyrighted works for data analysis (including AI training), even for commercial purposes. On the output side, Japan still requires human creative contribution, but there is neither mandatory registration nor an AI usage declaration requirement.
By comparison, China’s path has a distinctive feature: it embeds AI compliance review into the origin point of intellectual property confirmation, enforcing it at the administrative registration stage. This has almost no counterpart in other jurisdictions. The U.S. Copyright Office’s disclosure requirement is the closest in form, but the American approach is “tell me which AI you used, then disclaim copyright over the AI-generated portions,” while the Chinese approach is “you must declare that you did not use AI.” One asks for transparent disclosure; the other demands factual denial. This is a directional difference.
Another distinction worth noting is the difference between code and images/text. The landmark cases internationally regarding copyright of AI-generated content primarily involve images and text. Images have identifiable stylistic features (for instance, Midjourney-generated images have a specific aesthetic), and before-and-after comparisons are relatively easy. Code is different: AI-generated code and human-written code are virtually indistinguishable in form. Git can track modification history but cannot automatically label which changes came from a human and which from Copilot. When directly analogizing rules from the image/text domain to code, the evidentiary difficulty is fundamentally different.
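Git's closest extension point for this kind of labeling is the commit-message trailer, the same `Key: value` mechanism behind `Co-authored-by:`. A team could adopt a self-reported convention, say an `Assisted-by:` trailer, and parse it later; but as the sketch below makes clear, Git merely records whatever the committer writes. The `Assisted-by` key is a hypothetical convention of mine, not a Git or CPCC standard:

```python
def ai_assistance_trailers(message: str, key: str = "Assisted-by") -> list[str]:
    """Extract self-reported AI-assistance trailers from a commit message.

    Trailers are 'Key: value' lines in the final paragraph of a commit
    message, the format Git itself uses for Co-authored-by. The
    Assisted-by key is a made-up team convention: Git records whatever
    the committer writes and cannot verify whether a change actually
    came from a human or a tool.
    """
    paragraphs = message.strip().split("\n\n")
    tools = []
    for line in paragraphs[-1].splitlines():
        if line.startswith(key + ":"):
            tools.append(line[len(key) + 1:].strip())
    return tools
```

So a message ending in `Assisted-by: GitHub Copilot` yields `["GitHub Copilot"]`, and an unlabeled message yields nothing: the provenance record exists only if the author chose to write it, which is exactly the evidentiary gap the paragraph above describes.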
The future trajectory of this rule depends on the Copyright Protection Center’s subsequent policy choices and enforcement intensity.
One possibility is that a formal disclosure pathway for AI-assisted development will be added. If the Copyright Protection Center subsequently introduces supplementary templates or alternative signature options allowing applicants to complete registration while honestly declaring their AI usage, then the current absolute pledge can be understood as a transitional measure. The practice in 2025 of requiring AI-involved software to submit a compliance statement suggests that the Copyright Protection Center does not completely deny the existence of AI-assisted development. From this direction, the emergence of a supplementary pathway is plausible, but the timeline is uncertain.
Another possibility is that dishonesty penalties are substantively enforced, but the supplementary pathway is slow to materialize. In this scenario, the rule will continue to create an integrity trap: a large number of developers who used AI assistance will either make false pledges or exit the registration system. This will drive large-scale human-wash behavior, ultimately undermining the credibility of the declaration system itself—and the fabricators will not necessarily be the most affected.
A third possibility is that enforcement intensity remains limited and the rule primarily serves as a deterrent. Whether the Copyright Protection Center has the verification capacity to check AI usage case by case is currently unclear. If dishonesty penalties remain at the level of paper deterrence, the industry will quickly develop default evasion conventions, with limited impact on actual development behavior. But even so, the rule’s existence still creates a long-term legal risk exposure: anyone can pull out this declaration in a future dispute as a weapon.
From a longer time horizon, this rule can be understood within a larger trend. The governance of “abnormal applications” in the patent and trademark fields has been in operation for years. The 2019 Trademark Law revision explicitly rejected “malicious registrations not intended for use,” and patent-field regulations took formal effect in 2024. The crackdown in the ruanzhu field is the final step in extending the intellectual property credit system from patents and trademarks to copyrights. From this perspective, the rule’s introduction has a certain institutional inevitability. But whether the specific governance instrument—requiring everyone to declare “did not use AI”—is the optimal choice under current constraints is worth continued attention.
This rule also has positive significance. It can enhance the credibility of ruanzhu certificates: in fundraising, IPO, and technology transfer scenarios, a certificate that has been through a more rigorous review process is genuinely more persuasive than one with no barrier to entry. It can also curb the pollution of registration data by “paper software” at the source. Whether these benefits can offset the compliance costs and integrity risks the rule creates in the gray zone depends on how it evolves from here.
The following points did not have definitive answers at the time of writing, and readers should bear them in mind when forming judgments.
Whether the Copyright Protection Center will introduce a supplementary declaration template or alternative pathway for AI-assisted software has not been officially confirmed. How the 2025 practice of requiring AI-involved software to submit compliance statements connects with the new signature page’s “did not use AI” text also lacks official guidance.
The depth of linkage between dishonesty penalties and the personal credit system, as well as specific implementation details, have not been published. Whether a “self-inspection and self-correction” grace period exists, what standard defines “misrepresentation,” and how penalties escalate—these operationally critical questions are currently blank.
The standard for distinguishing “AI-assisted” from “AI-generated” is entirely absent. IDE-integrated code completion, document polishing, test code generation—what these commonplace development activities mean in the context of the declaration has no guidance whatsoever.
The information in this article regarding the content of the new application form comes primarily from reposts by the Hubei Provincial Copyright Protection Center, interpretations by the Shenzhen and Anhui Software Industry Associations, and reporting by media outlets such as The Paper (Pengpai News). The original announcement from the Copyright Protection Center’s official website could not be directly accessed.
The facts and quoted excerpts cited in this article come from the following categories of sources, listed in descending order of reliability:
Official and quasi-official sources: National Copyright Administration announcements and statistics, Hubei Provincial Copyright Protection Center repost of the CPCC notice, a theoretical article by the Supreme People’s Procuratorate, reporting by The Paper (Pengpai News), U.S. Copyright Office AI policy documents, EU AI Act provisions, Japan Agency for Cultural Affairs guidance.
Industry association and law firm analyses: comparisons of application form changes by the Shenzhen and Anhui Software Industry Associations, research by Zhonglun Law Firm and Global Law Office, analysis by the North America Intellectual Property Report, Quinn Emanuel comparative study of U.S. and China.
Agency platform practical information: guides and data from Zhumeng, Wanghu Ruanzhu, Ruanzhubao, and Ruanzhu Pro. These sources reflect front-line practical operations and are not equivalent to official rules; their nature has been noted where cited.
Specific URLs can be found in the prior research reports (20260322_软著登记AI声明调研.md, 20260322_软著AI声明_下游影响分析.md, ai_copyright_intl_comparison_survey_20260322.md).