Behind Manus's Wild Popularity: How Agentic AI Builds Lasting Competitive Advantages

Manus has been making waves on the Chinese internet since its recent release, quickly winning over a wide range of users. After diving deeply into the product, I find it truly inspiring. It captures a crucial dynamic in the competition among Agentic AI products: the compound effect. In this article, I want to explore, from a longer-term perspective, the key factors that drive competition among Agentic AI products like Deep Research, Cursor, or Manus, and which elements can form a true moat (and which cannot).

Before discussing the three main aspects of Agentic AI product competition, I want to explain why I think Manus stands out. Contrary to the hype portrayed in some media, Manus is neither a mind-blowing creation that popped out of nowhere, nor the first to try this kind of product format. On the contrary, it has a very clear lineage.

It's closely related to two types of existing products. One is Agentic research tools, such as Gemini, Perplexity, and OpenAI's Deep Research. These let you enter a simple topic or request, then research across the web on your behalf and produce a detailed, in-depth report. The other is Agentic generation tools, such as Cursor, Devin, or Gamma. You give them a request, and they write code, produce a document, or create presentation slides—essentially delivering the final artifact you need. In 2024, both kinds of products made huge strides, crossing the threshold of basic usability and going viral. Still, one big pain point remained: you could do research or you could generate deliverables, but there was no effective link between the two.

This pain point is subtle, because Agentic AI's core appeal lies in its ability to complete a complex task end-to-end through self-iteration and autonomous decision-making. If, in practice, you still have to think, "I'll use OpenAI's Deep Research for this," then copy the result into Cursor to generate a visualization, and finally throw both pieces into Gamma to create a PPT, you're basically going against the very idea of Agentic AI—and you lose most of its value.

Manus is striking because it bridges that entire workflow. On one hand, it can conduct research in an Agentic manner, browsing the internet and gathering a comprehensive set of materials. On the other, it can carry out further analysis and visualization on those materials to produce the final output, be it a website, a text-based report, or a slideshow. This kind of end-to-end scenario was extraordinarily difficult to achieve with previous products. Moreover, Manus is finely polished and impressively complete as a product, making it both precise in concept and highly user-friendly—hence the instant, explosive popularity.

The Compound Effect of Tools

None of this, however, really touches on the deeper, more essential features or advantages of Agentic AI. One standout characteristic is that Agentic AI exhibits compound effects in multiple dimensions.

In Manus's case, one important reason for its success is that it can leverage a greater number of tools than previous products. That might sound trivial, but it isn't. In Agentic AI products, going from being able to use six tools to being able to use eight is far more significant than going from using two tools to four. This is because the tools used by AI can interact and reinforce each other. If an AI can only write code and search text, adding a new image search function might not be all that helpful. But if it can already write reports and generate slides, then adding an image search function suddenly makes its output far more vivid, substantially enhancing the product experience. That's exactly the approach Manus takes. Even if we ignore all its other innovations and just focus on the way it merges Deep Research and Cursor, the simple increase in the number of tools immediately opens up scenarios that earlier products couldn't handle.
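
A quick back-of-the-envelope way to see why this compounds: if much of the value comes from tools working in combination, the number of possible pairings grows quadratically with the tool count, and the number of multi-tool combinations grows exponentially. A minimal Python sketch, purely illustrative:

```python
from math import comb

# If tools create value mainly in combination, count what each
# additional tool unlocks: distinct pairs and multi-tool subsets.
for n in [2, 4, 6, 8]:
    pairs = comb(n, 2)     # two-tool combinations
    combos = 2**n - n - 1  # all combinations of two or more tools
    print(f"{n} tools: {pairs} pairs, {combos} multi-tool combos")
```

Going from two tools to four adds five new pairings; going from six to eight adds thirteen. That jump at the high end is why each extra tool feels transformative rather than incremental.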

This is the first kind of compound effect in Agentic AI. When we expand the number of tools it can call upon, the benefits multiply in a near-explosive fashion. It's an extremely direct way to enhance the user experience. However, it doesn't necessarily provide a durable moat. With tools like Cursor around, simply integrating a specific tool so an AI can autonomously call it is not difficult. Setting aside the finer points of good product design, cloning a Manus-type product is not especially challenging. And relying on the quantity of tools alone to build barriers to entry is not a long-term strategy.

The Compound Effect of Data

Agentic AI has a similar compound effect in other areas, too. One that's often overlooked is data. But here I'm not referring to data in the sense of training an LLM—for example, you used 2T tokens, I used 3T, and now I'm better than you. In the Agentic AI era, data has a deeper meaning. Specifically, it isn't just about the quantity of data but about acquiring, organizing, and externalizing data across its entire lifecycle.

Working side by side with humans, we see a certain phenomenon: having a seasoned veteran on the team is like having a secret weapon. In a factory, a senior mechanic might know exactly where to tap a failing machine to get it working again, while a new graduate has to run all sorts of tests for a rough understanding of the issue. A veteran doctor can make diagnoses by feeling a patient's pulse, whereas a junior doctor may require multiple lab tests for a similar conclusion.

That's a classic example of the advantage gained from accumulating data over time. For humans, two key things are happening: the accumulation of experience—decades of encountering similar machine failures or medical cases—and the organization of that knowledge. At that point, the knowledge is internalized in their memory, which is enough for most human experts. However, because humans still rely on written language when communicating with AI, there's a further stage of externalizing that knowledge, turning it into clear documentation that AI can use.

Taking Manus as an example, both Manus and Devin have a feature where they record your corrections as lessons in their knowledge base. These lessons become data with compound effects. For instance, let's say we ask Manus to create a PowerPoint presentation, and it initially uses blue as the theme color. But within our company, our standard theme color is green, so we correct it: "Whenever you create internal company presentations, please use green as the theme color." Manus records this in its database. This is crucial because it's a form of tribal knowledge. The company might not have any official documentation specifying what color to use for presentations, but because everyone uses green, it's become an unspoken convention.
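
Manus hasn't published how this works internally, so the following is only a minimal sketch of the general pattern, with a hypothetical `LessonStore`: record each correction alongside the context it applies to, then surface matching lessons whenever a similar task arrives.

```python
from dataclasses import dataclass, field

@dataclass
class LessonStore:
    # A toy knowledge base of recorded corrections. Hypothetical:
    # not Manus's or Devin's actual design.
    lessons: list = field(default_factory=list)

    def record(self, context: str, lesson: str) -> None:
        # Store each correction together with the situation it applies to.
        self.lessons.append({"context": context, "lesson": lesson})

    def applicable(self, task: str) -> list:
        # Naive keyword overlap; a real system would likely use embeddings.
        return [e["lesson"] for e in self.lessons
                if any(word in task.lower() for word in e["context"].lower().split())]

store = LessonStore()
store.record("internal company presentations",
             "Use green as the theme color for internal company presentations.")

# Matching lessons are prepended to the prompt for every future task.
task = "Create an internal presentation on Q3 results"
prompt = "\n".join(store.applicable(task)) + "\n\nTask: " + task
print(prompt)
```

Each recorded lesson quietly changes every future prompt, which is exactly where the compounding comes from.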

If we make many such corrections over time, Manus effectively becomes an "insider," developing a shared understanding with us and naturally following our unwritten customs. Now imagine a competitor's product arrives. It might have better AI and make more attractive presentations, but if it produces work in the wrong colors, with unsuitable fonts, and with no awareness of the CEO's preferences, it starts at a natural disadvantage. Manus has established a natural moat.

For Agentic AI, maintaining this cycle of knowledge accumulation, organization, and externalization is vital. Consider the software engineering example we mentioned earlier: if you give a code-writing AI a repository with a hundred thousand lines of code and assign it some tasks, the likelihood of it successfully completing all tasks in one go is fairly low. But if you give it time to gradually read through the code, comprehend it, categorize it, and transform what it learns into concise documentation, its coding work becomes significantly easier.
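
Mechanically, "gradually read, comprehend, and consolidate" could be as simple as a two-pass sweep over the repository. This is a sketch under assumptions, not any product's actual pipeline, and `summarize` is a stand-in for a real LLM call:

```python
from pathlib import Path

def summarize(text: str) -> str:
    # Stand-in for an LLM call that condenses text into notes (assumption).
    return text[:200]

def build_codebase_notes(repo: Path) -> str:
    # Pass 1: summarize file by file, so no single step has to hold a
    # hundred-thousand-line codebase in context at once.
    notes = {str(f.relative_to(repo)): summarize(f.read_text(errors="ignore"))
             for f in sorted(repo.rglob("*.py"))}
    # Pass 2: consolidate per-file notes into one architecture document
    # that future coding tasks can read instead of the raw code.
    merged = "\n\n".join(f"## {path}\n{note}" for path, note in notes.items())
    return summarize("Architecture overview from these notes:\n" + merged)
```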

An example of such documentation is sketched below. It describes the basic architecture, design concepts, and which functions live in which files; for older projects, we can also record historical context. With these documents in place, spatially, the AI knows how to precisely locate the code that needs modification rather than blindly creating new files and writing everything from scratch. Temporally, it knows which approaches have been tried before and what the current design philosophy is, avoiding circular rework that reintroduces previously abandoned solutions.

Therefore, the "data" we're referring to isn't simply an accumulation of tokens, but a long-term process of automatic or semi-automatic gathering, understanding, and consolidation. For a specific client, the longer an AI works with them, the more such knowledge it accumulates. Even if another AI with higher intelligence arrives, users might still find the experienced AI more comfortable to work with—it understands them better. This secondary processing of knowledge systems creates an effective moat.
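
A hypothetical, heavily abbreviated version of such a document might look like this (the file names and rules are invented for illustration):

```markdown
# Architecture Notes (maintained for the AI assistant)

## Design concepts
- All state changes flow through a single queue in `task_queue.py`.
- HTTP handlers stay thin; business logic lives under `services/`.

## Where things live
- `services/billing.py`: invoice generation and retry logic
- `api/routes.py`: endpoint definitions only, no business logic

## Historical context
- 2023: background jobs moved from cron to the queue. Do not reintroduce cron.
- A GraphQL layer was tried and abandoned; stick with REST.
```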

Similarly, this data accumulation has its own combinatorial compound effect. With more historical data and summarized documentation, the AI can form more insights through comparison and reflection. In a sense, this is the process of transforming a traditional knowledge system into an AI-friendly one. Being "AI-friendly" isn't a binary state but something that needs time to develop and mature.

I'd even compare it to the co-evolution between humans and nature. On one hand, the AI mines, refines, and accumulates knowledge from the original information base. On the other, users increasingly realize that making their data easily accessible to the AI greatly benefits their own work, so they become more willing to adapt their working methods to the AI's data management processes. This brings additional benefits: tribal knowledge previously lost in Zoom meetings, for example, can now be captured if users adopt Zoom AI Companion, which preserves it in documentation the AI can draw on later. This positive feedback loop of mutual adaptation and understanding forms an extremely strong moat.

The Compound Effect of Intelligence

Agentic AI has another fascinating property: intelligence itself can exhibit compounding effects. It's less obvious at first glance than with tools or data, but the level of intelligence behind a product shapes the user's Agentic experience in multiple ways.

On the most basic level, a smarter AI can better understand user needs and knows how to combine a few tools to achieve maximum benefit. A less-intelligent LLM might waste time calling tool after tool yet still not acquire enough information. A more thoughtful LLM, on the other hand, has a clear process and can solve problems quickly with only a few carefully chosen tools.

A related factor: comparing something like Gemini to OpenAI's Deep Research, you see a completely different caliber of AI. Gemini feels like it's mechanically following a predetermined script, starting with certain keywords, searching the web, then deciding which pages to scrape, and ultimately summarizing the content. Deep Research, by contrast, feels much more proactive and better at self-iteration. It starts by formulating a plan, uses different search keywords accordingly, and may dynamically adjust its strategy based on whatever it finds. The final results tend to be more enlightening, not just answering your questions but offering new perspectives or research directions. This capacity for autonomous thinking yields a nonlinear boost in value.
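
The contrast can be sketched as control flow. This is an assumption about how such systems behave in general, not OpenAI's actual implementation; `llm`, `search`, and `scrape` are hypothetical callables:

```python
def scripted_research(topic, search, scrape, summarize):
    # Fixed script: search once, scrape the top hits, summarize. No replanning.
    pages = search(topic)
    return summarize([scrape(p) for p in pages[:5]])

def agentic_research(topic, llm, search, scrape, max_steps=10):
    # Plan first, then let each observation reshape the plan.
    plan = llm(f"Draft a research plan for: {topic}")
    findings = []
    for _ in range(max_steps):
        query = llm(f"Plan:\n{plan}\nFindings:\n{findings}\nNext search query?")
        findings.append(scrape(search(query)[0]))
        # Re-plan based on what was actually found, not the original script.
        plan = llm(f"Revise the plan given this finding:\n{findings[-1]}\n"
                   "Reply DONE if the research is sufficient.")
        if "DONE" in plan:
            break
    return llm(f"Write a report with new perspectives from:\n{findings}")
```

The loop's value is nonlinear because every extra unit of judgment improves every subsequent step, not just one.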

Given that only a few companies can truly develop their own LLMs and that training them requires ample resources and capital, intelligence can also serve as a meaningful barrier to entry.

Key Drivers of Competition

It's important to note that these three compound effects don't merely add up; they amplify each other. When the number of tools expands, that creates more avenues for data processing and accumulation—spanning project management, searching, and documentation, each yielding data for the AI to learn from. Meanwhile, as the AI processes all this information, it refines its understanding and reasoning. We can see this synergy clearly in Manus. At first, Deep Research could conduct in-depth investigations, and Cursor could write code or produce documents. But once you bring them together on a single Agentic platform, the AI can go from gathered research to analysis, presentation, and final publication in one seamless flow. Within this closed loop, tools, data, and intelligence stimulate each other, delivering a more fluid and sophisticated end-to-end experience.

That's why the crucial question in Agentic AI competition is how to expand quickly enough to reach the right side of the exponential growth curves involving tools, data, or intelligence. In the early phase, investing effort in increasing the number of tools or the amount of data yields only moderate returns. But once you hit a tipping point, exponential growth really kicks in. On the right side of the curve, each new tool or added piece of data can transform the user experience in a major way. This is how Agentic AI products compete and build their moat. Of course, the exponential curve won't keep going forever. It might resemble an S-curve, where benefits level off at some point as complexity and resource constraints pile up. There might also be bottlenecks at certain stages that require deeper architectural or organizational innovations to keep the system evolving in synergy.

From this perspective, building a moat around tools alone is not very reliable. Building a moat around LLM intelligence demands a lot of resources. And building a moat around data might be the simplest and most feasible approach. Beyond data accumulation itself, what's perhaps more important is the methodology and process of how that accumulation occurs. Data itself can be copied, but systematically externalizing tacit knowledge, creating structured repositories, and managing data efficiently are extremely difficult to replicate. This is similar to corporate culture—once an organization develops powerful methodologies and processes for data management and knowledge externalization, competitors may copy the data and tools but will struggle to replicate this implicit organizational capability in the short term. Therefore, in the long-term competition of Agentic AI products, the most impenetrable aspect isn't the scale of data or intelligence, but the systematic organizational capability around data and tool usage.

Conclusion

Starting from Manus, we've explored the key competitive factors and potential moats in the Agentic AI domain. But more importantly, this competition isn't simply about racing to add more tools or accumulate ever-larger volumes of data. It's about how organizations adapt to the AI era on a deeper, structural level. The eventual winners might not be the companies with the strongest technologies per se, but those that truly grasp how AI and humans can co-evolve—and can develop long-lasting, stable collaboration mechanisms. That, in my view, is the real inspiration that Agentic AI brings us.