
Did Manus AI Survive the Hype? A Deep Dive into the Agentic AI Taking on OpenAI and DeepSeek

The AI agent shook the industry with bold claims, investor buzz and an early-access frenzy, but does it really deliver on its lofty promises?

In recent years, generative AI technology has dominated headlines, fueling a tech frenzy that led to a global AI arms race and what some call an industrial AI revolution. But now, the spotlight is shifting towards agentic AI—a new breed of models designed to act autonomously, making decisions and executing tasks with minimal human input. These systems rely on Large Language Models (LLMs) to understand context and complete complex workflows. 

The buzz around agentic AI was front and center at Nvidia’s 2025 GTC conference, where CEO Jensen Huang called it the “next major disruption” in the industry.

Just days after its reveal, a new AI agent called Manus sparked both excitement and skepticism across the tech world. Marketed as a “general AI agent that turns your thoughts into actions,” Manus is gaining traction for its ability to complete complex tasks autonomously, with its creators claiming it outperforms leading models like OpenAI’s Deep Research and DeepSeek’s R1.

The launch has set off widespread discussion across the AI community. The head of product at Hugging Face called it “the most impressive AI tool I’ve ever tried,” while AI policy researcher Dean Ball described it as “the most sophisticated AI system to date.” The Manus Discord server has exploded to over 138,000 members, and invite codes are reportedly selling for thousands of dollars on Chinese resale platforms.

In a viral video, Yichao “Peak” Ji, chief scientist at Manus AI, showcased its capabilities, calling it “more than just another chatbot.” He described Manus as a fundamental shift in human-machine collaboration, potentially bringing AI one step closer to artificial general intelligence (AGI). Ji also claimed that Manus outperforms OpenAI’s Deep Research on the GAIA benchmark, a test that evaluates an AI’s ability to autonomously complete real-world tasks. 

Unlike DeepSeek’s R1, which has faced criticism over limited research depth and censorship issues, Manus promises broader and more dynamic functionality. But is it the game-changer it claims to be, or just another AI model riding the hype wave?

“Manus AI is a compelling AI tool, but like any major AI launch, there is a fair amount of marketing involved. The idea of an autonomous AI agent is not revolutionary,” Arnie Bellini, co-founder and former CEO of software development company ConnectWise, told me. “What’s concerning is its lack of transparency. That makes Manus more of a black box than a breakthrough.”

On its website, Butterfly Effect – the Chinese startup behind Manus – claims the AI can handle tasks as diverse as real estate transactions, investment research, and game development. Despite its lofty promises, the company has yet to release detailed technical papers or open-source any of its code, making independent verification difficult.

“In today’s world where AI is increasingly impacting our lives, a lack of transparency is a deal-breaker. It erodes trust. Manus’s closed-source approach is a major red flag,” Serena H. Huang, an AI adviser to Fortune 100 companies and a former big-tech analytics leader, told me. “Companies that prioritize secrecy over openness will find it increasingly difficult to gain adoption, especially in sensitive and highly regulated fields like finance or healthcare.”

A Bold Experiment or an Unstable Bet in the Agentic AI Race?

Butterfly Effect has reportedly raised over $10 million from major venture capital firms like ZhenFund and HongShan to develop the AI model. Unlike its competitors, Manus runs in the cloud, allowing users to step away while it processes tasks. In contrast to DeepSeek and OpenAI, which have developed proprietary AI models, Butterfly Effect has taken a different approach. The agentic AI is an integration of some 29 specialized AI tools, including Anthropic’s Claude 3.7 Sonnet, open-source browser control software, and Model Context Protocol (MCP) servers.
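To make that orchestration-style design concrete, here is a minimal sketch of how an agent might wire a hosted LLM to a registry of tools and loop until the task is done. It is purely illustrative, not Manus’s actual code: the tool names, the `call_llm` stand-in, and the stubbed responses are all hypothetical.

```python
# Illustrative sketch of a multi-tool agent loop; NOT Manus's implementation.
# The LLM backend and the tools below are hypothetical stand-ins.

from typing import Callable, Dict, List

# Registry of "tools" the agent can invoke; a real system might expose a
# browser controller, a code runner, or MCP server endpoints here.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_web": lambda query: f"(stub) top results for: {query}",
    "write_file": lambda text: f"(stub) saved {len(text)} characters",
}

def call_llm(history: List[str]) -> dict:
    """Hypothetical stand-in for a hosted LLM (e.g. a Claude-class model).
    A real agent would send the history plus tool schemas and parse the reply."""
    if len(history) < 3:
        return {"action": "search_web", "input": "commercial real estate trends"}
    return {"action": "finish", "input": "summary compiled"}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):  # cap steps to avoid the infinite loops testers reported
        decision = call_llm(history)
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        history.append(f"{decision['action']} -> {observation}")
    return "stopped: step limit reached"

if __name__ == "__main__":
    print(run_agent("research an investment memo"))
```

The design choice to compose existing models and tools, rather than train a foundation model from scratch, is what lets a small startup ship this kind of agent quickly; the trade-off, as critics note, is that reliability depends on every link in the chain.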

“Manus is part of a broader strategy by the Chinese Communist Party to dominate AI on a global scale. The West needs to recognize this for what it is: a strategic, government-backed push to control AI development,” asserts Bellini. 

Moreover, many early testers reported encountering frequent errors and infinite loops while using Manus. Others have noted inconsistencies in its factual accuracy and a lack of reliable citations—common pitfalls for AI models attempting complex reasoning tasks. Autonomous AI agents are widely seen as the next frontier in artificial intelligence, but experts remain divided on their potential impact.

“The idea of full AI autonomy, where systems operate completely independently without any human intervention, would be dangerous in the near term,” Huang added. “We need to keep humans in the loop to validate outputs, identify biases, and ensure that AI systems are aligned with our goals.”

A recent paper from Hugging Face warns that unchecked AI autonomy could lead to catastrophic errors, emphasizing the need for human oversight in high-stakes applications. While Manus has captured the industry’s attention, it remains in early-access testing, and its real-world performance has yet to be fully evaluated.

“The risk of fully autonomous systems misaligned with company policies is that they can produce outcomes that are either ROI-negative or outright damaging. This is why the bar for enterprise adoption of AI systems with no human in the loop is extraordinarily high,” Vered Horesh, chief of strategic AI partnerships at visual generative AI platform Bria, told me. “Either the system meets the threshold of full reliability, or it has zero value. A cake baked with only 78% of the ingredients isn’t ‘almost’ a cake—it’s inedible.”

The Bigger Picture

Manus represents a significant step forward in AI-driven automation. While it may not be a DeepSeek moment, it challenges the assumption that only massive proprietary AI systems can push the industry forward. The strategic combination of AI toolchains allows Butterfly Effect to deliver impressive functionality without the need for a $61 billion foundation model. For now, the real race isn’t just about who builds the best model; it’s about whose values shape the future of AI.

“Over the next five years, we will likely see AI agents augment decision-making rather than replace it. The bigger shift will be in how AI agents help optimize processes—filtering information, identifying patterns, and proposing actions—while leaving ultimate decisions to humans,” added Horesh. “Currently, we’re not ready to hand over the reins entirely, and we probably never will be.”

Author

  • Victor Dey

    Victor Dey is a tech analyst and writer who covers AI, data science, startups, and cybersecurity. A former AI editor at VentureBeat, his work also appears in New York Observer, Fast Company, Entrepreneur Magazine, HackerNoon, and more. Victor has mentored student founders at accelerator programs at leading universities including the University of Oxford and the University of Southern California, and holds a Master's degree in data science and analytics.

