
Weekly AI Rumors Issue 15: Manus AI Agent Launch, GPT‑4.5 Evaluation, and LightThinker Technique

This issue reviews the hype around China’s Manus AI Agent and its invitation‑code controversy, critiques OpenAI’s GPT‑4.5 performance versus DeepSeek, showcases industry solutions using AI agents, and introduces the LightThinker method for dynamically compressing LLM inference chains to boost efficiency.

ZhongAn Tech Team

Weekly AI Rumors Issue 15

Saturday, March 8, 2025

Table of Contents

1. Market and Voices

2. Industry Solutions

3. Valuable Technologies

Market and Voices

1. Chinese AI product Manus sparks frenzy; partner responds to invitation‑code speculation

On the early morning of March 6, the Chinese AI team Monica released Manus, billed as the world’s first general‑purpose AI agent, instantly attracting massive attention and even causing the registration page to crash.

Manus claims to think, plan, and execute complex tasks autonomously, and the announcement was accompanied by two real‑world case studies (see Section 2).

Invitation‑code price surge and official response

Because Manus was released via invitation‑only testing, many users could not access it, leading to resale prices ranging from ¥999 to ¥50,000 on secondary markets.

Manus partner Zhang Tao posted twice on social media on March 6, clarifying that the company never sold invitation codes nor spent any marketing budget. He explained that limited system capacity during the beta phase prioritized existing users and that the product is still in its infancy, with significant room for improvement in hallucinations, output friendliness, and speed.

Industry outlook and market forecast

With the rise of products like DeepSeek, several securities firms predict 2025 could become the “year of AI agent commercialization,” especially in B‑side scenarios such as e‑commerce, marketing, CRM, finance, and legal services. Continued large‑model iteration may also bring killer applications to the consumer side.

China International Capital Corp notes that the AI industry is shifting from a training‑centric arms race to an inference‑centric commercial cycle, with AI agents reshaping the AI supply chain.

Exclusive insights and slow‑thinking about Manus

Why did the Manus team succeed? Timing, talent, and the “Less Structure, More Intelligence” philosophy gave them agility beyond large corporations.

Is Manus truly the first general‑purpose AI agent? Earlier frameworks (Operator, Deep Research, MetaGPT, AutoGPT, Eko) achieved similar capabilities, but Manus delivered notable engineering optimizations and productization first.

Why is Manus popular despite limited user access? Strong reputation of the team and venture backing, combined with a viral invitation‑code strategy, drove massive hype.

2. GPT‑4.5 costs 500× more than DeepSeek yet underperforms; is OpenAI losing its moat?

Since OpenAI released GPT‑4.5, opinions have been mixed. This section compiles benchmark results, expert commentary, and market implications.

NYU professor emeritus Gary Marcus called GPT‑4.5 “an empty burger,” criticizing its lack of substantive progress and unchanged reasoning ability.

The CEO of an AI startup observed: “On the Aider Polyglot benchmark, GPT‑4.5 costs 500× more than DeepSeek‑V3 yet performs worse,” suggesting a looming crisis for OpenAI.

Performance vs. price

ARC‑AGI evaluation shows GPT‑4.5 is on par with GPT‑4o, offering no real intelligence gain. Its price is 30× GPT‑4o, 137× DeepSeek‑R1, and 278× DeepSeek‑V3, but the higher cost does not translate into better performance.

Competitors such as DeepSeek, xAI Grok‑3, and Anthropic’s Claude 3.7 Sonnet are rapidly advancing, with DeepSeek cutting R1 prices by 75% after a six‑day open‑source surge.

OpenAI must find new breakthroughs; as Marcus notes, “After spending $500 billion, no viable business model has emerged.”

Industry Solutions

1. The “GPT moment” for AI agents – Manus awakens the AI community

Manus, built by the Monica.im team and billed as the world’s first general‑purpose AI agent, was released on March 6. It can think, plan, and execute complex tasks, delivering complete results such as personalized travel guides, stock analysis reports, and educational content.

The core advantage lies in its multi‑agent architecture, which can invoke tools (code execution, web browsing, app interaction) within a virtual environment, and it retains memory of user preferences for an improved experience.

Case 1: Resume screening

Manus automatically screens 15 resumes for a reinforcement‑learning engineer role, extracting the files, reviewing each page, and ranking the candidates without human intervention.

Case 2: Tailored real‑estate search for Chinese users

A user wants a New York home with safe neighborhoods, low crime, good schools, and a budget that fits monthly income. Manus decomposes the task, researches safe communities, identifies top schools, calculates affordable price ranges, and scrapes listings.

It also writes a Python script to compute the budget based on income, then filters listings accordingly.
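The budgeting step described above can be sketched in Python. This is an illustrative reconstruction, not Manus’s actual script: the 28%‑of‑income payment rule, mortgage terms, and listing fields are all assumptions.

```python
def affordable_price(monthly_income: float,
                     annual_rate: float = 0.065,
                     years: int = 30,
                     payment_ratio: float = 0.28,
                     down_payment_ratio: float = 0.20) -> float:
    """Estimate a home-price cap from monthly income.

    Assumes the common rule of thumb that the mortgage payment should
    stay under `payment_ratio` of gross monthly income.
    """
    max_payment = monthly_income * payment_ratio
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of payments
    # Standard amortization: loan = payment * (1 - (1+r)^-n) / r
    loan = max_payment * (1 - (1 + r) ** -n) / r
    return loan / (1 - down_payment_ratio)    # add the down payment back

def filter_listings(listings, budget):
    """Keep listings at or under budget, cheapest first."""
    return sorted((l for l in listings if l["price"] <= budget),
                  key=lambda l: l["price"])

listings = [{"id": "A", "price": 450_000},
            {"id": "B", "price": 900_000},
            {"id": "C", "price": 610_000}]
budget = affordable_price(monthly_income=12_000)
print(f"budget cap ≈ ${budget:,.0f}")
print([l["id"] for l in filter_listings(listings, budget)])
```

The point of the case study is not the arithmetic itself but that the agent decomposes the task, writes and runs code like this on its own, then feeds the result into the listing search.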

Performance tests on the GAIA benchmark show Manus surpassing OpenAI’s Deep Research, achieving state‑of‑the‑art results.

The Monica.im team, led by founder Xiao Hong, has a track record of launching popular products and amassing millions of users, reinforcing their leadership in AI.

Valuable Technologies

1. LightThinker: Dynamic inference‑chain compression for LLM efficiency

With the emergence of DeepSeek R1, researchers recognize that giving models more “thinking time” improves answer quality, yet Transformer‑based LLMs face quadratic attention costs and linear KV‑cache overhead for long contexts.
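The cost asymmetry is easy to quantify with a back‑of‑envelope sketch. The layer counts and dimensions below are illustrative assumptions, not any particular model’s configuration:

```python
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    """Linear KV-cache growth: keys + values stored for every
    layer, head, and context position (fp16 = 2 bytes/element)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

def attention_flops(seq_len, d_model=4096):
    """Quadratic cost of full self-attention over the context:
    every token scores against every other token."""
    return 2 * seq_len ** 2 * d_model

for n in (1_000, 10_000):
    print(f"{n:>6} tokens: "
          f"{kv_cache_bytes(n) / 2**20:7.0f} MiB KV cache, "
          f"{attention_flops(n) / 1e12:.3f} TFLOPs attention")
```

A 10× longer reasoning chain costs 10× the KV‑cache memory but 100× the attention compute, which is exactly the pressure LightThinker’s compression is meant to relieve.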

LightThinker proposes dynamically compressing intermediate reasoning steps to boost efficiency without sacrificing accuracy.

Core Idea

LightThinker mimics human cognition by condensing lengthy reasoning into compact representations, reducing token count in the context window, lowering computational cost, and speeding up inference while preserving result fidelity.

Implementation

Data construction: Build an augmented dataset with special split functions and tokens that teach the model when and how to compress.

Output segmentation: Use a Seg() function to split reasoning into sub‑sequences based on token count or semantic completeness.
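A minimal sketch of such a segmentation function, cutting at sentence boundaries under a token budget; the paper’s actual Seg() criteria may differ:

```python
import re

def seg(reasoning: str, max_tokens: int = 40):
    """Split a chain of thought into sub-sequences.

    Cuts at sentence boundaries (semantic completeness) while keeping
    each sub-sequence under `max_tokens` whitespace tokens (token-count
    cap). A toy stand-in for LightThinker's Seg() function.
    """
    sentences = re.split(r"(?<=[.!?])\s+", reasoning.strip())
    chunks, current, count = [], [], 0
    for s in sentences:
        n = len(s.split())
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(s)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

steps = seg("First, note x = 2. Then y = x + 3, so y = 5. "
            "Finally, x * y = 10.", max_tokens=12)
print(steps)
```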

Special tokens: Insert compression‑trigger tokens, key‑point tokens [c], and output tokens [o] between sub‑sequences to signal compression actions.
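The interleaving can be pictured as follows. The trigger‑token name and the number of [c] slots are assumptions for illustration; only [c] and [o] come from the description above:

```python
def interleave(chunks, n_cache=2):
    """Insert LightThinker-style special tokens between sub-sequences.

    After each thought chunk we emit a compression trigger, `n_cache`
    key-point tokens [c] that will hold the condensed state, and an
    output token [o] from which generation resumes.
    """
    out = []
    for chunk in chunks:
        out.append(chunk)
        out.append("<compress>")       # compression-trigger token (assumed name)
        out.extend(["[c]"] * n_cache)  # slots that receive the condensed state
        out.append("[o]")              # generation continues, attending to [c]
    return out

seq = interleave(["step one ...", "step two ..."])
print(seq)
```

Training on sequences laid out this way is what teaches the model *when* to compress (at the trigger) and *what* to keep (in the [c] slots).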

Hidden‑state mapping & attention mask: Map hidden states of compressible steps to a few special tokens, and design masks to guide the model to attend to compressed information.
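One way to realize such a mask, as a toy numpy sketch: each token may attend causally within its own segment plus only the cache tokens of earlier segments. The layout (raw tokens followed by one cache slot per segment) is an assumption, not the paper’s exact construction:

```python
import numpy as np

def lightthinker_mask(segments):
    """Build a causal attention mask (1 = may attend) where each token
    sees its own segment's raw tokens plus only the compressed cache
    tokens of earlier segments.

    `segments` is a list of (n_raw, n_cache) pairs in generation order;
    tokens are laid out as [raw_0, cache_0, raw_1, cache_1, ...].
    """
    spans, pos = [], 0
    for n_raw, n_cache in segments:
        spans.append((pos, pos + n_raw, pos + n_raw + n_cache))
        pos += n_raw + n_cache
    mask = np.zeros((pos, pos), dtype=int)
    for i, (start, mid, end) in enumerate(spans):
        for t in range(start, end):
            mask[t, start:t + 1] = 1      # own segment, causal
            for _, pm, pe in spans[:i]:
                mask[t, pm:pe] = 1        # earlier cache tokens only
    return mask

m = lightthinker_mask([(3, 1), (3, 1)])
print(m)  # tokens 4-7 see cache token 3, never raw tokens 0-2
```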

Dependency metric: Introduce a Dependency (Dep) score to quantify reliance on historical tokens; lower Dep indicates effective compression, achieving up to 70% reduction in dependency and a 26% speed‑up.
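A toy version of the Dep score: count the attention pairs a mask permits, then compare against vanilla full causal attention. The 4‑token segment layout below is an assumed example, and its reduction figure is illustrative, not the paper’s reported number:

```python
import numpy as np

def dependency(mask: np.ndarray) -> int:
    """Dep = number of (token, earlier-token) attention pairs allowed.

    Lower Dep means each generated token relies on fewer historical
    tokens, i.e. the compression is doing its job.
    """
    return int(np.tril(mask).sum())

n = 16
vanilla = np.tril(np.ones((n, n), dtype=int))       # full causal attention

# Toy compressed mask: each token sees its own 4-token segment causally
# plus one cache token (the last position) of every earlier segment.
compressed = np.zeros((n, n), dtype=int)
for t in range(n):
    seg_start = (t // 4) * 4
    compressed[t, seg_start:t + 1] = 1              # own segment
    compressed[t, [s + 3 for s in range(0, seg_start, 4)]] = 1  # earlier caches

reduction = 1 - dependency(compressed) / dependency(vanilla)
print(f"Dep reduction: {reduction:.0%}")
```

Even this small example shows how the metric rewards masks that funnel attention through a handful of cache tokens instead of the whole history.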

Conclusion: LightThinker offers an effective solution to the efficiency bottleneck of LLMs on long texts and complex reasoning, improving computation while maintaining accuracy, making it a promising direction for researchers and engineers seeking performance gains.

END

Tags: large language model, AI Agent, AI Market, GPT‑4.5, LightThinker, Manus
Written by ZhongAn Tech Team

China's first online insurer. Through tech innovation we make insurance simpler, warmer, and more valuable. Powered by technology, we support 50 billion RMB of policies and serve 600 million users with smart, personalized solutions.