Testing OpenManus with DeepSeek: A Hands‑On Evaluation

The author walks through installing OpenManus, configuring it to use DeepSeek (and an Ollama-based vision model), and running a sample financial-data query, and reports that the system is slow, sometimes inaccurate, and still needs further optimization.

Infra Learning Club

Recently I had time to investigate last week’s hot topic, OpenManus, a community‑built clone of Manus that removes the invitation requirement.

What OpenManus Can Do

Complex task planning and execution: it breaks down intricate tasks into smaller steps and runs them automatically.

Tool invocation and automation: it can call browsers, data‑analysis software, file utilities, terminal commands, etc., without the user handling low‑level details.

Intelligent information collection and processing: it browses webpages, extracts data, and organizes the content precisely.
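The first capability above, breaking a task into steps and running them automatically, follows the familiar plan-and-execute agent pattern. The sketch below illustrates that pattern only; the `plan` function, `TOOLS` registry, and tool names are illustrative placeholders, not OpenManus's actual API (a real agent would ask the LLM to produce the plan):

```python
# Minimal plan-and-execute sketch: a planner decomposes a task into steps,
# and an executor dispatches each step to a named tool.
def plan(task: str) -> list[dict]:
    # A real agent would ask the LLM to decompose the task; this
    # hard-coded decomposition is purely for demonstration.
    return [
        {"tool": "browser", "arg": f"search: {task}"},
        {"tool": "extract", "arg": "dividend and revenue figures"},
        {"tool": "report", "arg": "summarize findings"},
    ]

# Tool registry: each tool is a callable taking the step's argument.
TOOLS = {
    "browser": lambda arg: f"opened {arg}",
    "extract": lambda arg: f"extracted {arg}",
    "report": lambda arg: f"wrote {arg}",
}

def execute(task: str) -> list[str]:
    # Run every planned step through its tool, collecting the results.
    return [TOOLS[step["tool"]](step["arg"]) for step in plan(task)]

results = execute("2024 financials for Changjiang Power")
print(results)
```

The point is the control flow, not the tools: the user states a goal, and the agent handles decomposition and dispatch without exposing the low-level steps.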

OpenManus was created by the MetaGPT team, which focuses on multi-agent frameworks. Core contributors such as Liang Xinbing and Xiang Jinyu have strong AI-agent backgrounds: they have performed well in AI competitions and contributed to projects like an open-source Devin, experience that helped them quickly understand Manus and implement a replica.

Installation

I chose the Conda method, which I use most often:

conda create -n open_manus python=3.12
conda activate open_manus

Clone the repository:

git clone https://github.com/mannaandpoem/OpenManus.git
cd OpenManus

Install dependencies:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
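After the install finishes, a quick sanity check confirms the right interpreter is active before moving on (the interpreter name assumes the conda env created above is activated; adjust if yours differs):

```shell
# Optional sanity check after installing:
python3 --version                            # expect Python 3.12.x inside the open_manus env
python3 -c "import sys; print(sys.prefix)"   # shows which environment is actually active
```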

Once installation succeeds, edit the LLM API configuration.

Configuration Details

Copy the example configuration file and modify it with your API keys:

cp config/config.example.toml config/config.toml
# Global LLM configuration
[llm]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # replace with real key
max_tokens = 4096
temperature = 0.0

# Optional specific LLM model configuration
[llm.vision]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # replace with real key

Because I did not have a gpt‑4o account, I switched to DeepSeek’s chat API. DeepSeek does not provide a public vision model, so I used Ollama with the minicpm‑v model for visual tasks:

# Global LLM configuration
[llm]
model = "deepseek-chat"
base_url = "https://api.deepseek.com"
api_key = "sk-8.....9ff89"
max_tokens = 4096
temperature = 0.0

[llm.vision]
api_type = 'ollama'
model = "minicpm-v"
base_url = "http://ollamahost:11434/v1"
api_key = "ollama"
max_tokens = 4096
temperature = 0.0
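Before a run, it can save time to confirm the Ollama host is actually reachable and has the vision model pulled. Ollama's HTTP API exposes `GET /api/tags`, which lists locally pulled models; the snippet below is my own check (not part of OpenManus), and the `ollamahost` name is a placeholder mirroring the config above:

```python
import json
import urllib.request

def tags_url(host: str) -> str:
    """Build the Ollama model-listing endpoint for a given host."""
    return host.rstrip("/") + "/api/tags"

def ollama_models(host: str = "http://localhost:11434") -> list[str]:
    # Queries a live Ollama server; requires the host to be up.
    with urllib.request.urlopen(tags_url(host), timeout=5) as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]

print(tags_url("http://ollamahost:11434"))  # -> http://ollamahost:11434/api/tags
```

If a name like `minicpm-v:latest` appears in the returned list, the vision model is pulled and ready; if the request fails outright, the `base_url` host or port in `[llm.vision]` is likely wrong.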

Quick Start

Run the program with a single command:

python main.py

Example prompt used:

"Help me find the 2024 dividend and revenue data for Changjiang Power Company."

The system opened a browser, visited Sina Finance, and captured a screenshot, but the task ultimately failed.

The failure cost about ¥0.20, and the overall experience was that OpenManus runs slowly and produces results that are not very accurate, indicating that further optimization is needed.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI agents, LLM, OpenManus, DeepSeek, Conda, MetaGPT
Written by

Infra Learning Club

Infra Learning Club shares study notes, cutting-edge technology, and career discussions.
