Run AgenticSeek Locally: Complete Guide to a Private AI Assistant

This guide walks you through installing, configuring, and running AgenticSeek—a fully local, privacy‑focused AI assistant—by setting up prerequisites, cloning the repository, adjusting environment files, launching Docker services or CLI mode, and troubleshooting common issues.


Overview

AgenticSeek is a 100% local replacement for Manus AI that provides a voice‑enabled AI assistant capable of web browsing, code generation, and task planning while keeping all data on your own device.

Why Choose AgenticSeek

Fully local and private – no cloud data sharing.

Autonomous web browsing for search, reading, and form filling.

Programming assistant that can write, debug, and run code in multiple languages.

Smart agent selection to match the best tool for each task.

Task planning that breaks complex jobs into steps.

Voice support (currently in development).

Prerequisites

Git – for cloning the repository.

Python 3.10.x – recommended to avoid dependency issues.

Docker Engine & Docker Compose – required for bundled services such as SearxNG.

Clone Repository and Initial Setup

git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env

Configure .env

SEARXNG_BASE_URL="http://searxng:8080"
# Use http://127.0.0.1:8080 for CLI mode on host
REDIS_BASE_URL="redis://redis:6379/0"
WORK_DIR="/Users/yourname/workspace_for_ai"
OLLAMA_PORT="11434"
LM_STUDIO_PORT="1234"
# Optional API keys – leave empty for pure local operation

Adjust the values as needed, especially WORK_DIR, which points to the directory AgenticSeek is allowed to read and write.
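For example, to create a dedicated workspace directory on macOS or Linux (the path itself is illustrative; set WORK_DIR to whatever you choose):

mkdir -p ~/workspace_for_ai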

Start Docker Services

Ensure Docker is running, then launch the full stack (searxng, redis, frontend, backend) with:

./start_services.sh full        # macOS
start start_services.cmd full   # Windows

Wait for the backend health check ("GET /health HTTP/1.1" 200 OK) to appear in the logs before sending any queries; the first start may take up to 30 minutes.
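One way to watch for that line is to tail the backend container's logs (the service name backend is assumed from the compose file):

docker compose logs -f backend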

CLI Mode (Optional)

If you prefer a command‑line interface, install the host packages:

./install.sh    # macOS/Linux
./install.bat   # Windows

Update SEARXNG_BASE_URL to "http://localhost:8080", then start the minimal services:

./start_services.sh        # macOS/Linux
start start_services.cmd   # Windows

Run the CLI with:

uv run cli.py

Local LLM Provider Setup

Supported local providers include ollama, lm-studio, and a custom server. Start the provider first, for example ollama serve, then edit config.ini:

[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = http://127.0.0.1:11434
agent_name = Friday
listen = False   # set True to enable voice-to-text (CLI mode only)

Key options:

is_local – True for local providers, False for cloud APIs.

provider_name – one of ollama, lm-studio, openai (local server), server, or a cloud provider name.

provider_model – specific model identifier, e.g., deepseek-r1:32b.

provider_server_address – address of the local service; ignored for cloud APIs.
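Before starting AgenticSeek, it is worth confirming that the model has been pulled and that Ollama answers at the configured address (these use Ollama's standard CLI and REST endpoints):

ollama pull deepseek-r1:32b
curl http://127.0.0.1:11434/api/tags    # lists the models Ollama can serve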

Voice‑to‑Text (CLI Only)

Enable by setting listen = True and choosing an English name for agent_name (e.g., Friday). Speak the name to activate the assistant, then issue your query and finish with a confirmation phrase such as “do it” or “go ahead”.
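The corresponding config.ini entries look like this:

[MAIN]
agent_name = Friday
listen = True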

Known Issues & ChromeDriver Troubleshooting

If the embedded browser fails with a SessionNotCreatedException, ensure the ChromeDriver version matches your Chrome browser. Download the correct driver from the Chrome for Testing page or place a matching binary in the project root as ./chromedriver. Verify with ./chromedriver --version and check Docker logs for the message “Using ChromeDriver from project root”.
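A quick way to compare the two versions (the macOS Chrome path is shown; on Linux, google-chrome --version works):

"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --version
./chromedriver --version    # major versions should match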

FAQ

Hardware requirements: 7B models need ~8 GB VRAM (limited performance); 14B models work on 12 GB VRAM (e.g., RTX 3060); 32B models need 24+ GB VRAM (e.g., RTX 4090); 70B+ models require 48+ GB VRAM for best results.

Is it truly 100% local? Yes, when using Ollama, LM Studio, or a self‑hosted server; cloud APIs are optional.

Why use AgenticSeek over Manus? It offers full privacy, no API costs, and greater control over the underlying models.
