Hands‑On Guide: Build AI Agent Chatbots on Windows with RagFlow

Programmer Xiao Meng walks through a complete Windows setup for AI‑powered customer service agents using RagFlow, covering prerequisites, Docker and Ollama installation, model download, container deployment, configuration of knowledge bases, and testing, based on five real‑world projects.

SpringMeng

1. Introduction

The author, a programmer, recently delivered five projects (e‑commerce, ERP, finance, ordering, and blind‑box systems) that each required an AI agent to replace manual customer service. This article shows how to build a local Windows smart‑customer‑service solution by integrating a large language model with RagFlow.

2. Prerequisites

CPU >= 4 cores

RAM >= 16 GB

Disk space >= 50 GB

Docker >= 24.0.0 and Docker Compose >= v2.26.1

Docker will later pull images and create containers and volumes, which together require roughly 40 GB of free space.
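Once Docker is installed (Section 3), the tool-side requirements above can be checked from a terminal. A minimal sketch for cmd.exe, assuming a standard Docker Desktop install:

```shell
rem Check Docker and Compose versions against the prerequisites
docker --version
docker compose version

rem Show free bytes on the drive that will hold Docker's data
dir D:\ | findstr /C:"bytes free"
```

If either version is below the minimum, update Docker Desktop before continuing.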

3. Install Docker

Download Docker Desktop Installer.exe and double‑click to run it. Follow the installer, clicking “Next” and “Agree” until completion. To use a non‑default location, run the installer from a terminal instead; for example, to install to D:\Program Files\Docker and set the WSL data root to D:\Program Files\Docker\data:

start /w "" "Docker Desktop Installer.exe" install --accept-license --installation-dir="D:\Program Files\Docker" --wsl-default-data-root="D:\Program Files\Docker\data" --windows-containers-default-data-root="D:\Program Files\Docker"

4. Install Ollama and Models

Run the Ollama installer (ollama.exe) and accept the prompts. Then pull a large language model and an embedding model (ollama run downloads a model automatically on first use):

ollama run qwen2.5:3b
ollama run shaw/dmeta-embedding-zh:latest

Verify the models with ollama list.
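Beyond listing the models, you can confirm that the Ollama service itself is reachable and that the chat model responds. A sketch, assuming Ollama's default local API port of 11434:

```shell
rem List downloaded models (names, tags, sizes)
ollama list

rem Ollama serves a local HTTP API on port 11434 by default;
rem /api/tags returns the installed models as JSON
curl http://localhost:11434/api/tags

rem One-shot prompt to check that the chat model answers
ollama run qwen2.5:3b "Say hello in one sentence."
```

If the curl call fails, make sure the Ollama background service is running before deploying RagFlow, which will need to reach it.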

5. Deploy RagFlow

Place the RagFlow package in D:\company\ragflow-v0.22.1\docker. Then execute:

cd D:\company\ragflow-v0.22.1\docker
docker compose -f docker-compose.yml up -d

Check the logs to confirm the server started:

docker logs -f docker-ragflow-cpu-1

A successful start shows a welcome banner in the log output (image omitted).
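If the log command errors because the container name differs on your machine, list the stack's containers first and take the server's name from there. A minimal check, assuming the stock compose file:

```shell
rem Show the RagFlow stack's containers, their state, and ports
docker compose -f docker-compose.yml ps

rem Or filter all running containers by name
docker ps --filter "name=ragflow"

rem Then follow the server log until the welcome banner appears
docker logs -f docker-ragflow-cpu-1
```

All containers should report a running (and, where defined, healthy) state before you open the web UI.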

6. Configuration

Open http://localhost/, register an account, and log in. Create a personal knowledge base, select the downloaded models, and set the system prompt as follows (translated from the original Chinese):

You are a professional intelligent assistant who answers user questions based strictly on the provided knowledge base. Follow this process:

Understand and analyze: Carefully understand the user's current question and use the conversation history to grasp the context.

Knowledge base retrieval and matching: Search the knowledge base for the content most relevant to the current question. Answers must be built from specific information in the knowledge base.

Judge and respond:
If relevant information exists in the knowledge base, answer directly and clearly based on it, relating it to the conversation history where appropriate.
If, after a thorough search, you confirm that nothing in the knowledge base is directly or indirectly relevant to the current question, you must begin your answer with this exact sentence: "No answer to your question was found in the knowledge base!" After that, you may either stay silent or supplement with your own general knowledge (while reminding the user that it does not come from the knowledge base).
The knowledge base begins here: {knowledge}
The knowledge base ends here.

Save the configuration.

7. Test the Smart Dialogue

Use the web UI to start a chat, customize avatar, name, and opening line, and select the previously created knowledge base. The system now answers queries based on the knowledge base and the LLM.

8. Create APIs and Agents

In the RagFlow console, generate an API key and endpoint for each chat assistant, and define a separate agent per project as needed.
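As a sketch of what calling such an endpoint can look like: RagFlow exposes an HTTP API authenticated with the API key from the console. The endpoint path, the chat ID, and the request payload below are illustrative assumptions; check the API reference for your RagFlow version before relying on them.

```shell
rem Illustrative only: YOUR_API_KEY and YOUR_CHAT_ID are placeholders
rem taken from the RagFlow console; the path may differ by version.
curl -X POST "http://localhost/api/v1/chats/YOUR_CHAT_ID/completions" ^
  -H "Authorization: Bearer YOUR_API_KEY" ^
  -H "Content-Type: application/json" ^
  -d "{\"question\": \"What are your opening hours?\", \"stream\": false}"
```

A per‑project backend (e‑commerce, ERP, finance, and so on) can then call its own assistant's endpoint with its own key.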

9. Conclusion

Following this guide, the author successfully built five AI agent projects. The tutorial is fully hands‑on and can be reproduced on any Windows machine that meets the prerequisites.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Docker, RAG, large language model, Windows, Ollama, AI chatbot, RAGFlow
Written by SpringMeng

Focused on software development, sharing source code and tutorials for various systems.