How to Build an AI Agent with Ollama: From Model Setup to Knowledge Base

This step‑by‑step guide shows how to create an AI Agent: configure a local Ollama model, select an embedding model, build a knowledge base, upload documents, and test the agent's retrieval, giving developers a practical RAG workflow.

Architect's Alchemy Furnace

An AI Agent bridges large models and business applications: AI Agent = large model + knowledge base + business system API + workflow orchestration.
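The formula above can be sketched as a minimal composition. This is an illustrative model only, not the platform's API; all class and function names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative sketch: agent = LLM + knowledge base + business APIs + workflow.
@dataclass
class Agent:
    llm: Callable[[str], str]                                # large model: prompt -> answer
    knowledge_base: List[str] = field(default_factory=list)  # retrievable text chunks
    apis: dict = field(default_factory=dict)                 # business system API handlers
    workflow: List[str] = field(default_factory=list)        # ordered orchestration steps

    def answer(self, question: str) -> str:
        # Naive retrieval stub: keep chunks sharing a word with the question.
        hits = [d for d in self.knowledge_base if any(w in d for w in question.split())]
        context = "\n".join(hits)
        return self.llm(f"Context:\n{context}\n\nQuestion: {question}")

agent = Agent(llm=lambda p: f"LLM saw {len(p)} chars",
              knowledge_base=["Ollama runs models locally."])
print(agent.answer("Where does Ollama run?"))
```

A real agent would replace the lambda with a call to the configured model and the word-match stub with embedding-based retrieval, as covered in the steps below.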

Step 1: Create the Agent Application

Click "Create Blank Application" on the left, select "Agent", give a name and icon, and finish creation.

Step 2: Configure the Agent Model

Open the created agent, go to "Settings", and enter an API Key for a remote model (e.g., DeepSeek) or choose a local model. Remote API keys may incur costs and expose knowledge‑base data, so a local model is preferred.

In this guide we use the local model Ollama.

2.1 Configure LLM

In the agent settings, click "Add Model" under the model provider, select Ollama, and add the desired model.

You can also add models via the account menu → Settings.

Ensure Ollama is installed and the required models are downloaded; for example, run ollama pull <model-name> in a terminal. If the agent runs in Docker, it cannot reach http://localhost:11434, because localhost there refers to the container rather than the host machine. Instead, add an environment variable OLLAMA_HOST set to the machine's LAN IP or 0.0.0.0, reference %OLLAMA_HOST% in the model path, and restart Ollama.
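The host override can be sketched in a few lines. resolve_ollama_url is a hypothetical helper, not part of any SDK; it just shows the intended precedence of OLLAMA_HOST over the localhost default:

```python
import os

def resolve_ollama_url(default: str = "http://localhost:11434") -> str:
    """Return the Ollama base URL, honoring the OLLAMA_HOST override.

    Inside Docker, 'localhost' points at the container itself, so the
    host's LAN IP (or 0.0.0.0 on the server side) must be supplied.
    """
    host = os.environ.get("OLLAMA_HOST")
    if not host:
        return default
    # OLLAMA_HOST may be bare ("192.168.1.10") or include a scheme and port.
    if not host.startswith("http"):
        host = f"http://{host}"
    if host.count(":") < 2:          # no explicit port given
        host = f"{host}:11434"       # Ollama's default port
    return host

os.environ["OLLAMA_HOST"] = "192.168.1.10"
print(resolve_ollama_url())   # http://192.168.1.10:11434
```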

2.2 Configure Knowledge Base Embedding Model

We use DeepSeek for reasoning, but for embeddings we choose BGE‑M3, which performs better on Chinese retrieval tasks. Pull it through Ollama: search the model library for "embedding" and download it (e.g., ollama pull bge-m3).
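An embedding model like BGE‑M3 maps text to vectors, and retrieval then ranks chunks by cosine similarity. A minimal sketch with toy 3‑dimensional vectors (real bge-m3 vectors have around 1024 dimensions and would come from Ollama's embeddings endpoint, which this example does not call):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for bge-m3 output.
chunks = {
    "printer setup": [0.9, 0.1, 0.0],
    "warranty terms": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]
best = max(chunks, key=lambda c: cosine(query, chunks[c]))
print(best)   # printer setup
```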

After saving, the LLM shows deepseek-r1:14b and the text embedding uses bge-m3.

Both models are now configured.

Step 3: Knowledge Base Operations

3.1 Create Knowledge Base

3.2 Upload RAG Data

Select a data source, segment and clean the text, then process. Many file types are supported, but each file must be under 15 MB.
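The size check and segmentation step can be sketched as follows. Fixed-length chunking with overlap is a common generic default, not necessarily this platform's exact algorithm:

```python
MAX_UPLOAD_BYTES = 15 * 1024 * 1024   # per-file limit from the guide

def check_size(num_bytes: int) -> bool:
    """True if a file is within the 15 MB upload limit."""
    return num_bytes <= MAX_UPLOAD_BYTES

def segment(text: str, size: int = 500, overlap: int = 50):
    """Split text into fixed-length chunks with overlap, so a sentence
    straddling a boundary still appears whole in at least one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "x" * 1200
parts = segment(doc)
print(len(parts), [len(p) for p in parts])   # 3 [500, 500, 300]
```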

3.3 Save and Process

Upload a 12.86 MB PDF user manual, use generic segmentation, high‑quality indexing, BGE‑M3 embedding, and hybrid retrieval. Click Save and wait for processing.
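Hybrid retrieval blends a keyword (sparse) score with a vector (dense) similarity score; a common scheme is a weighted sum over normalized scores. A sketch, where the 0.5 weight is an assumption rather than the platform's documented default:

```python
def hybrid_score(keyword_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Blend a keyword-match score and a vector-similarity score,
    both assumed pre-normalized to [0, 1]. alpha weights the keyword side."""
    return alpha * keyword_score + (1 - alpha) * vector_score

candidates = {
    "chunk-a": (0.8, 0.40),   # strong keyword match, weak semantic match
    "chunk-b": (0.3, 0.95),   # weak keyword match, strong semantic match
}
ranked = sorted(candidates, key=lambda c: hybrid_score(*candidates[c]), reverse=True)
print(ranked)   # ['chunk-b', 'chunk-a']
```

Tuning alpha toward 1.0 favors exact-term matches (useful for part numbers in a manual); toward 0.0 it favors semantic similarity.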

After processing, the document appears in the knowledge base.

Step 4: Test the Agent

Open the agent in the studio and run a query. The AI retrieves relevant information from the knowledge base and composes a response.
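The test step follows the standard RAG loop: retrieve the top-k chunks, place them in the prompt as context, then ask the model. A sketch with a stub list standing in for real retrieval; the prompt wording is illustrative:

```python
def build_rag_prompt(question: str, chunks, k: int = 2) -> str:
    """Compose the prompt the agent sends to the LLM: top-k retrieved
    chunks as context, followed by the user's question."""
    context = "\n---\n".join(chunks[:k])
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

retrieved = ["Hold the reset button for 5 seconds.", "Warranty covers 2 years."]
prompt = build_rag_prompt("How do I reset the device?", retrieved)
print(prompt.splitlines()[0])   # Answer using only the context below.
```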

Next steps include improving the RAG pipeline and integrating the agent into custom systems.

Tags: RAG, Embedding, AI Agent, Ollama
Written by

Architect's Alchemy Furnace

A platform combining Java development and architecture design, publishing 100% original content. We explore the essence and philosophy of architecture and provide professional technical articles for aspiring architects.
