How to Deploy AnythingLLM Locally with Docker for Enterprise Document RAG

This guide walks through setting up an Ubuntu VM, installing Docker, pulling the AnythingLLM image, configuring storage, launching the container, and using it to ingest and query local documents with a DeepSeek‑R1 model.


Enterprises accumulate large volumes of documentation that go to waste unless they are made searchable. By leveraging large language models, AnythingLLM turns these documents into a queryable knowledge base. This article is a step‑by‑step tutorial for deploying AnythingLLM locally on an Ubuntu 20.04 VM using Docker.

1. Prepare the Environment

Set up an 8‑core, 32 GB virtual machine (Ubuntu 20.04.5 LTS, kernel 5.4.0‑125‑generic). Install Docker following the official instructions:

# Install prerequisites for fetching the key and repository metadata
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add the Docker stable repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io

Verify the installation with docker --version; it should report a version such as 28.1.1.
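In a provisioning script, the version check can be automated. The helper below is a hypothetical sketch: it only parses a "docker --version" style string, so it runs even where Docker is not yet installed.

```shell
# Hypothetical helper: extract the numeric version from "docker --version" output.
check_docker_version() {
  # "Docker version 28.1.1, build 4eba377" -> "28.1.1"
  echo "$1" | sed -n 's/^Docker version \([0-9.][0-9.]*\).*/\1/p'
}

# On a real host you would feed it live output:
#   check_docker_version "$(docker --version)"
check_docker_version "Docker version 28.1.1, build 4eba377"   # prints: 28.1.1
```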

2. Pull the AnythingLLM Image

Pull from a Huawei Cloud SWR mirror of Docker Hub for faster downloads in mainland China, then list local images to confirm:

# Pull image
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/mintplexlabs/anythingllm:latest

# List images
docker image ls
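Because the mirror prefixes the upstream path, the local image name is long. Stripping everything up to docker.io/ recovers the original Docker Hub name, which you can use to retag the image locally; this is a convenience sketch and is not required for the steps below.

```shell
# The mirror keeps the upstream path after "docker.io/"; recover the short name.
MIRROR_IMAGE="swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/mintplexlabs/anythingllm:latest"
SHORT_NAME="${MIRROR_IMAGE#*docker.io/}"
echo "$SHORT_NAME"   # prints: mintplexlabs/anythingllm:latest

# Optional: give the pulled image its familiar Docker Hub name.
#   docker tag "$MIRROR_IMAGE" "$SHORT_NAME"
```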

3. Configure Storage and Environment

Create a persistent storage directory and an empty .env file, then set ownership to UID/GID 1000 (the app user inside the container) and readable permissions:

sudo mkdir -p /opt/anythingllm
sudo touch /opt/anythingllm/.env
sudo chown -R 1000:1000 /opt/anythingllm
sudo chmod -R 755 /opt/anythingllm
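To confirm the permissions took effect, stat can report the octal mode and owner. The sketch below demonstrates the check on a throwaway temp directory so it is safe to run anywhere; on the deployment host you would point it at /opt/anythingllm instead.

```shell
# Demonstrate the permission check on a temp directory (safe to run anywhere).
DIR="$(mktemp -d)"
chmod 755 "$DIR"
stat -c '%a' "$DIR"   # prints: 755

# On the deployment host, the equivalent check would be:
#   stat -c '%u:%g %a' /opt/anythingllm   # expect: 1000:1000 755
```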

4. Launch the Container

Export the storage location and run the container with root privileges, mounting the storage and environment file:

export STORAGE_LOCATION=/opt/anythingllm && \
docker run -d -p 3001:3001 \
  --user root \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/mintplexlabs/anythingllm
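The same container can be expressed as a Compose service, which some teams find easier to keep under version control. This is an equivalent sketch of the docker run command above (same image, port, capability, mounts, and environment; the file name docker-compose.yml is the usual convention):

```yaml
# docker-compose.yml -- equivalent to the docker run command above (sketch).
services:
  anythingllm:
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/mintplexlabs/anythingllm:latest
    user: root
    cap_add:
      - SYS_ADMIN
    ports:
      - "3001:3001"
    environment:
      STORAGE_DIR: /app/server/storage
    volumes:
      - /opt/anythingllm:/app/server/storage
      - /opt/anythingllm/.env:/app/server/.env
```

Start it with docker compose up -d from the directory containing the file.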

After the container starts, open the web UI at http://<host_ip>:3001. The interface shows the DeepSeek‑R1 model, served through Ollama, ready for use.

5. Upload and Pin Local Documents (RAG)

With the environment ready, upload documents to a workspace. The process involves three steps:

Upload → Process/Embed (vectorize) → Pin into workspace (associate with the current workspace)

After uploading, the DeepSeek‑R1 model runs on the GPU, and queries such as “What are the emergency plans for the enterprise cloud platform?” return summarized answers drawn from the indexed documents.
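Queries can also be issued programmatically. AnythingLLM exposes a developer API behind an API key; the endpoint path, workspace slug, and payload shape below are assumptions to verify against your instance's built-in API documentation.

```shell
# Build the JSON payload for a workspace query (payload shape is an assumption;
# verify against your instance's API docs).
make_query_payload() {
  printf '{"message": "%s", "mode": "query"}' "$1"
}
make_query_payload "What are the emergency plans for the enterprise cloud platform?"

# Sending it would look roughly like this (hypothetical endpoint and slug):
#   curl -s -X POST "http://<host_ip>:3001/api/v1/workspace/<slug>/chat" \
#     -H "Authorization: Bearer $ANYTHINGLLM_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$(make_query_payload 'your question')"
```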

That completes the deployment; from here, keep exploring what local AI can do with your documents.

Written by

Tech Stroll Journey

The philosophy behind "Stroll": continuous learning, curiosity‑driven, and practice‑focused.
