How to Deploy Dify and Ollama Locally on Windows 11: A Step‑by‑Step Guide
This article walks through enabling Hyper‑V on Windows 11 Pro, configuring Docker Desktop with Chinese mirrors, adjusting storage, installing Ubuntu via WSL, cloning and setting up Dify, running Docker Compose, and linking Ollama's LLM so the AI agent runs entirely on a local machine.
1. Enable Hyper‑V
Open Control Panel → Programs → Turn Windows features on or off and check all Hyper‑V options. This feature is only available in Windows 11 Pro.
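If you prefer the command line, the same feature can be enabled from an elevated PowerShell prompt (a reboot is still required afterwards):
# Run in an Administrator PowerShell; Windows will prompt for a restart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All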
2. Install Docker Desktop and configure domestic mirrors
After installing Docker Desktop, edit its daemon configuration (daemon.json, editable directly under Settings → Docker Engine) to use faster Chinese registry mirrors:
{
"registry-mirrors": [
"https://docker.1ms.run",
"https://docker.kejilion.pro",
"https://docker-0.unsee.tech",
"https://dhub.kubesre.xyz",
"https://docker.tbedu.top",
"https://hub.crdz.gq",
"https://image.cloudlayer.icu"
]
}
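Docker Desktop restarts the engine when the configuration is saved; that the mirrors took effect can be confirmed from any terminal:
# The configured mirrors are listed under "Registry Mirrors" near the end of the output
docker info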
3. Relocate Docker storage
To avoid filling the C: drive, change the disk image location via Settings → Resources → Advanced → Disk image location in Docker Desktop.
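If your Docker Desktop build does not expose that setting (older WSL 2 builds keep images in a separate docker-desktop-data distribution), the disk image can be moved manually with WSL's export/import commands; check wsl -l -v first, since the distribution name varies by version:
# Stop Docker Desktop first, then shut down WSL
wsl --shutdown
# Export the data distribution to the target drive
wsl --export docker-desktop-data D:\docker\docker-desktop-data.tar
wsl --unregister docker-desktop-data
# Re-import it at the new location
wsl --import docker-desktop-data D:\docker\data D:\docker\docker-desktop-data.tar --version 2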
4. Install Ubuntu via WSL
Open PowerShell and run:
wsl --install
After the download finishes, create a Linux user name and password as prompted.
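Docker Desktop's WSL backend requires WSL 2, so it is worth confirming that the new distribution is registered with the right version:
# Lists installed distributions; the VERSION column should read 2
wsl --list --verbose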
5. Clone the Dify repository
# Switch to a user directory to avoid permission issues
cd C:\Users\<your-username>\
# Clone from GitHub (or use the Gitee mirror if GitHub is slow)
git clone https://github.com/langgenius/dify.git
# Alternative mirror
# git clone https://gitee.com/langchain-ai/dify.git
cd dify\docker
If the clone fails, an accelerated link such as https://gitcode.com/GitHub_Trending/di/dify/tags can be used.
6. Prepare the environment file
Copy the example file, then change the default port (80) because it may already be occupied:
cp .env.example .env
Edit .env and set PORT=8100 (or any unused port).
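The exact variable name depends on the Dify release: recent versions of docker/.env.example publish the web entry point through nginx variables rather than a single PORT. A minimal sketch of the relevant lines, assuming a recent release:
# .env — host ports published by the nginx container
# (older releases may use a single PORT variable instead)
EXPOSE_NGINX_PORT=8100
EXPOSE_NGINX_SSL_PORT=8443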
7. Launch the containers
Remove stale containers if a previous attempt left them behind:
docker compose down
Start the services in detached mode:
docker compose up -d
The startup logs show the containers initializing, and the installation page becomes reachable.
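Before opening a browser, a quick status check confirms that every service came up:
# All services should report "running" (or "healthy" where a healthcheck is defined)
docker compose ps
# Tail the logs if any container keeps restarting
docker compose logs -f --tail 50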
8. Verify the service
Open a browser and navigate to http://localhost:8100/install. The setup page loads, and an init_permissic entry appears in the startup logs, indicating a successful start.
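If the page does not load, a reachability probe from the host helps separate a port-mapping problem from a container problem (curl.exe ships with Windows 11):
# Any HTTP response (rather than a connection error) means nginx is listening
curl.exe -I http://localhost:8100/install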
9. Configure Ollama in Dify
In the Dify UI, click the avatar → Settings → Model Provider, locate the Ollama card and press “Install”. Fill in the following fields (values must match the output of ollama list):
Model Name: qwen2.5:7b (exactly as shown by ollama list)
Model Type: LLM
Base URL: http://host.docker.internal:11434 (or http://<your-IP>:11434 for cross-machine access)
API Key: any string, e.g., ollama-local (Ollama itself does not require a key)
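Before saving, it helps to confirm that Ollama is actually serving the model named in the form; both checks below use the standard Ollama CLI and REST endpoint:
# The model entered in Dify must appear in this list
ollama list
# Ollama's REST API returns the same catalogue as JSON
curl.exe http://localhost:11434/api/tags
For cross-machine access, Ollama must also listen on all interfaces; set the OLLAMA_HOST=0.0.0.0 environment variable before starting it.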
10. Final test
After saving the configuration, the Dify interface can invoke the selected Ollama model. The setup has been verified to work on a Windows 11 Pro machine with an RTX 4060 Ti, 32 GB of RAM, an i7‑10700KF, and a 1 TB SSD.
References
Dify official site: https://dify-china.com/
Ollama Chinese community: https://ollamacn.github.io/