Build a Local AI Assistant with DeepSeek and Ollama in 10 Minutes
This guide walks you through installing Ollama, downloading the DeepSeek model, and configuring the Chatbox AI client so you can run a powerful local AI assistant on Windows, macOS, or Linux within minutes.
If you want a local AI assistant quickly, DeepSeek can stand in for OpenAI and give you a ready-to-use setup without complex configuration.
1. Install DeepSeek
DeepSeek runs on top of Ollama, an open-source framework designed for easy deployment of large language models (LLMs) on local machines. Ollama offers a simple, Docker-like workflow for pulling and running models, and it works on macOS, Linux, and Windows. Download it from https://ollama.com/.
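On Linux, Ollama can also be installed from the terminal with its official install script (macOS and Windows users can simply run the downloaded installer). A minimal sketch:

```shell
# Linux: install Ollama via the official script
# (macOS/Windows: use the installer from https://ollama.com/)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is on your PATH
ollama --version
```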
After installing Ollama, search for the DeepSeek model and run the following command in the console:
```shell
ollama run deepseek-r1:7b
```

Once the model is downloaded, your computer becomes a small DeepSeek server ready to answer queries.
If you have multiple models installed, list them with:

```shell
ollama list
```

and run a specific one with:

```shell
ollama run <model>
```
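Once the model is running, Ollama also exposes a local REST API (on port 11434 by default), which is what makes your machine act as a small server. As a sketch, assuming the deepseek-r1:7b model from above has been pulled and the Ollama service is running:

```shell
# Send a one-off prompt to the local Ollama server's generate endpoint.
# "stream": false returns the full response as a single JSON object.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Explain what a local LLM is in one sentence.",
  "stream": false
}'
```

This is the same local endpoint that desktop clients such as Chatbox talk to behind the scenes.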
2. Install Chatbox
Chatbox AI is a client application that supports many AI models and APIs, available on Windows, macOS, Android, iOS, Linux, and the web.
Download and launch Chatbox, then set your preferred interface language. In the model settings, select the DeepSeek model you just installed and save the configuration.
After saving, you can start using the various features of Chatbox with your local DeepSeek model.
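If you prefer to script conversations instead of using a GUI client, the same local server offers a chat endpoint that accepts a message history. A minimal sketch, assuming the deepseek-r1:7b model and a running Ollama service:

```shell
# Multi-turn chat against the local Ollama server: clients like Chatbox
# maintain the "messages" array for you; here we build it by hand.
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:7b",
  "messages": [
    {"role": "user", "content": "Give me one tip for running LLMs locally."}
  ],
  "stream": false
}'
```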
JD Cloud Developers
JD Cloud Developers (the developer account of JD Technology) is a JD Technology Group platform for technical sharing and communication among AI, cloud computing, IoT, and related developers. It publishes JD product technical information, industry content, and tech event news. Embrace technology and partner with developers to envision the future.