Deploy DeepSeek‑R1 on Your Server in 15 Minutes with Zero Code
This guide shows how to use the lightweight OpenStation platform to install, configure, and launch the DeepSeek‑R1 large model on a personal server in under 15 minutes, covering zero‑code deployment, resource management, inference‑engine selection, and integration with CherryStudio.
What is OpenStation?
OpenStation is a lightweight large‑model deployment platform designed for fast, zero‑code deployment and management of AI models. Its main features include:
Zero‑code deployment: No programming required; deployment is done through a web UI.
Standard API: Provides an OpenAI‑compatible interface for easy client integration.
High‑performance inference engines: Supports SGLang, vLLM, and both single‑node and distributed setups.
Resource management & load balancing: Quickly add or remove nodes and automatically balance traffic.
User permission control: Built‑in API‑key authentication for access management.
Quickly Deploy DeepSeek‑R1
Step 1: Install OpenStation
OpenStation offers both online and offline installation. For the online method, run the following commands in the server's terminal:
curl -O https://fastaistack.oss-cn-beijing.aliyuncs.com/openstation/openstation-install-online.sh
bash openstation-install-online.sh --version 0.6.3

During installation, provide the server IP address and indicate whether the node has a GPU. After the process finishes, open http://YOUR_SERVER_IP:32206 in a browser; the OpenStation login page confirms a successful install.
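As a quick sanity check (a minimal sketch; the address and port come from the URL above), you can also confirm from the terminal that the web UI is answering before opening a browser:

# Expect an HTTP 200 (or a redirect to the login page) if OpenStation is up.
curl -sI http://YOUR_SERVER_IP:32206 | head -n 1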
Step 2: Deploy DeepSeek‑R1 Model Service
Log in to the OpenStation UI, navigate to Model Service → New Deployment, and select the DeepSeek‑R1 model from the built‑in library (or import it manually). Choose the deployment nodes:
Single‑node deployment: select one node.
Multi‑node deployment: select multiple nodes; the platform automatically applies tensor‑parallel and pipeline‑parallel strategies.
Select an inference engine based on the hardware (a quick way to check what a node has is sketched after this list):
Single‑node GPU – recommended engine: SGLang.
Multi‑node GPU – recommended engine: vLLM.
CPU‑only nodes – vLLM (CPU‑only) is supported.
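If you are unsure whether a node counts as a GPU node, nvidia-smi (shipped with the NVIDIA driver; a general hardware check, not an OpenStation command) reports the installed GPUs and their memory:

# Lists each GPU's name and total memory; an error here usually means no usable NVIDIA GPU,
# in which case the CPU-only vLLM option is the one to pick.
nvidia-smi --query-gpu=name,memory.total --format=csv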
After confirming the configuration, click Submit. OpenStation will provision the service, and upon completion it generates an API endpoint and an API‑key.
Step 3: Monitor Deployment and Use the Service
On the Model Service page you can view deployment progress and logs. When the status shows completed, the platform provides the API URL and the API‑key. These credentials can be used directly with client tools such as CherryStudio or ChatBox to start querying the DeepSeek‑R1 model.
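Because the service exposes an OpenAI‑compatible interface, any OpenAI‑style client can also call it directly. A minimal sketch with curl, assuming the standard /v1/chat/completions path; the endpoint, key, and model name below are placeholders to be replaced with the values shown on the Model Service page:

# Hypothetical endpoint, key, and model name; substitute what OpenStation generated.
curl http://YOUR_SERVER_IP:PORT/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
        "model": "DeepSeek-R1",
        "messages": [{"role": "user", "content": "Explain what OpenStation does in one sentence."}]
      }'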
Integrate DeepSeek‑R1 with a Local Knowledge Base (CherryStudio)
After the model is running, CherryStudio can call it as a backend. The integration steps are:
Add a new provider in CherryStudio, enter the provider name, and confirm.
Enter the API‑key and API address obtained from OpenStation.
In the model selection dialog, choose OpenStation and click Add to complete the model registration.
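Under the hood, model registration in such clients typically queries the provider's model list. You can reproduce that check yourself, assuming the standard OpenAI‑compatible /v1/models path (endpoint and key are placeholders, as above):

# Should return a JSON list that includes the deployed DeepSeek-R1 service.
curl -H "Authorization: Bearer YOUR_API_KEY" http://YOUR_SERVER_IP:PORT/v1/models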
For detailed configuration, refer to the official OpenStation user manual and the CherryStudio integration guide:
https://gitee.com/fastaistack/OpenStation/blob/master/docs/OpenStation用户手册.md
https://gitee.com/fastaistack/OpenStation/blob/master/docs/OpenStation对接CherryStudio、Chatbox配置指南.md