Deploy DeepSeek‑r1 Locally with a One‑Click Ollama Script

This guide walks you through a Bash script that automatically checks for Ollama, installs it if missing, lets you choose a DeepSeek‑r1 model size, starts the Ollama service, and runs the selected model locally, complete with usage examples and a token‑cost note.

Dunmao Tech Hub

What are Ollama and DeepSeek?

Ollama is an open‑source tool that simplifies downloading, managing, and running large language models locally via the command line. DeepSeek‑r1 is a family of large language models available in several sizes (1.5B, 7B, 8B, 14B, 32B, 70B, and 671B parameters), so you can pick a variant that fits your available compute and memory.

Script Overview

#!/bin/bash

# Check if Ollama is installed
check_ollama_installed() {
  if ! command -v ollama &>/dev/null; then
    echo "Ollama is not installed; installing..."
    install_ollama
  else
    echo "Ollama is already installed."
  fi
}

# Install Ollama
install_ollama() {
  echo "Installing Ollama via curl..."
  curl -fsSL https://ollama.com/install.sh | sh
  echo "Ollama installation complete."
}

# Prompt user to select a DeepSeek‑r1 model version
echo "Select the deepseek-r1 model version to deploy:"
echo "1) deepseek-r1 1.5B"
echo "2) deepseek-r1 7B"
echo "3) deepseek-r1 8B"
echo "4) deepseek-r1 14B"
echo "5) deepseek-r1 32B"
echo "6) deepseek-r1 70B"
echo "7) deepseek-r1 671B"
read -p "Enter an option (1-7): " model_choice

# Map choice to model_version
case $model_choice in
    1) model_version="deepseek-r1:1.5b" ;;
    2) model_version="deepseek-r1:7b" ;;
    3) model_version="deepseek-r1:8b" ;;
    4) model_version="deepseek-r1:14b" ;;
    5) model_version="deepseek-r1:32b" ;;
    6) model_version="deepseek-r1:70b" ;;
    7) model_version="deepseek-r1:671b" ;;
    *) echo "Invalid choice; defaulting to deepseek-r1 1.5B"
       model_version="deepseek-r1:1.5b" ;;
esac

# Ensure Ollama is installed
check_ollama_installed

echo "Starting the ollama service..."
nohup ollama serve > ollama_output.log 2>&1 &

echo "Waiting 10 seconds for the service to finish starting..."
sleep 10

# Run the selected model
echo "Running the ${model_version} model..."
ollama run "${model_version}"

Detailed Explanation of the Script

Check and install Ollama: The script first checks whether the ollama command exists on the PATH. If not, it downloads and runs the official install script via curl -fsSL https://ollama.com/install.sh | sh.
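This check-then-install pattern generalizes to any dependency. A minimal sketch (the helper name have_cmd is ours, not part of the script):

```shell
# Return 0 if the named command is available on PATH, non-zero otherwise.
have_cmd() {
  command -v "$1" >/dev/null 2>&1
}

# Example: `sh` exists on any POSIX system; a made-up name does not.
have_cmd sh && echo "sh: found"
have_cmd no_such_tool_xyz || echo "no_such_tool_xyz: missing"
```

Using command -v (rather than which) keeps the check portable and built into the shell.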

Model version selection: The user is prompted to choose a DeepSeek‑r1 variant; the chosen model tag is stored in the model_version variable.
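The menu-to-tag mapping can be factored into a small function for clarity; note that Ollama model tags are lowercase (e.g. deepseek-r1:7b), matching the curl example later in this article. A sketch (map_model is our name, not part of the original script):

```shell
# Map a menu choice (1-7) to an Ollama model tag; anything else falls
# back to the smallest variant.
map_model() {
  case "$1" in
    1) echo "deepseek-r1:1.5b" ;;
    2) echo "deepseek-r1:7b" ;;
    3) echo "deepseek-r1:8b" ;;
    4) echo "deepseek-r1:14b" ;;
    5) echo "deepseek-r1:32b" ;;
    6) echo "deepseek-r1:70b" ;;
    7) echo "deepseek-r1:671b" ;;
    *) echo "deepseek-r1:1.5b" ;;
  esac
}

map_model 2   # prints deepseek-r1:7b
```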

Start the Ollama service: The script launches the Ollama daemon in the background with nohup ollama serve > ollama_output.log 2>&1 &, so it keeps running after the terminal closes, with all output captured in ollama_output.log.
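The same detach-and-log pattern can be demonstrated with a harmless command standing in for ollama serve, so it runs anywhere:

```shell
# Launch a background job, redirecting stdout and stderr to a log file.
# In the real script the daemon keeps running; here we wait so the log
# is complete before reading it.
nohup sh -c 'echo service started' > demo_output.log 2>&1 &
pid=$!
wait "$pid"
cat demo_output.log   # show the captured output
```

The 2>&1 redirects stderr into the same log as stdout, and the trailing & puts the job in the background so the script can continue.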

Wait for service startup: A sleep 10 pause gives the service time to become ready before the model is launched.
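A fixed sleep works, but a readiness poll is more robust: retry a probe until it succeeds or a retry budget runs out. A sketch (wait_until_ready is our helper; for Ollama the probe could be something like curl -s http://localhost:11434/ >/dev/null, assuming the default port):

```shell
# Poll a probe command until it succeeds, retrying once per second up to
# a given number of tries. Returns non-zero if the budget is exhausted.
wait_until_ready() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

wait_until_ready 30 true && echo "service is ready"
```

This returns as soon as the service answers instead of always paying the full wait, and fails loudly if the service never comes up.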

Launch the model: Finally, ollama run "${model_version}" pulls the selected DeepSeek‑r1 model if it is not already present and starts an interactive session with it.

Running the Script

Save the script content to a file named deploy_deepseek.sh.

Make the script executable with chmod +x deploy_deepseek.sh, then run it with ./deploy_deepseek.sh. During execution you will be prompted to choose a model version, after which the script handles installation, service startup, and model pulling automatically. Once the model is ready you can test it with a simple HTTP request, for example:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?"
}'
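By default the generate endpoint streams its reply as one JSON object per line, with the generated text split across successive "response" fields (passing "stream": false in the request body returns a single object instead). A rough shell sketch of reassembling the text, run here against sample output so it works without a server (a real pipeline would typically use jq):

```shell
# Two sample stream lines, shaped like Ollama's newline-delimited JSON.
sample='{"model":"deepseek-r1:1.5b","response":"Hello","done":false}
{"model":"deepseek-r1:1.5b","response":" world","done":true}'

# Pull out each "response" field and concatenate the pieces.
printf '%s\n' "$sample" \
  | sed -n 's/.*"response":"\([^"]*\)".*/\1/p' \
  | tr -d '\n'
# -> Hello world
```

The sed capture here is deliberately naive (it breaks on escaped quotes); it is only meant to show the shape of the stream.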

Note: If you do not have high‑end hardware, purchasing tokens from the official DeepSeek service may be more cost‑effective than running the model locally.

Conclusion

The provided Bash script streamlines local deployment of any DeepSeek‑r1 variant by handling Ollama installation, service management, version selection, and model launch, saving time and reducing the complexity of manual setup.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI, model deployment, DeepSeek, Local Deployment, Ollama, shell script
Written by

Dunmao Tech Hub

Sharing selected technical articles synced from CSDN. Follow us on CSDN: Dunmao.
