Tag: AI model deployment

7 articles under this tag.

Full-Stack Internet Architecture
Feb 24, 2025 · Artificial Intelligence

Deploying the DeepSeek Large Language Model Locally with Ollama on Windows

This guide explains how to install Ollama on a Windows machine, configure its environment, and use it to download and run the DeepSeek‑R1 1.5B large language model locally, enabling offline AI interactions without relying on remote servers.

AI model deployment · DeepSeek · Local LLM
0 likes · 4 min read
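Once Ollama is installed and the model is pulled, it serves an HTTP API on `localhost:11434` by default. The sketch below, using only the Python standard library, shows one way to query the locally running DeepSeek‑R1 1.5B model; it assumes the Ollama service is up and that `ollama pull deepseek-r1:1.5b` has already completed.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a stream of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "deepseek-r1:1.5b") -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask("Why is the sky blue?")` returns the model's answer entirely offline, with no remote API involved.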
macrozheng
Feb 22, 2025 · Artificial Intelligence

Choosing the Right DeepSeek‑R1 Model: Hardware Needs & Use Cases Explained

This guide compares DeepSeek‑R1’s 1.5B/7B/8B, 14B/32B, and 70B/671B versions, detailing their characteristics, typical applications, and the specific CPU, memory, and GPU specifications required for local deployment, helping you select the optimal model for your resources.

AI model deployment · DeepSeek · Large Language Models
0 likes · 7 min read
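A quick first-pass sizing rule (not the article's exact figures): memory for the weights alone is roughly parameter count times bytes per parameter, and runtime overhead (KV cache, activations) adds more on top. A minimal sketch of that back-of-the-envelope estimate:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights, in GB.

    params_billion: parameter count in billions (e.g. 7 for DeepSeek-R1 7B).
    bits_per_weight: storage precision (16 for FP16, 4 for Q4 quantization).
    """
    bytes_per_weight = bits_per_weight / 8
    # 1e9 params * bytes-per-param / 1e9 bytes-per-GB cancels out to this:
    return params_billion * bytes_per_weight

# Illustrative estimates (weights only; expect extra runtime overhead on top):
for size in (1.5, 7, 14, 32, 70):
    fp16 = weight_memory_gb(size, 16)
    q4 = weight_memory_gb(size, 4)
    print(f"{size:>5}B  FP16: {fp16:6.1f} GB   4-bit: {q4:5.1f} GB")
```

This makes the article's tiers intuitive: a 4-bit 7B model fits in a few GB of RAM, while 70B at FP16 needs well over 100 GB and is out of reach for typical local hardware.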
ByteDance Cloud Native
Feb 21, 2025 · Artificial Intelligence

Deploy DeepSeek‑R1‑Distill on Volcengine CPU Cloud for Low‑Cost AI Inference

This guide walks you through deploying the DeepSeek‑R1‑Distill model on Volcengine CPU ECS instances, covering use‑case scenarios, recommended server types, Docker setup, environment configuration, and verification steps to achieve cost‑effective, high‑compatibility AI inference.

AI model deployment · CPU inference · DeepSeek
0 likes · 6 min read
JD Tech Talk
Feb 11, 2025 · Artificial Intelligence

Step-by-Step Guide to Deploying DeepSeek Locally with Cherry Studio

This guide walks you through registering on SiliconFlow, selecting DeepSeek models, installing Cherry Studio, configuring API keys, setting up the environment, and testing the AI assistant, enabling a fully featured local deployment without high‑end hardware.

AI model deployment · Artificial Intelligence · Cherry Studio
0 likes · 6 min read
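Under the hood, Cherry Studio talks to SiliconFlow over an OpenAI-compatible chat API using the key you configure. The sketch below builds the same kind of request directly; the base URL (`https://api.siliconflow.cn/v1`) and model name are assumptions to verify against your SiliconFlow dashboard.

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, user_message: str):
    """Build URL, headers, and body for an OpenAI-compatible chat completion call.

    The base URL below is an assumption -- confirm it in your SiliconFlow account.
    """
    url = "https://api.siliconflow.cn/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # the key configured in Cherry Studio
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": user_message}]}
    return url, headers, body

def chat(api_key: str, model: str, user_message: str) -> str:
    """Send one chat turn and return the assistant's reply text."""
    url, headers, body = build_chat_request(api_key, model, user_message)
    req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

This is also a handy way to verify an API key works before wiring it into Cherry Studio.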
Architect
Feb 2, 2025 · Artificial Intelligence

Deploying DeepSeek‑R1 Locally with Ollama and Accessing It via Spring Boot and Spring AI

This guide explains how to install Ollama, download and run the open‑source DeepSeek‑R1 language model locally, configure GPU acceleration, and integrate the model into a Spring Boot application using Spring AI to provide an API service for AI inference.

AI model deployment · DeepSeek-R1 · GPU Acceleration
0 likes · 12 min read
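On the Spring side, the integration described above boils down to pointing the Spring AI Ollama starter at the local server. A minimal configuration sketch, with property names taken from Spring AI's Ollama starter (verify them against the Spring AI version you use):

```yaml
# application.yml -- minimal sketch for the Spring AI Ollama starter;
# property names assume spring-ai-ollama-spring-boot-starter
spring:
  ai:
    ollama:
      base-url: http://localhost:11434   # Ollama's default endpoint
      chat:
        options:
          model: deepseek-r1:7b          # any DeepSeek-R1 tag already pulled into Ollama
```

With this in place, the application's chat client calls the locally served model, and the Spring Boot controller simply exposes it as an HTTP API.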
Tencent Cloud Developer
Feb 2, 2025 · Artificial Intelligence

Deploying DeepSeek-R1 Models on Tencent Cloud HAI Platform

Deploy DeepSeek‑R1 models on Tencent Cloud HAI in just three minutes: log in, create an application, and access the model via ChatbotUI or JupyterLab, with no GPUs to purchase and no environments to configure. Integrated services such as Cloud Studio and Object Storage extend the setup toward enterprise AI solutions.

AI model deployment · ChatbotUI · DeepSeek-R1
0 likes · 3 min read
360 Smart Cloud
Apr 15, 2024 · Artificial Intelligence

Fine‑Tuning Qwen‑14B Large Language Model: A Complete Guide

This article provides a comprehensive tutorial on fine‑tuning the Qwen‑14B large language model, covering the motivation, fine‑tuning concepts, step‑by‑step workflow, required code, DeepSpeed training parameters, testing scripts, and deployment using FastChat and the 360AI platform.

AI model deployment · DeepSpeed · FastChat
0 likes · 9 min read
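The DeepSpeed training parameters the tutorial covers live in a JSON config passed to the launcher. A hedged sketch of the kind of ZeRO Stage 3 config typically used for a 14B fine-tune; the key names are standard DeepSpeed options, but the values here are illustrative, not the article's:

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" }
  },
  "gradient_clipping": 1.0
}
```

Stage 3 shards the parameters, gradients, and optimizer states across GPUs, and CPU offload of the optimizer trades speed for the headroom a model this size usually needs.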