Run DeepSeek R1 Locally for Free – Integrate AI into VSCode with LM Studio, Ollama, Jan

This guide shows how to set up the free, open‑source DeepSeek R1 large language model locally using LM Studio, Ollama, or Jan, choose the appropriate model size for your hardware, and integrate it into Visual Studio Code as a code‑assistant without any cost.


If you are looking for a powerful, free, open‑source AI model, the newly released DeepSeek R1 is a solid choice: it is comparable to GPT‑4, o1‑mini, and Claude 3.5, and reportedly outperforms them on some reasoning benchmarks.

Why is DeepSeek R1 generating so much buzz?

Free and open‑source: No subscription fees; you can chat at https://chat.deepseek.com.

Performance: Excels in logic, mathematics, and code generation tasks.

Multiple versions: Model sizes range from 1.5B to 70B parameters, allowing selection based on your PC’s capabilities.

Easy integration: Extensions like Cline or Roo Code can connect it to VSCode.

No cost to run locally: No token or API fees; a GPU is recommended for reasonable speed.

Important tips before you start

Save resources: Use smaller models (1.5B or 7B) or quantized versions on less powerful machines.

RAM calculator: Use LLM Calc to determine the minimum memory needed.

Privacy: Running locally keeps your data on your computer.

Cost: Local execution is free; the DeepSeek API is cheap if you need it.
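If you just want a quick back‑of‑envelope number before reaching for the calculator, the dominant cost is holding the quantized weights in memory. The sketch below is an assumption of mine (a simple weights‑plus‑overhead formula), not the exact method LLM Calc uses:

```python
def estimated_ram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough RAM needed for a local LLM: weight storage plus ~20%
    headroom for the KV cache and runtime. Back-of-envelope only --
    not the exact formula the LLM Calc tool uses.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# A 7B model at 4-bit quantization: roughly 4.2 GB
print(f"{estimated_ram_gb(7, bits_per_weight=4):.1f} GB")
# The same model at 16-bit precision: roughly 16.8 GB
print(f"{estimated_ram_gb(7, bits_per_weight=16):.1f} GB")
```

This is why quantized builds are the usual choice on ordinary machines: dropping from 16‑bit to 4‑bit weights cuts the memory footprint by about 4x.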

Choosing the right model version

1.5B parameters

Memory: ~4 GB

GPU: Entry‑level dedicated card (e.g., NVIDIA GTX 1050), integrated graphics, or a modern CPU

Use case: Simple tasks on ordinary PCs

7B parameters

Memory: ~8‑10 GB

GPU: Dedicated (e.g., NVIDIA GTX 1660 or better)

Use case: Intermediate tasks on better hardware

70B parameters

Memory: ~40 GB

GPU: High‑end (e.g., NVIDIA RTX 3090 or higher)

Use case: Complex tasks on powerful PCs

How to run DeepSeek R1 locally

1. Using LM Studio

Download and install LM Studio from its official website.

In LM Studio, go to the “Discover” tab, search for “DeepSeek R1”, and select the version compatible with your system (MLX for Apple silicon, GGUF for Windows/Linux).

Load the model via the “Local Models” section and click “Load”.

Start the local server in the “Developer” tab by enabling “Start Server”. The server will run at http://localhost:1234.

Proceed to step 4 to integrate with VSCode.
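Besides the VSCode extensions, you can talk to LM Studio's server directly: it exposes an OpenAI‑compatible chat‑completions endpoint. A minimal standard‑library sketch (the model ID is whatever name LM Studio shows for your download; "deepseek-r1" here is a placeholder):

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "deepseek-r1") -> dict:
    # OpenAI-style chat payload accepted by LM Studio's local server.
    # The model ID is a placeholder -- use the name shown in LM Studio.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

payload = build_chat_request("Write a Python hello world")
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except OSError:
    print("LM Studio server not reachable -- enable it in the Developer tab first")
```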

2. Using Ollama

Install Ollama from its website.

Pull the model in a terminal: ollama pull deepseek-r1. If you need a smaller variant, pick a tag from https://ollama.com/library/deepseek-r1 (e.g., ollama pull deepseek-r1:1.5b).

Start the server with: ollama serve. The model is then served at http://localhost:11434.

Proceed to step 4 to integrate with VSCode.
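Ollama also has its own native REST API, which is handy for a quick smoke test before wiring up VSCode. A minimal sketch against the /api/generate endpoint (stream=False asks for a single JSON response instead of a stream):

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "deepseek-r1") -> dict:
    # Payload for Ollama's native /api/generate endpoint.
    # stream=False returns one JSON object instead of line-delimited chunks.
    return {"model": model, "prompt": prompt, "stream": False}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(build_generate_request("Explain recursion in one line")).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["response"])
except OSError:
    print("Ollama not reachable -- run `ollama serve` first")
```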

3. Using Jan

Download and install Jan from its website.

Since Jan doesn’t list DeepSeek R1 directly, obtain the model from Hugging Face (search “unsloth gguf deepseek r1”) and download it via Jan.

Load the model in Jan and start its server, which runs at http://localhost:1337.

Proceed to step 4 to integrate with VSCode.
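To confirm Jan's server is up and the downloaded model is actually loaded, you can query its OpenAI‑compatible models endpoint. A small sketch, assuming Jan's default base URL from the step above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # Jan's default local server address

def models_endpoint(base_url: str) -> str:
    # OpenAI-compatible servers list available models at <base>/models.
    return base_url.rstrip("/") + "/models"

try:
    with urllib.request.urlopen(models_endpoint(BASE_URL), timeout=10) as resp:
        for model in json.load(resp).get("data", []):
            print(model.get("id"))
except OSError:
    print("Jan server not reachable -- start it inside Jan first")
```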

4. Integrate with VSCode

Install the Cline or Roo Code extension from the VSCode marketplace.

Open the extension’s settings, set the API provider to “LM Studio”, “Ollama”, or “Jan” as appropriate.

Enter the base URL (e.g., http://localhost:1234, http://localhost:11434, or http://localhost:1337) in the “Base URL” field.

If only one model is available, the Model ID field auto‑fills; otherwise, select the DeepSeek model you downloaded.

Click “Done” to finish the configuration.
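If the extension reports a connection error, the usual culprit is pointing it at the wrong port. The three default base URLs from the steps above can be checked in one go with a simple TCP probe:

```python
import socket

# Default local server addresses for each provider, per the steps above.
SERVERS = {
    "LM Studio": ("localhost", 1234),
    "Ollama": ("localhost", 11434),
    "Jan": ("localhost", 1337),
}

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVERS.items():
    status = "up" if is_listening(host, port) else "down"
    print(f"{name:9s} http://{host}:{port} -> {status}")
```

Whichever server shows "up" is the base URL to paste into the extension's settings.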

Conclusion: For anyone who wants a powerful AI without spending money, DeepSeek R1 combined with LM Studio, Ollama, or Jan lets you run the model locally and integrate it directly into Visual Studio Code.

Tags: Artificial Intelligence, VSCode, DeepSeek-R1, Ollama, Jan, LM Studio
Written by

21CTO

21CTO (21CTO.com) offers developers community, training, and services, making it your go‑to learning and service platform.
