How to Build a Spring AI Hello World with Ollama and DeepSeek Locally
This step‑by‑step tutorial shows how to install Ollama, pull the DeepSeek‑R1 model, create a Spring Boot project with the Spring AI Ollama starter, code a ChatController, and test a local AI "Hello World" integration, illustrating AI‑enhanced backend development.
Overview
Spring AI 1.0 has been released; this guide demonstrates how to build a simple "Hello World" AI application by installing Ollama, pulling the DeepSeek‑R1 model, creating a Spring Boot project, writing a ChatController, and testing the interaction.
Step 1: Install Ollama
Download Ollama from https://ollama.com/download, install it to a custom directory (e.g., D:\Ollama), and set the model directory environment variable so models are stored there:
<code>setx OLLAMA_MODELS "D:\Ollama\models" /M</code>

Verify the installation with <code>ollama -v</code>, which should print the version (0.5.11 at the time of writing).
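The setx command above is Windows-specific. On Linux or macOS, a rough equivalent (assuming a bash or zsh shell; the path here is illustrative, not from the tutorial) is:

```shell
# Point Ollama at a custom model directory (hypothetical path);
# add this line to ~/.bashrc or ~/.zshrc to make it persistent
export OLLAMA_MODELS="$HOME/ollama/models"
```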
Step 2: Install DeepSeek
From the Ollama site, pull the lightweight 1.5b DeepSeek‑R1 model (1.5 billion parameters):
<code>ollama pull deepseek-r1:1.5b</code>

Run the model locally with:

<code>ollama run deepseek-r1:1.5b</code>

Step 3: Create Spring AI Project
Generate a Spring Boot project with Spring Initializr (Java 17) and replace the default OpenAI starter with the Ollama starter in pom.xml:
<code><dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-ollama</artifactId>
</dependency>
</code>

Configure application.yml to point to the local Ollama server and the DeepSeek model:
<code>spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: deepseek-r1:1.5b
server:
  servlet:
    encoding:
      charset: UTF-8
      enabled: true
      force: true
</code>

Note that the model is selected via spring.ai.ollama.chat.options.model, and in Spring Boot 3 (required by Spring AI 1.0) the response-encoding settings live under server.servlet.encoding.

Step 4: Implement Chat Controller
Use Spring AI’s ChatClient to forward user messages to the model:
<code>package com.myai.demo.controller;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/ai")
public class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder bound to the Ollama model
    public ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam("message") String message) {
        try {
            // Forward the user's message to the model and return its reply
            return chatClient.prompt().user(message).call().content();
        } catch (Exception e) {
            // Surface the error message rather than a bare "Exception" string
            return "Error calling model: " + e.getMessage();
        }
    }
}
</code>

Step 5: Test and Summary
Run the Spring Boot application and send a request to /ai/chat?message=hello. The model replies with a friendly greeting, confirming that the "Hello World" AI integration works.
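A single word like "hello" works as-is, but messages containing spaces or non-ASCII text must be URL-encoded before they go into the query string. A minimal sketch of building a safe request URL (the ChatUrl class name and port 8080 are assumptions for illustration, not part of the tutorial):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ChatUrl {

    // Builds the full request URL for the /ai/chat endpoint,
    // encoding the message so spaces and non-ASCII characters survive
    static String chatUrl(String message) {
        return "http://localhost:8080/ai/chat?message="
                + URLEncoder.encode(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Spaces become '+', CJK text becomes percent-encoded UTF-8 bytes
        System.out.println(chatUrl("hello world"));
        System.out.println(chatUrl("你好"));
    }
}
```

Any HTTP client (a browser, curl, or java.net.http.HttpClient) can then issue the encoded request against the running application.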
This tutorial shows that a Java backend can interact with a locally hosted large language model, opening many possibilities for AI‑enhanced applications.
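For debugging, the Spring layer can also be bypassed entirely: Ollama itself exposes an HTTP API on port 11434. A request body like the following, POSTed to http://localhost:11434/api/generate, returns a completion directly (a sketch based on Ollama's documented API; "stream": false requests a single JSON response instead of a token stream):

```json
{
  "model": "deepseek-r1:1.5b",
  "prompt": "hello",
  "stream": false
}
```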