How to Build a Generative AI App with Ollama and Spring Boot

This guide walks through setting up Ollama for local large‑model serving, creating a Spring Boot project with Spring AI support, writing a unit test that queries the model, and adding the dependencies needed to bring AI integration into existing Java applications.

Programmer DD

To build a generative AI application you need to complete two parts:

AI large‑model service – either use a major provider’s API or self‑host; this article uses Ollama for self‑deployment.

Application construction – invoke the AI model’s capabilities within business logic; we use Spring Boot + Spring AI.

Ollama Installation and Usage

Visit the official website https://ollama.com/, then download, install, and start Ollama. Once it is running, pull the model used later in this guide with `ollama pull llama3.1`.
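For reference, Ollama serves a plain HTTP API on http://localhost:11434 by default, and Spring AI's starter is essentially a client for it. Below is a minimal sketch that talks to Ollama's /api/chat endpoint using only the JDK's HttpClient; the model name and question are assumptions for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaHttpDemo {

    // Build a JSON body in the shape Ollama's /api/chat endpoint expects:
    // {"model": "...", "messages": [{"role": "user", "content": "..."}], "stream": false}
    static String buildChatRequest(String model, String question) {
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + question + "\"}],"
             + "\"stream\":false}";
    }

    public static void main(String[] args) {
        String body = buildChatRequest("llama3.1", "What is Spring Boot good for?");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/chat"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (Exception e) {
            // This branch is hit when no local Ollama instance is listening on port 11434.
            System.out.println("Could not reach Ollama: " + e.getMessage());
        }
    }
}
```

This is only to show what the starter does under the hood; the rest of the guide uses Spring AI's typed client instead of raw HTTP.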

Ollama installation

Building the Spring Application

Use Spring Initializr (https://start.spring.io/) to create a Spring Boot project.

Select the Spring Web dependency and the Ollama dependency (Spring AI's Ollama support).

Spring Initializr selection

Click the Generate button to download the project.

Open the project with IntelliJ IDEA or any preferred IDE; the project structure appears as shown.

Project structure

Write a unit test to call the local Ollama service from the Spring Boot application.

import org.junit.jupiter.api.Test;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.OllamaChatModel;
import org.springframework.ai.ollama.api.OllamaModel;
import org.springframework.ai.ollama.api.OllamaOptions;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest(classes = DemoApplication.class)
class DemoApplicationTests {

    // Auto-configured by the spring-ai-ollama-spring-boot-starter dependency.
    @Autowired
    private OllamaChatModel chatModel;

    @Test
    void ollamaChat() {
        // Send a prompt to the locally running Ollama service.
        ChatResponse response = chatModel.call(
            new Prompt(
                "What is Spring Boot good for?", // original prompt: "Spring Boot适合做什么?"
                OllamaOptions.builder()
                    .withModel(OllamaModel.LLAMA3_1)
                    .withTemperature(0.4)
                    .build()
            )
        );
        System.out.println(response);
    }

}

Running the test produces output similar to:

ChatResponse [metadata={ id:, usage:{ promptTokens:17, generationTokens:275, totalTokens:292}, rateLimit: org.springframework.ai.chat.metadata.EmptyRateLimit@7b3feb26}, generations=[Generation[assistantMessage=AssistantMessage[messageType=ASSISTANT, toolCalls=[], textContent=Spring Boot is a Java-based rapid development framework, mainly used to create standalone, production-grade applications. It provides simplified configuration that lets developers quickly build and deploy web applications. ...]]]
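Instead of passing OllamaOptions on every call, the auto-configured OllamaChatModel can also take its defaults from application.properties. A minimal sketch using Spring AI's Ollama starter properties; the base URL is Ollama's default, and the model and temperature mirror the values used in the test above:

```properties
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3.1
spring.ai.ollama.chat.options.temperature=0.4
```

With these set, `chatModel.call(new Prompt("..."))` uses the configured model without per-call options.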

Conclusion

With this guide you have connected a Spring Boot application to an Ollama‑run AI model. The next step is to integrate this capability with your own business logic.
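As a sketch of that next step, the same OllamaChatModel can back a plain REST endpoint. This is illustrative only and not runnable on its own (it assumes the Spring Web and Ollama starters from this project are on the classpath); the /ai/chat path and message parameter are made-up names:

```java
import org.springframework.ai.ollama.OllamaChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    private final OllamaChatModel chatModel;

    // Spring injects the OllamaChatModel auto-configured by the Ollama starter.
    public ChatController(OllamaChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // GET /ai/chat?message=... returns the model's plain-text reply.
    @GetMapping("/ai/chat")
    public String chat(@RequestParam String message) {
        return chatModel.call(message);
    }
}
```
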

Possible questions

How to use other AI models? Visit Ollama’s Models page to choose a different model, pull it with `ollama pull`, and reference its name in OllamaOptions.

Ollama models page

How to add AI to an existing application? Open the project’s pom.xml and add the following dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>

Thus, for an existing project, adding the spring-ai-ollama-spring-boot-starter dependency is sufficient (spring-boot-starter-web will usually be present already).
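One caveat: the snippet above pins no versions. Projects generated by Spring Initializr get version management automatically, but an existing project typically needs to import Spring AI’s BOM in dependencyManagement. A sketch assuming Maven; `${spring-ai.version}` is a placeholder for whichever Spring AI release you use:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>${spring-ai.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```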

That’s all for today’s sharing.

Written by Programmer DD, a tinkering programmer and author of "Spring Cloud Microservices in Action".