How to Build AI‑Powered Java Apps with Helidon and LangChain4j
This article explains how Helidon 4.2 integrates the LangChain4j framework to simplify adding large‑language‑model capabilities, covering core features, Maven setup, configuration, component creation, dependency injection, annotations, custom tools, and sample applications such as a coffee‑shop assistant.
Introduction
The rise of large language models (LLMs) opens new possibilities for AI‑driven applications, but integrating these models into Java projects can be cumbersome. LangChain4j is a Java framework that streamlines AI development by providing a type‑safe, declarative API for interacting with LLM providers such as OpenAI, Cohere, and Hugging Face.
What is LangChain4j?
LangChain4j offers four core capabilities:
AI services : declarative, type‑safe APIs for model interaction.
Retrieval‑augmented generation (RAG) : external knowledge sources improve response quality.
Embeddings and vector search : support for embedding stores and similarity search.
Memory and context management : enable intelligent, stateful conversations.
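The first capability, AI services, is the heart of the framework: an AI service is an annotated interface that LangChain4j implements at runtime. A minimal sketch (the Assistant interface, the "demo" key, and the model name are illustrative):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class AiServiceSketch {

    // A declarative, type-safe AI service: LangChain4j generates the implementation.
    interface Assistant {
        @SystemMessage("You are a concise assistant.")
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey("demo")             // illustrative key
                .modelName("gpt-4o-mini")
                .build();

        // Bind the model to the interface; calls to chat() are routed to the LLM.
        Assistant assistant = AiServices.create(Assistant.class, model);
        System.out.println(assistant.chat("What is Helidon?"));
    }
}
```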
Manually wiring LangChain4j requires explicit component configuration, dependency management, and injection handling. Helidon’s integration module addresses these pain points.
How Helidon Simplifies LangChain4j Integration
Helidon 4.2 introduces a preview feature that provides seamless LangChain4j integration, allowing developers to retain Helidon’s programming model while reducing boilerplate. The integration brings several advantages:
Helidon Inject support : automatically creates LangChain4j components and registers them with the Helidon service registry.
Convention over configuration : sensible defaults minimise repetitive code.
Declarative AI services : annotations define AI services in a clear, structured way.
CDI compatibility : components cooperate smoothly within Helidon MP.
These features markedly lower the complexity of adding AI to Helidon applications.
Configuring LangChain4J in Helidon
Add the following Maven dependency to enable LangChain4j:
<dependency>
    <groupId>io.helidon.integrations.langchain4j</groupId>
    <artifactId>helidon-integrations-langchain4j</artifactId>
</dependency>
Configure the annotation processor in pom.xml:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>io.helidon.codegen</groupId>
                <artifactId>helidon-codegen-apt</artifactId>
            </path>
            <path>
                <groupId>io.helidon.integrations.langchain4j</groupId>
                <artifactId>helidon-integrations-langchain4j-codegen</artifactId>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
If you use a different LLM provider, add the corresponding provider dependency (e.g., OpenAI or Ollama). This modular approach keeps the application lightweight.
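For example, an OpenAI provider dependency might look like the following; the coordinates are an assumption based on the integration's naming pattern, so verify them against the Helidon documentation:

```xml
<dependency>
    <groupId>io.helidon.integrations.langchain4j.providers</groupId>
    <artifactId>helidon-integrations-langchain4j-providers-open-ai</artifactId>
</dependency>
```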
Creating AI Components in Helidon
LangChain4j supplies public API classes that represent AI components, for example:
OpenAiChatModel / OllamaChatModel : chat models.
InMemoryEmbeddingStore : an in‑memory vector store.
EmbeddingStoreContentRetriever : a content retriever.
EmbeddingStoreIngestor : a data ingestor.
Helidon can automatically create a subset of these components. Supported auto‑created components include:
LangChain4j Core : EmbeddingStoreContentRetriever, MessageWindowChatMemory.
OpenAI : OpenAiChatModel, OpenAiStreamingChatModel, OpenAiEmbeddingModel, OpenAiImageModel, OpenAiLanguageModel, OpenAiModerationModel.
Ollama : OllamaChatModel, OllamaStreamingChatModel, OllamaEmbeddingModel, OllamaLanguageModel.
Cohere : CohereEmbeddingModel, CohereScoringModel.
Oracle : OracleEmbeddingStore.
Example application.yaml to enable an OpenAI chat model:
langchain4j:
  open-ai:
    chat-model:
      enabled: true
      api-key: "demo"
      model-name: "gpt-4o-mini"
The enabled flag must be true for the component to be instantiated; setting it to false disables creation while preserving the configuration for future use.
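Other providers follow the same configuration pattern. For a local Ollama model, the equivalent keys would look like this (a sketch; the base-url and model-name values are illustrative):

```yaml
langchain4j:
  ollama:
    chat-model:
      enabled: true
      base-url: "http://localhost:11434"
      model-name: "llama3"
```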
To register custom components, use the Supplier Factory pattern:
@Service.Singleton
@Service.Named("MyChatModel")
class ChatModelFactory implements Supplier<ChatLanguageModel> {
    @Override
    public ChatLanguageModel get() {
        return OpenAiChatModel.builder()
                .apiKey("demo")
                .build();
    }
}
The @Service.Named("MyChatModel") annotation gives the component a name that can be referenced later.
Using AI Components
Helidon Inject lets you inject AI components directly into services:
@Service.Singleton
public class MyService {
    private final ChatLanguageModel chatModel;

    @Service.Inject
    public MyService(ChatLanguageModel chatModel) {
        this.chatModel = chatModel;
    }
}
Named injection:
@Service.Inject
public MyService(@Service.Named("MyChatModel") ChatLanguageModel chatModel) {
    this.chatModel = chatModel;
}
Manual lookup from the service registry:
var chatModel = Services.get(OpenAiChatModel.class);
AI Services
An AI service typically combines several components:
Chat model for user interaction.
Embedding store for data persistence.
Embedding model for vector generation and retrieval.
Chat memory to maintain conversational context.
LangChain4j exposes the following interfaces:
dev.langchain4j.model.chat.ChatLanguageModel
dev.langchain4j.model.chat.StreamingChatLanguageModel
dev.langchain4j.memory.ChatMemory
dev.langchain4j.memory.chat.ChatMemoryProvider
dev.langchain4j.model.moderation.ModerationModel
dev.langchain4j.rag.content.retriever.ContentRetriever
dev.langchain4j.rag.RetrievalAugmentor
Annotations control which component is used. Key annotations include:
@Ai.ChatModel : selects a chat model; mutually exclusive with @Ai.StreamingChatModel.
@Ai.StreamingChatModel : selects a streaming chat model; exclusive with @Ai.ChatModel.
@Ai.ChatMemory : provides a chat memory; exclusive with @Ai.ChatMemoryWindow and @Ai.ChatMemoryProvider.
@Ai.ChatMemoryWindow : adds a MessageWindowChatMemory with a fixed window size; exclusive with the other memory annotations.
@Ai.ChatMemoryProvider : supplies a custom memory provider; exclusive with the other memory annotations.
@Ai.ModerationModel : registers a moderation model.
@Ai.ContentRetriever : registers a content retriever; exclusive with @Ai.RetrievalAugmentor.
@Ai.RetrievalAugmentor : registers a retrieval augmentor; exclusive with @Ai.ContentRetriever.
Auto‑discovery can be disabled with @Ai.Service(autodiscovery=false), requiring explicit annotation of each component.
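Put together, a declarative AI service in Helidon can be a single annotated interface; Helidon generates the wiring from components in the service registry. A sketch (the interface name, window size, and system message are illustrative):

```java
import dev.langchain4j.service.SystemMessage;
import io.helidon.integrations.langchain4j.Ai;

// With auto-discovery on, Helidon resolves the chat model from the service registry.
@Ai.Service
@Ai.ChatMemoryWindow(10)   // keep a sliding window of the last 10 messages
public interface ChatAiService {

    @SystemMessage("You answer questions about the coffee shop menu.")
    String chat(String question);
}
```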
Tools: Extending AI Capabilities with Custom Logic
LangChain4j tools let an LLM invoke external functions during a conversation. Annotate a method with @Tool (or @Ai.Tool in Helidon MP) and the framework makes it callable by the model.
@Service.Singleton
public class OrderService {

    @Tool("Get order details by order number")
    public Order getOrderDetails(String orderNumber) {
        // business logic
    }
}
Sample Applications
Several example projects illustrate different aspects of using LangChain4j with Helidon.
Coffee Shop Assistant
The coffee‑shop assistant demo shows how to build an AI‑driven assistant for a café. It can answer menu questions, give recommendations, and create orders, using a JSON‑backed embedding store. The demo highlights:
Integration with an OpenAI chat model.
Use of embedding models, stores, ingestors, and content retrievers.
Helidon Inject for dependency injection.
JSON‑initialised embedding store.
Callback functions to enrich interaction.
Both Helidon SE and Helidon MP versions are provided.
Hands‑On Tutorial
A step‑by‑step tutorial walks you through building the coffee‑shop assistant from scratch.
Useful Resources
Helidon LangChain4J documentation
LangChain4J documentation
Helidon Inject documentation
JakartaEE China Community