Understanding LLM API Types and Usage in LangChain4j
This article explains the different low‑level LLM API types in LangChain4j, including LanguageModel, ChatLanguageModel, and other model interfaces, and shows how to create and combine ChatMessage objects for multi‑turn conversations.
LLM API Types in LangChain4j
LangChain4j provides two main interfaces for interacting with large language models.
LanguageModel
A simple API that takes a String prompt and returns a String response. This interface is being deprecated in favor of the chat‑based API.
ChatLanguageModel
The preferred API. It accepts one or more ChatMessage objects and returns a Response<AiMessage>. ChatMessage can contain plain text and, for models that support multimodal input (e.g., OpenAI gpt-4o-mini, Google gemini-1.5-pro), optional images.
Higher‑level constructs such as Chain and AiServices are built on top of ChatLanguageModel.
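As a sketch of that layering, here is a minimal AiServices example. It assumes the langchain4j and langchain4j-open-ai modules are on the classpath and that an OPENAI_API_KEY environment variable is set; the Assistant interface is an illustrative application-defined type, not part of the library.

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class AiServicesSketch {

    // A hypothetical application-defined interface; AiServices
    // generates an implementation backed by the chat model.
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Assistant assistant = AiServices.create(Assistant.class, model);

        String answer = assistant.chat("What is LangChain4j?");
        System.out.println(answer);
    }
}
```

The key design point: your code talks to a plain Java interface, and AiServices translates each call into ChatLanguageModel invocations behind the scenes.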
Additional Model Types
EmbeddingModel: converts text to an Embedding vector.
ImageModel: generates or edits images.
ModerationModel: checks text for harmful content.
ScoringModel: scores or ranks multiple texts against a query, useful for RAG scenarios.
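An EmbeddingModel can be tried without any API key using the in-process all-MiniLM-L6-v2 model. This sketch assumes the langchain4j-embeddings-all-minilm-l6-v2 dependency is on the classpath; note that the class's package has moved between LangChain4j versions.

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.output.Response;

public class EmbeddingSketch {
    public static void main(String[] args) {
        // Runs fully in-process; no API key or network call needed.
        EmbeddingModel model = new AllMiniLmL6V2EmbeddingModel();

        Response<Embedding> response = model.embed("Hello, LangChain4j");
        Embedding embedding = response.content();

        // all-MiniLM-L6-v2 produces 384-dimensional vectors.
        System.out.println(embedding.dimension());
    }
}
```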
ChatLanguageModel Interface
public interface ChatLanguageModel {
String generate(String userMessage);
Response<AiMessage> generate(ChatMessage... messages);
Response<AiMessage> generate(List<ChatMessage> messages);
// ... other overloads
}
The generate(String) method is a convenience wrapper that internally creates a UserMessage. The core methods accept ChatMessage instances and return a Response containing the AiMessage plus metadata:
TokenUsage: counts of input and output tokens, useful for cost estimation.
FinishReason: an enum indicating why generation stopped (e.g., STOP).
ChatMessage Types
UserMessage: originates from the end user or the application; can contain text and optionally images.
AiMessage: model‑generated reply; may contain plain text or a ToolExecutionRequest.
ToolExecutionResultMessage: result of a tool execution request.
SystemMessage: developer‑defined system prompt that guides model behavior; typically placed at the start of a conversation.
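Each message type has a static from(...) factory. A sketch of constructing all four (the tool id and tool name passed to ToolExecutionResultMessage are made-up illustrative values):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.ToolExecutionResultMessage;
import dev.langchain4j.data.message.UserMessage;

public class MessageTypesSketch {
    public static void main(String[] args) {
        SystemMessage system = SystemMessage.from("You are a concise assistant.");
        UserMessage user = UserMessage.from("What is the weather in Shanghai?");
        AiMessage ai = AiMessage.from("Let me check that for you.");

        // Result of a tool execution the model requested; the id "call-1"
        // and tool name "getWeather" are hypothetical.
        ToolExecutionResultMessage toolResult =
                ToolExecutionResultMessage.from("call-1", "getWeather", "22°C, sunny");

        System.out.println(system.text());
        System.out.println(ai.text());
        System.out.println(toolResult.text());
    }
}
```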
Creating a UserMessage
Common factories:
UserMessage msg = new UserMessage("Hi");
// or
UserMessage msg = UserMessage.from("Hi");
Single‑turn Interaction
Pass a single UserMessage to generate. The method returns Response<AiMessage>, from which you can obtain the content via .content() and inspect token usage.
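A sketch of a single-turn call, again assuming the OpenAI module, the gpt-4o-mini model name, and an OPENAI_API_KEY environment variable:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.output.TokenUsage;

public class SingleTurnSketch {
    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        Response<AiMessage> response = model.generate(UserMessage.from("Say hello"));

        // The reply text itself:
        System.out.println(response.content().text());

        // Metadata: token counts for cost estimation, and why generation stopped.
        TokenUsage usage = response.tokenUsage();
        System.out.println(usage.inputTokenCount() + " in / " + usage.outputTokenCount() + " out");
        System.out.println(response.finishReason()); // typically STOP
    }
}
```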
Multi‑turn Conversations
LLMs are stateless; to maintain context you must supply the full message history on each call. Example:
UserMessage first = UserMessage.from("Hello, my name is JavaEdge");
AiMessage reply1 = model.generate(first).content(); // "JavaEdge, how can I help?"
UserMessage second = UserMessage.from("What is my name?");
AiMessage reply2 = model.generate(first, reply1, second).content(); // "JavaEdge"
Manually managing the list quickly becomes cumbersome. LangChain4j offers a ChatMemory abstraction that stores and retrieves the conversation history automatically.
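The same two-turn exchange can be sketched with ChatMemory; MessageWindowChatMemory keeps only the most recent N messages. Model construction mirrors the earlier examples and is an assumption (OpenAI module, OPENAI_API_KEY set).

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChatMemorySketch {
    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // Sliding window: keeps at most the 10 most recent messages.
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);

        memory.add(UserMessage.from("Hello, my name is JavaEdge"));
        AiMessage reply = model.generate(memory.messages()).content();
        memory.add(reply);

        memory.add(UserMessage.from("What is my name?"));
        AiMessage answer = model.generate(memory.messages()).content();

        // The model can now recall the name from the stored history.
        System.out.println(answer.text());
    }
}
```

Note that each turn still passes the full history (memory.messages()) to the model; ChatMemory only automates storing and windowing it.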
JavaEdge
First‑line development experience at multiple leading tech firms; now a software architect at a Shanghai state‑owned enterprise and founder of Programming Yanxuan. Nearly 300k followers online; expertise in distributed system design, AIGC application development, and quantitative finance investing.
