LLM vs. ChatModel in LangChain: Choosing the Right Interface

This article explains LangChain's two core abstractions—LLM for simple text completion and ChatModel for multi‑turn conversational AI—detailing their input/output formats, practical code examples, and why ChatModel is generally preferred for modern dialogue applications.

BirdNest Tech Talk

LLM (Text‑Completion Model)

The LLM abstraction follows a straightforward "text‑in, text‑out" pattern: it accepts a single string and returns a single string, making it ideal for tasks that do not require conversation history, such as completing a sentence.

from langchain_openai import OpenAI

# OpenAI() defaults to a completion model such as "gpt-3.5-turbo-instruct"
llm = OpenAI()
response = llm.invoke("Once upon a time there was a mountain,")
# response might be "On the mountain there was a temple, and in the temple an old monk..."

While the LLM interface was central to early LangChain designs, most contemporary models expose richer chat‑completion capabilities, so the ChatModel abstraction is now recommended for new projects.

ChatModel (Chat‑Based Model)

The ChatModel abstraction is built around ChatMessage objects: it accepts a list of messages and returns an AIMessage. This structure naturally supports multi-turn dialogue and role-playing.

Input: a list of ChatMessage objects.

Output: an AIMessage object.

The three most common message types are:

SystemMessage: sets the AI's role, personality, and behavior rules; usually placed at the start of the list.

HumanMessage: represents the user's input.

AIMessage: contains the model's reply; also used to carry prior AI turns in the conversation history.

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

chat = ChatOpenAI()
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="How are you?"),
]
response = chat.invoke(messages)
# response is an AIMessage; response.content holds the reply text

Why Prefer ChatModel?

Structured dialogue: the message list inherently supports multi-turn interactions and role-based conversations.

Stronger capabilities: the most powerful models today (e.g., GPT-4, Claude 3) expose their full feature set, including function calling, through the chat interface.

Better performance: many providers optimize their services specifically for chat mode.
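The chat mode these providers optimize for is, on the wire, a JSON payload whose messages field is a list of role/content objects (OpenAI's convention uses the roles system, user, and assistant). A minimal sketch of that mapping, assuming a hypothetical helper to_chat_payload (not a LangChain function; LangChain does a translation like this internally when you call chat.invoke):

```python
def to_chat_payload(model: str, messages: list[tuple[str, str]]) -> dict:
    """Build the request body a chat-completions endpoint expects.

    Roles follow the OpenAI convention: "system", "user", "assistant".
    Illustrative only; LangChain performs a mapping like this for you.
    """
    return {
        "model": model,
        "messages": [{"role": role, "content": content} for role, content in messages],
    }

payload = to_chat_payload("gpt-4", [
    ("system", "You are a helpful assistant."),
    ("user", "How are you?"),
])
# payload["messages"] == [
#     {"role": "system", "content": "You are a helpful assistant."},
#     {"role": "user", "content": "How are you?"},
# ]
```

Seeing the payload makes the "structured dialogue" point concrete: each turn is just another role-tagged entry in the list.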

Conclusion

For simple, single‑turn text‑completion tasks, the LLM interface remains usable.

For any new application, especially those involving dialogue, the ChatModel interface is strongly recommended.

In the remainder of the chapter, examples demonstrate how to invoke both LLM and ChatModel, and how to build a basic conversation history using ChatModel.
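The conversation-history pattern can be sketched without any model calls. The Message class and extend_history helper below are illustrative stand-ins, not LangChain's actual classes; the point is only the pattern of folding each reply back into the list and re-sending the whole list on the next turn:

```python
from dataclasses import dataclass

# Minimal stand-ins for LangChain's message classes (illustration only).
@dataclass
class Message:
    role: str      # "system", "human", or "ai"
    content: str

def extend_history(history: list[Message], ai_reply: str, next_user_turn: str) -> list[Message]:
    """Append the model's last reply and the user's next message,
    so the full context can be re-sent on the next invoke."""
    return history + [Message("ai", ai_reply), Message("human", next_user_turn)]

history = [
    Message("system", "You are a helpful assistant."),
    Message("human", "How are you?"),
]
# After the model answers, fold its reply back into the history:
history = extend_history(history, "I'm doing well, thanks!", "Glad to hear it.")
# roles are now: system, human, ai, human
```

With real LangChain types the shape is the same: append the AIMessage returned by chat.invoke plus the next HumanMessage, then invoke again with the grown list.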

References

How to: initialize any model in one line – https://python.langchain.com/docs/how_to/model_init

How to: work with local models – https://python.langchain.com/docs/how_to/local_models

How to: cache model responses – https://python.langchain.com/docs/how_to/cache_model_responses

How to: get log probabilities – https://python.langchain.com/docs/how_to/get_logprobs

How to: create a custom chat model class – https://python.langchain.com/docs/how_to/custom_chat_model

How to: stream a response back – https://python.langchain.com/docs/how_to/chat_stream

How to: track token usage – https://python.langchain.com/docs/how_to/track_token_usage

How to: track response metadata across providers – https://python.langchain.com/docs/how_to/track_response_metadata

Written by

BirdNest Tech Talk

Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.
