Building AI Assistants with Eino: A Go Framework for Large‑Model Applications
This article introduces Eino, an open‑source Golang framework for large‑model AI applications. It explains Eino's core capabilities, walks through building a simple AI assistant with message templates and chat‑model integration, and shows how to extend the system with tools and a modular architecture for future expansion.
What is Eino?
Eino is an open‑source Golang large‑model application framework designed to help developers efficiently build AI‑driven services. It is the primary full‑code development framework for large‑model applications inside ByteDance, already used by products such as Doubao, Douyin, and Coze.
Eino capabilities
Provides a set of component abstractions and implementations for easy reuse and composition when building LLM applications.
Offers a powerful orchestration layer that handles type checking, stream processing, concurrency management, aspect injection, and option assignment.
Delivers a clean, well‑designed API.
Continuously expands best‑practice collections through flows and examples.
Includes DevOps tools covering visual debugging, online tracing, and the entire development lifecycle.
These capabilities enable Eino to standardize, simplify, and accelerate each stage of the AI‑application lifecycle.
Creating a simple AI assistant
In Eino, a conversation is represented as a sequence of
schema.Message values, each of which includes the following fields:
Role : the role that produced the message, one of:
system : a system prompt that defines the assistant's behavior and persona.
user : user input.
assistant : the model's response.
tool : the result of a tool call.
Content : the actual message content.
(Figure: running diagram)
Creating a dialog template and generating messages
Eino provides a powerful templating feature to build messages for the LLM.
func msgTemplate() prompt.ChatTemplate {
	// Create a template using the FString format
	return prompt.FromMessages(schema.FString,
		// System message template
		schema.SystemMessage("You are a {role}. Answer questions in a {style} tone. Your goal is to help ops engineers by providing technical support and business advice."),
		// Placeholder for chat history (empty for new dialogs)
		schema.MessagesPlaceholder("chat_history", true),
		// User message template
		schema.UserMessage("Question: {question}"),
	)
}
func CreateMsgTemplate(role string, style string, question string, chatHistory []*schema.Message) []*schema.Message {
	template := msgTemplate()
	// Generate messages from the template
	messages, err := template.Format(context.Background(), map[string]any{
		"role":         role,
		"style":        style,
		"question":     question,
		"chat_history": chatHistory,
	})
	if err != nil {
		log.Fatalf("format template failed: %v", err)
	}
	return messages
}
Connecting the LLM
Creating a chat model requires three parameters: the API key, the model name, and the service base URL.
// CreateOpenAIChatModel creates an OpenAI chat model instance.
func CreateOpenAIChatModel(ctx context.Context) model.ChatModel {
	var key, modelName, baseUrl string
	if conf.GlobalConfig != nil {
		key = conf.GlobalConfig.LLMInfo.Key
		baseUrl = conf.GlobalConfig.LLMInfo.BaseUrl
		modelName = modelCategory.Qwen3_32B.String()
	} else {
		// Default fallback values
		key = "xxxxxxxxxxx"
		modelName = "DeepSeek-V3"
		baseUrl = "http://xxx.xxx.xxx/v1/"
	}
	chatModel, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
		BaseURL: baseUrl,
		Model:   modelName,
		APIKey:  key,
	})
	if err != nil {
		log.Fatalf("create openai chat model failed, err=%v", err)
	}
	return chatModel
}
Full call flow
The following code assembles messages, creates the LLM instance, and runs the interaction, preserving chat history for context.
// Simulated chat history (optional)
chatHistory := []*schema.Message{
	schema.UserMessage("Hello"),
	schema.AssistantMessage("Hey! I'm senior programmer X. How can I help you?", nil),
	schema.UserMessage("How do I get good at a programming language?"),
	schema.AssistantMessage("Keys to learning a programming language: 1. pick the right language; 2. build solid fundamentals; 3. write lots of code; 4. make good use of resources; 5. iterate over the long term. The core point: practice matters more than theory.", nil),
}
role := "senior programmer"
style := "positive, warm, and professional, with concise and clear replies"
messages := reporter.CreateMsgTemplate(role, style, input, chatHistory)
log.Printf("===create llm===")
cm := chatmodel.CreateOpenAIChatModel(ctx)
log.Printf("create llm success")
log.Printf("===llm generate===")
result := reporter.Generate(ctx, cm, messages)
fmt.Printf("🤖AI Assistant: %s\n", result.Content)
Tool: Giving the model hands
Tools act as executors for an agent, exposing concrete functionality with defined parameters. They enable the model to perform actions such as data manipulation, external service calls, or database operations.
Create and configure the ChatModel.
Initialize tools.
Create and configure a lambda for custom processing.
Build the complete processing chain.
Compile and run the chain.
// 1. Create and configure the ChatModel
log.Printf("===create llm===")
chatModel := chatmodel.CreateOpenAIChatModel(ctx)
// 2. Initialize tools
tools := getToolInfos(ctx)
toolInfos := make([]*schema.ToolInfo, 0, len(tools))
for _, todoTool := range tools {
	info, err := todoTool.Info(ctx)
	if err != nil {
		logs.Infof("get ToolInfo failed, err=%v", err)
		continue // skip tools whose info cannot be resolved
	}
	toolInfos = append(toolInfos, info)
}
// Bind tools to the model
if err := chatModel.BindTools(toolInfos); err != nil {
	logs.Errorf("BindTools failed, err=%v", err)
	return
}
// 3. Create the tools node
todoToolsNode, err := compose.NewToolNode(ctx, &compose.ToolsNodeConfig{Tools: tools})
if err != nil {
	logs.Errorf("NewToolNode failed, err=%v", err)
	return
}
// 4. Create a lambda node for custom post-processing
lambda := compose.InvokableLambda(func(ctx context.Context, input []*schema.Message) (output []*schema.Message, err error) {
	for _, msg := range input {
		if msg.Role == schema.Tool {
			logs.Infof("tool message: processed via lambda")
		}
	}
	return input, nil
})
// 5. Build and compile the complete processing chain
chain := compose.NewChain[[]*schema.Message, []*schema.Message]()
chain.AppendChatModel(chatModel, compose.WithNodeName("chat_model"))
chain.AppendToolsNode(todoToolsNode, compose.WithNodeName("tools"))
chain.AppendLambda(lambda, compose.WithNodeName("lambda"))
agent, err := chain.Compile(ctx)
if err != nil {
	logs.Errorf("chain.Compile failed, err=%v", err)
	return
}
Example: Database operation wrapped as a tool
// AddScheduleFunc implements a tool that inserts a schedule into the database.
func AddScheduleFunc(_ context.Context, params *structs.AddScheduleParams) (string, error) {
	logs.Infof("invoke tool add_schedule: %+v", params)
	repo := internal.NewScheduleRepository(conf.DB)
	err := repo.Create(context.Background(), &internal.Schedule{
		Title:     params.Title,
		StartTime: structs.GetTimeStampByTimeStr(params.StartTime),
		EndTime:   structs.GetTimeStampByTimeStr(params.EndTime),
		Memo:      params.Memo,
		Location:  params.Location,
		CreatedAt: time.Now().Unix(),
		UpdatedAt: time.Now().Unix(),
	})
	if err != nil {
		// Return the error instead of panicking, so the model can
		// surface a failed tool call gracefully.
		return "", err
	}
	return `{"msg": "add schedule success"}`, nil
}
Future outlook
Eino’s architecture is divided into several layers, each exposing distinct responsibilities.
Flow layer
ReAct Agent : combines reasoning and acting, allowing the model to plan and execute tool calls.
Multi Agent : a system of cooperating agents for complex tasks.
Router Retriever : uses embeddings to retrieve relevant indexed content.
Compose layer
Engine components such as pregel (directed cyclic graph) and dag (directed acyclic graph).
APIs for Graph , Chain , and Workflow construction.
Building blocks: node, edge, branch, state, option, stream, callbacks, lambda.
Schema layer
Message, Document, StreamReader, StreamWriter, ToolInfo definitions.
Callbacks layer
Handler, Inject, Trigger for event‑driven processing.
Eino‑Ext (DevOps tooling)
Visualized debugging, graph canvas, code generator, graph visualizer, prompt optimizer, evaluators.
Overall, Eino provides a flexible, extensible core framework, while Eino‑Ext enriches it with practical DevOps tooling and integrations, covering development, operations, and advanced functionality.