How to Build a Multi‑Agent LLM Flow in Go with Eino – Deer‑Go Deep Dive
This article explains how to re‑implement ByteDance's DeerFlow deep‑research framework in Go (Deer‑Go), covering the multi‑agent architecture, control‑hand‑off, interrupt & checkpoint mechanisms, integration with the Hertz SSE server, and step‑by‑step deployment instructions.
Introduction
DeerFlow is an open‑source deep‑research framework from ByteDance that combines large language models (LLMs) with professional tools such as web search, crawlers, and Python code execution. The author recreated the framework in Go, naming the implementation Deer‑Go, and published the source at eino-deer-flow-go.
Multi‑Agent Architecture in Eino
In DeerFlow there are two types of agent communication: control‑hand‑off (e.g., Coordinator → Planner) and data‑state sharing (e.g., Planner → ResearchTeam). Because Eino requires each node’s input to come from the previous node with matching types, Deer‑Go implements control‑hand‑off via node inputs/outputs and shares data through a global State object.
Control transfer: Coordinator passes user information to Planner, which creates a research plan.
Data sharing: Planner’s plan is delivered to the Research Team, which passes results to the Reporter for summarisation.
Each sub‑agent is a sub‑graph node containing a complete flow with three functional blocks:
load node – loads prompts, tools, etc.
llm node – sends the loaded input to the large model.
router node – processes the LLM output, decides the next agent name, and writes it to the global state.
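Because only the previous node's output flows between nodes, everything the agents need to share lives in this global State. A minimal sketch of what it might look like, based on the fields used later in this article (the repo's real struct carries more, such as the plan and message history):
// model/state.go (sketch): only the fields referenced in this article are shown
type State struct {
    Goto                          string            // key of the next agent to hand control to
    EnableBackgroundInvestigation bool              // run the background investigator before planning
    Messages                      []*schema.Message // hypothetical: shared conversation context
}
Nodes read and mutate this state through compose.ProcessState, as the router code below shows.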
Router and Branch Nodes
Branch nodes provided by Eino transfer control between sub‑graphs. Example code adds Coordinator and Planner nodes and attaches branch nodes:
// In the main graph add Coordinator and Planner nodes
_ = g.AddGraphNode(consts.Coordinator, coordinatorGraph, compose.WithNodeName(consts.Coordinator))
_ = g.AddGraphNode(consts.Planner, plannerGraph, compose.WithNodeName(consts.Planner))
// Add branch nodes after them
_ = g.AddBranch(consts.Coordinator, compose.NewGraphBranch(agentHandOff, outMap))
_ = g.AddBranch(consts.Planner, compose.NewGraphBranch(agentHandOff, outMap))
The router node decides which agent to hand off to and writes the target name into state.Goto. The agentHandOff function reads this variable and transfers execution to the corresponding agent.
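The outMap passed to NewGraphBranch is, by assumption here, simply the set of node keys the branch is allowed to jump to, i.e. every value the hand‑off condition may return. A minimal sketch using the agent‑name constants that appear in this article (the real graph lists every reachable agent):
// Sketch: every node this branch may hand control to must be declared here.
outMap := map[string]bool{
    consts.Planner:                true,
    consts.BackgroundInvestigator: true,
    compose.END:                   true,
    // ...plus the other agents (ResearchTeam, Reporter, ...) in the real graph
}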
func router(ctx context.Context, input *schema.Message, opts ...any) (output string, err error) {
    err = compose.ProcessState[*model.State](ctx, func(_ context.Context, state *model.State) error {
        defer func() { output = state.Goto }()
        state.Goto = compose.END // default to end node
        if len(input.ToolCalls) > 0 {
            // ...
            if state.EnableBackgroundInvestigation {
                state.Goto = consts.BackgroundInvestigator // hand off to background agent
            } else {
                state.Goto = consts.Planner // hand off to planner agent
            }
        }
        return nil
    })
    return output, nil
}

func agentHandOff(ctx context.Context, input string) (next string, err error) {
    defer func() { ilog.EventInfo(ctx, "agent_hand_off", "input", input, "next", next) }()
    _ = compose.ProcessState[*model.State](ctx, func(_ context.Context, state *model.State) error {
        next = state.Goto
        return nil
    })
    return next, nil
}
Interrupt & Checkpoint
DeerFlow’s planner can pause for human feedback. When an Interrupt occurs, the entire graph state is persisted so that execution can resume after the user interaction. The CheckPoint mechanism stores the serialized state in a key‑value store.
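A store only has to save and load the serialized graph state by checkpoint ID. Judging from the methods implemented below, the compose.CheckPointStore contract looks roughly like this (a sketch; see the Eino source for the authoritative definition):
// Sketch of the store contract implied by DeerCheckPoint's Get/Set methods.
type CheckPointStore interface {
    Get(ctx context.Context, checkPointID string) ([]byte, bool, error)
    Set(ctx context.Context, checkPointID string, checkPoint []byte) error
}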
Deer‑Go implements a simple in‑memory CheckPointStore called DeerCheckPoint:
type DeerCheckPoint struct {
    buf map[string][]byte
}

// Get returns the serialized checkpoint stored under checkPointID, if any.
func (dc *DeerCheckPoint) Get(ctx context.Context, checkPointID string) ([]byte, bool, error) {
    data, ok := dc.buf[checkPointID]
    return data, ok, nil
}

// Set stores the serialized checkpoint under checkPointID.
func (dc *DeerCheckPoint) Set(ctx context.Context, checkPointID string, checkPoint []byte) error {
    dc.buf[checkPointID] = checkPoint
    return nil
}

var deerCheckPoint = DeerCheckPoint{buf: make(map[string][]byte)}

func NewDeerCheckPoint(ctx context.Context) compose.CheckPointStore { return &deerCheckPoint }
During graph compilation the checkpoint store is attached with compose.WithCheckPointStore, and at runtime the checkpoint ID is supplied via compose.WithCheckPointID, so that multiple requests with the same thread ID share the persisted state.
r, err := g.Compile(ctx,
    compose.WithGraphName("EinoDeer"),
    compose.WithCheckPointStore(model.NewDeerCheckPoint(ctx)),
)
_, err = r.Invoke(ctx, consts.Coordinator, compose.WithCheckPointID(req.ThreadID))
Integration with Hertz & SSE
Deer‑Go uses the Hertz framework for HTTP handling and the SSE package (github.com/cloudwego/hertz/pkg/protocol/sse) to stream node outputs to the front‑end. A LoggerCallback implements the five callback timings (OnStart, OnEnd, OnError, OnStartWithStreamInput, OnEndWithStreamOutput). The most important method, OnEndWithStreamOutput, reads frames from the stream and forwards them to the SSE writer.
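The handler has to be registered when the compiled graph is run. A minimal sketch, assuming the callback is passed per request via compose.WithCallbacks (how the SSE writer reaches the callback is an implementation detail; the struct literal below is illustrative):
// Sketch: attach the logging/streaming callback for this request.
cb := &LoggerCallback{ /* illustrative: carry the request's SSE writer here */ }
_, err = r.Invoke(ctx, consts.Coordinator,
    compose.WithCheckPointID(req.ThreadID),
    compose.WithCallbacks(cb),
)
With the handler registered, OnEndWithStreamOutput forwards each frame as it arrives: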
func (cb *LoggerCallback) OnEndWithStreamOutput(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[callbacks.CallbackOutput]) context.Context {
    msgID := util.RandStr(20)
    go func() {
        defer output.Close()
        for {
            frame, err := output.Recv()
            if errors.Is(err, io.EOF) {
                break
            }
            if err != nil {
                ilog.EventError(ctx, err, "[OnEndStream] recv_error")
                return
            }
            switch v := frame.(type) {
            case *schema.Message:
                _ = cb.pushMsg(ctx, msgID, v)
            case *ec_model.CallbackOutput:
                _ = cb.pushMsg(ctx, msgID, v.Message)
            case []*schema.Message:
                for _, m := range v {
                    _ = cb.pushMsg(ctx, msgID, m)
                }
            case string:
                // control‑hand‑off signal
            default:
                ilog.EventInfo(ctx, "frame_type", "type", "unknown", "v", v)
            }
        }
    }()
    return ctx
}
Running Deer‑Go
Clone the repository, install dependencies, and build with Go 1.23:
git clone https://github.com/cloudwego/eino-examples.git
cd eino-examples/flow/agent/deer-go
go mod tidy
Copy the example configuration file, fill in the required keys, and start the service:
cp ./conf/deer-go.yaml.1 ./conf/deer-go.yaml
./run.sh # compile and run locally
./run.sh -s # run as a server that can be used with the DeerFlow front‑end