How the New PEV Agent Pattern Boosts Reliable LLM Automation in Go
This article introduces the Plan‑Execute‑Verify (PEV) agent pattern added to langgraphgo: its three‑stage workflow, core features, configuration, concrete Go examples, and implementation details. It also compares PEV with ReAct and Reflection, and discusses best practices, limitations, and trade‑offs for high‑risk automation.
PEV Overview
PEV (Plan‑Execute‑Verify) is a three‑stage agent architecture that decomposes a user request into concrete steps (Plan), runs each step with an available tool (Execute), and validates the result (Verify). If verification fails, the agent automatically re‑plans using the failure context, creating a quality‑assurance checkpoint before synthesis.
Architecture Diagram
┌─────────────────────────────────────────────────────────────┐
│                        PEV Workflow                         │
└─────────────────────────────────────────────────────────────┘

  User Request
       ↓
  ┌─────────┐
  │ Planner │◀──────┐  (Re‑plan on failure)
  └─────────┘       │
       ↓            │
  ┌──────────┐      │
  │ Executor │      │
  └──────────┘      │
       ↓            │
  ┌──────────┐      │
  │ Verifier │      │
  └──────────┘      │
       ↓            │
  Verification      │
  successful? ──No──┘
       │
      Yes
       ↓
  ┌─────────────┐
  │ Synthesizer │
  └─────────────┘
       ↓
  Final Answer
Core Features
Self‑correction: automatically retries failed operations with an improved plan.
Stepwise verification: validates each execution before proceeding.
Error recovery: learns from failures to generate better plans.
Configurable retries: the maximum retry count is user‑set.
Tool‑agnostic: works with any implementation of the tools.Tool interface.
State Model
{
    "messages":            []llms.MessageContent, // conversation history
    "plan":                []string,              // current execution plan
    "current_step":        int,                   // index of the step being processed
    "last_tool_result":    string,                // result of the most recent tool call
    "intermediate_steps":  []string,              // history of all steps
    "retries":             int,                   // current retry count
    "verification_result": VerificationResult,    // result of the last verification
    "final_answer":        string                 // synthesized final response
}
Configuration Struct
type PEVAgentConfig struct {
    Model              llms.Model   // LLM used for planning and verification
    Tools              []tools.Tool // Available tools for execution
    MaxRetries         int          // Maximum retry attempts (default: 3)
    SystemMessage      string       // Optional custom planner prompt
    VerificationPrompt string       // Optional custom verifier prompt
    Verbose            bool         // Enable detailed logging
}
Example 1 – Simple Calculation
config := prebuilt.PEVAgentConfig{
    Model:      model,
    Tools:      []tools.Tool{CalculatorTool{}},
    MaxRetries: 3,
    Verbose:    true,
}
agent, err := prebuilt.CreatePEVAgent(config)
if err != nil {
    log.Fatal(err)
}
Query: "Calculate 15 multiplied by 8"
Plan: "Multiply 15 by 8"
Execute: call CalculatorTool → "120.00"
Verify: ✅ success
Synthesize: "The result is 120"
Example 2 – Unreliable Weather API
config := prebuilt.PEVAgentConfig{
    Model:      model,
    Tools:      []tools.Tool{WeatherTool{FailureRate: 0.4}}, // 40% failure probability
    MaxRetries: 3,
    Verbose:    true,
}
Query: "What is the weather in Tokyo?"
Plan: "Get Tokyo weather"
Execute: call Weather API → "Error: connection timeout"
Verify: ❌ failure detected
Re‑plan: "Retry Tokyo weather query with correct city name"
Execute: call Weather API → "Tokyo weather: 22°C, clear"
Verify: ✅ success
Synthesize: "Tokyo weather is 22°C, clear"
Example 3 – Multi‑Step Task
config := prebuilt.PEVAgentConfig{
    Model:      model,
    Tools:      []tools.Tool{CalculatorTool{}, WeatherTool{FailureRate: 0.2}, DatabaseTool{FailureRate: 0.3}},
    MaxRetries: 3,
    Verbose:    true,
}
Query: "First, calculate 25 multiplied by 4. Then, get the weather in Paris."
PEV will generate a plan with two steps, execute each tool, verify each result, and synthesize a combined answer.
Running the Example
Set your OpenAI API key:
export OPENAI_API_KEY=your-api-key-here
Then run the example:
cd examples/pev_agent && go run main.go
Implementation Details
Planner Node
Analyzes the user request and breaks it into ordered steps.
If verification fails, receives feedback and creates an improved plan.
Returns a list of step identifiers.
Executor Node
Runs the current step using the appropriate tool.
Gracefully handles tool errors and returns the raw result for verification.
Verifier Node
Uses an LLM to analyze the execution result.
Returns a structured VerificationResult:
type VerificationResult struct {
    IsSuccessful bool   `json:"is_successful"`
    Reasoning    string `json:"reasoning"`
}
Detects success/failure signals in the tool output.
Synthesizer Node
Aggregates all successful intermediate steps.
Generates a coherent final answer.
Invoked only after all steps succeed or the maximum retry count is reached.
Comparison with Other Patterns
ReAct: no self‑correction, no verification; suited for fast, simple tasks.
Reflection: self‑correction present, but verification occurs only after generation; aimed at improving content quality.
PEV: provides both self‑correction and per‑step verification; designed for reliable tool use.
Best Practices
Design tools to return clear error messages.
Set MaxRetries based on tool reliability.
Enable Verbose during development to understand failure causes.
Customize verification prompts for domain‑specific checks.
Keep step granularity atomic and independently verifiable.
Limitations
Additional verification steps increase latency.
More LLM calls raise operational cost.
Added complexity may be overkill for simple, reliable operations.
Trade‑offs
Advantages : high reliability and accuracy, automatic error recovery, clear audit trail.
Disadvantages : higher cost due to extra LLM calls, slower execution, requires well‑designed tools with explicit success/failure signals.
License
This implementation is part of the langgraphgo project. Source code: https://github.com/smallnest/langgraphgo/blob/master/examples/pev_agent/main.go
BirdNest Tech Talk
Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.
