How Do Agents Reflect? From Self‑Feedback to External Tool Validation

The article explains how LLM‑based agents implement reflection: generate output, evaluate it through self‑feedback or an external tool, then correct the result. It details two self‑feedback methods and the typical scenarios that require external feedback.


Agent Reflection Overview

Agent reflection follows a generate → evaluate → correct loop and is divided into self‑feedback and external feedback.

Self‑Feedback

Self‑feedback lets the model inspect its own output, which is suitable for checking textual consistency, format constraints, style, and similar quality criteria.

Method 1: Single‑Step Reflection

Prompt the model to “generate, then reflect, then modify” within a single call. Benefits: fewer model invocations, coherent context, lower latency. Drawback: generation and reflection share the same call, so the model may repeat the same bias and fail to catch its own mistakes.
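A minimal sketch of this pattern, assuming a hypothetical `call_llm(prompt) -> str` client function (the template wording is illustrative, not from the article):

```python
# Single-step reflection: one prompt asks the model to generate,
# critique, and revise within a single call.
# `call_llm` is a hypothetical LLM client, injected by the caller.

SINGLE_STEP_TEMPLATE = """Task: {task}

1. Write a first draft.
2. Reflect: list any factual, formatting, or style problems in your draft.
3. Rewrite the draft, fixing every problem you listed.

Return only the final revised version."""

def single_step_reflect(task: str, call_llm) -> str:
    """One model invocation covers generate -> reflect -> correct."""
    return call_llm(SINGLE_STEP_TEMPLATE.format(task=task))
```

Because everything happens in one invocation, latency and cost stay low, but the critique shares the biases of the draft it is critiquing.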

Method 2: Two‑Step Self‑Reflection

Split the process into two calls. Example with a writing‑assistant agent:

1. The first call generates an initial draft.

2. The second call receives the original prompt and the draft, performs a focused review, and returns only the identified problems, the reason each one fails, and suggested edits.

3. The writing agent applies the suggestions to produce the final output.

Adding one extra model call isolates the reflection stage, making error detection easier. A key engineering choice is to let the reflection agent only discover issues rather than rewrite the content, avoiding the introduction of new errors.
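The steps above can be sketched as follows, again assuming a hypothetical `call_llm` client; note that the review prompt explicitly forbids rewriting, and the writing agent itself applies the fixes in a final revision call:

```python
# Two-step self-reflection for a writing-assistant agent.
# `call_llm` is a hypothetical LLM client; prompt wording is illustrative.

REVIEW_TEMPLATE = """Original request:
{task}

Draft:
{draft}

Review the draft against the request. Do NOT rewrite it.
Return only a list of problems, why each one fails, and a suggested edit."""

def two_step_reflect(task: str, call_llm) -> str:
    draft = call_llm(task)  # call 1: generate the initial draft
    review = call_llm(      # call 2: reflection agent finds issues only
        REVIEW_TEMPLATE.format(task=task, draft=draft))
    # Revision: the writing agent applies the reviewer's suggestions itself,
    # so the reflection stage never introduces new errors into the text.
    return call_llm(
        f"{task}\n\nDraft:\n{draft}\n\nReviewer notes:\n{review}\n\n"
        "Produce the corrected final version.")
```

Keeping the reviewer restricted to diagnosis is the engineering choice the article recommends: the agent that owns the content is the only one allowed to change it.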

External Feedback

When objective correctness is required—e.g., executable code, precise calculations, schema‑compliant JSON, or chart generation—self‑check alone is insufficient. Core workflow: generate a result, pass the result to a real tool for validation, and feed the tool’s output back to the agent.

Example for a code‑generation agent: extract the code block, run it in a sandboxed executor, and if an error occurs, send the error message back to the code agent for analysis and revision.
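A minimal sketch of that loop, using a subprocess as the executor. Note this is not a real sandbox (production use needs containers or a dedicated isolated runtime), and `call_llm` is again a hypothetical client:

```python
import subprocess
import sys

# External-feedback loop for a code-generation agent.
# WARNING: a subprocess is NOT real sandboxing; this only illustrates
# the generate -> execute -> feed-error-back cycle.

def run_candidate(code: str, timeout: int = 5):
    """Execute generated code in a separate interpreter; return (ok, message)."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=timeout)
    if proc.returncode == 0:
        return True, proc.stdout
    return False, proc.stderr  # the real traceback becomes the feedback signal

def generate_with_validation(task: str, call_llm, max_rounds: int = 3) -> str:
    code = call_llm(task)
    for _ in range(max_rounds):
        ok, message = run_candidate(code)
        if ok:
            return code
        # Send the actual error message back for analysis and revision.
        code = call_llm(f"{task}\n\nYour code failed with:\n{message}\nFix it.")
    return code
```

The key difference from self-feedback is that the error message comes from a real interpreter, so the agent is correcting against ground truth rather than its own opinion.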

- Structured output validation: use a JSON schema validator to check fields and types.

- Complex calculation verification: send formulas to a calculator or code interpreter and return the precise result.

- Chart quality checking: execute Matplotlib code to produce an image, then let the model adjust labels, legends, and fonts based on the rendered chart.
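For the structured-output case, a hand-rolled field-and-type check already demonstrates the idea. The sketch below uses only the standard library; a production agent would typically use a full JSON Schema validator (for example the `jsonschema` package), and the expected-fields table here is purely illustrative:

```python
import json

# Structured-output validation: turn schema violations into a list of
# problems that can be fed back to the agent. EXPECTED is an illustrative
# stand-in for a real JSON Schema.

EXPECTED = {"title": str, "year": int, "tags": list}

def validate_output(raw: str):
    """Return a list of problems for the agent to fix (empty list = valid)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field, typ in EXPECTED.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], typ):
            problems.append(f"field {field!r} should be {typ.__name__}")
    return problems
```

As with code execution, the validator's output is objective: the agent retries until the problem list is empty instead of judging its own JSON by eye.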

Self‑feedback flow diagram

External feedback flow diagram
Tags: LLM, prompt engineering, Reflection, Agent, external-feedback, self-feedback
Written by AgentGuide

Share Agent interview questions and standard answers, offering a one‑stop solution for Agent interviews, backed by senior AI Agent developers from leading tech firms.