Workflow vs Agent: A Beginner’s Guide to AI Agents

This tutorial explains the fundamental differences between AI workflows and autonomous agents, compares their strengths, outlines when to use each approach, and provides concrete LangChain/LangGraph code examples, framework references, and best‑practice recommendations for building reliable LLM‑powered systems.


Definitions

Anthropic classifies both as "Agentic Systems" but distinguishes:

Workflow: a system that coordinates large language models (LLMs) and tools through a predefined code path.

Agent: a system in which the LLM dynamically directs its own process and tool usage, autonomously deciding how to accomplish a task.

Choosing Between Workflow and Agent

For most LLM-based applications, start with the simplest solution that works: a single prompt, retrieval-augmented generation, or a few in-context examples. Introduce agents only when the performance gain justifies the added latency and cost.

Workflow: provides predictability and consistency for well-structured tasks.

Agent: offers flexibility and model-driven decision making for dynamic scenarios.

Frameworks

LangGraph (LangChain) – https://langchain-ai.github.io/langgraph/

Amazon Bedrock AI Agent framework – https://aws.amazon.com/cn/bedrock/agents/

Rivet – https://rivet.ironcladapp.com/

Vellum – https://www.vellum.ai/

These frameworks lower the entry barrier but add an abstraction layer that can hide prompts and responses, making debugging harder.

Core Module Construction

Install the langchain-openai package and point ChatOpenAI at an OpenAI-compatible endpoint serving the Qwen2.5 model; a local server typically ignores the API key, so a placeholder suffices:

from langchain_openai import ChatOpenAI

if __name__ == "__main__":
    # A local OpenAI-compatible endpoint (e.g. an Ollama-style server), assumed
    # to be plain HTTP; the key is a placeholder since local servers ignore it.
    llm = ChatOpenAI(
        model_name="qwen2.5:7b",
        openai_api_key="test",
        openai_api_base="http://localhost:11444/v1",
        temperature=0,
    )
    answer = llm.invoke("who are you?")
    print(answer.content)

Running the script prints the model's self-introduction, confirming the local endpoint is reachable.

Prompt‑Chain Workflow

Prompt chaining decomposes a complex task into a sequence of simpler LLM calls, optionally inserting programmatic checks ("gates") between steps; it trades higher latency for higher accuracy.

Typical use cases: generate marketing copy then translate it; draft an outline before writing the full document.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="qwen2.5:7b", openai_api_key="test", openai_api_base="http://localhost:11444/v1", temperature=0)

class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

# Nodes

def generate_joke(state: State):
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

def check_punchline(state: State):
    # Gate: treat a joke containing "?" or "!" as having a punchline
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

def improve_joke(state: State):
    msg = llm.invoke(f"Make this joke funnier: {state['joke']}")
    return {"improved_joke": msg.content}

def polish_joke(state: State):
    msg = llm.invoke(f"Add a surprising twist: {state['improved_joke']}")
    return {"final_joke": msg.content}

workflow = StateGraph(State)
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges("generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END})
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)
chain = workflow.compile()
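
A quick way to exercise the chain (a minimal sketch, assuming the local endpoint above is running):

state = chain.invoke({"topic": "cats"})
# final_joke exists only if the gate routed through the improvement steps
print(state.get("final_joke") or state["joke"])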

Parallel Workflow

Parallelization runs independent sub-tasks simultaneously (sectioning) or executes the same task multiple times (voting), then aggregates the results.

from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str

llm = ChatOpenAI(model_name="qwen2.5:7b", openai_api_key="test", openai_api_base="http://localhost:11444/v1", temperature=0)

def call_llm_1(state):
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def call_llm_2(state):
    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}

def call_llm_3(state):
    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}

def aggregator(state):
    # Combine the three parallel outputs into a single report
    combined = f"JOKE:\n{state['joke']}\n\nSTORY:\n{state['story']}\n\nPOEM:\n{state['poem']}"
    return {"combined_output": combined}

parallel_builder = StateGraph(State)
parallel_builder.add_node("call_llm_1", call_llm_1)
parallel_builder.add_node("call_llm_2", call_llm_2)
parallel_builder.add_node("call_llm_3", call_llm_3)
parallel_builder.add_node("aggregator", aggregator)
parallel_builder.add_edge(START, "call_llm_1")
parallel_builder.add_edge(START, "call_llm_2")
parallel_builder.add_edge(START, "call_llm_3")
parallel_builder.add_edge("call_llm_1", "aggregator")
parallel_builder.add_edge("call_llm_2", "aggregator")
parallel_builder.add_edge("call_llm_3", "aggregator")
parallel_builder.add_edge("aggregator", END)
parallel_workflow = parallel_builder.compile()
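
Because all three branches fan out from START, they run concurrently and the aggregator fires only after all of them complete. A minimal invocation, assuming the same endpoint:

result = parallel_workflow.invoke({"topic": "cats"})
print(result["combined_output"])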

Coordinator‑Worker Pattern

In a coordinator‑worker workflow, the central coordinator (usually an LLM) dynamically breaks down a task, assigns sub‑tasks to multiple worker models, and integrates their outputs.

import operator
from typing import Annotated, TypedDict

from pydantic import BaseModel, Field
from langchain_core.messages import SystemMessage, HumanMessage
from langgraph.constants import Send
from langgraph.graph import StateGraph, START, END

# Structured plan the orchestrator asks the model to produce
class Section(BaseModel):
    name: str = Field(description="Name of the report section")
    description: str = Field(description="What the section should cover")

class Sections(BaseModel):
    sections: list[Section]

planner = llm.with_structured_output(Sections)

class State(TypedDict):
    topic: str
    sections: list[Section]
    completed_sections: Annotated[list, operator.add]  # workers append in parallel, so a reducer is required
    final_report: str

class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]
# Orchestrator generates a plan

def orchestrator(state):
    plan = planner.invoke([SystemMessage(content="Generate a plan for the report."), HumanMessage(content=f"Topic: {state['topic']}")])
    return {"sections": plan.sections}

# Worker writes a section

def llm_call(state):
    msg = llm.invoke([SystemMessage(content="Write a report section."), HumanMessage(content=f"Name: {state['section'].name}, Desc: {state['section'].description}")])
    return {"completed_sections": [msg.content]}

# Synthesizer combines sections

def synthesizer(state):
    # Join the completed sections with horizontal rules
    combined = "\n\n---\n\n".join(state["completed_sections"])
    return {"final_report": combined}

builder = StateGraph(State)
builder.add_node("orchestrator", orchestrator)
builder.add_node("llm_call", llm_call)
builder.add_node("synthesizer", synthesizer)
builder.add_edge(START, "orchestrator")
builder.add_conditional_edges("orchestrator", lambda s: [Send("llm_call", {"section": sec}) for sec in s["sections"]], ["llm_call"])
builder.add_edge("llm_call", "synthesizer")
builder.add_edge("synthesizer", END)
workflow = builder.compile()
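
A sketch of a run, assuming the planner's structured output parses cleanly:

state = workflow.invoke({"topic": "LLM scaling laws"})
print(state["final_report"])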

Evaluator‑Optimizer Pattern

This pattern pairs a generator with an evaluator: the generator produces a candidate, the evaluator grades it and returns feedback, and the loop repeats until the output is accepted.

from typing import Literal, TypedDict

from pydantic import BaseModel, Field
from langgraph.graph import StateGraph, START, END

class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(description="Whether the joke is funny")
    feedback: str = Field(description="How to improve the joke")

class State(TypedDict):
    topic: str
    joke: str
    funny_or_not: str
    feedback: str

# The evaluator returns structured feedback rather than free text
evaluator = llm.with_structured_output(Feedback)

# Generator

def llm_call_generator(state):
    msg = llm.invoke(f"Write a joke about {state['topic']}" + (f" but consider: {state['feedback']}" if state.get('feedback') else ""))
    return {"joke": msg.content}

# Evaluator

def llm_call_evaluator(state):
    grade = evaluator.invoke(f"Grade the joke: {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}

# Routing based on evaluation

def route_joke(state):
    return "Accepted" if state["funny_or_not"] == "funny" else "Rejected + Feedback"

builder = StateGraph(State)
builder.add_node("generator", llm_call_generator)
builder.add_node("evaluator", llm_call_evaluator)
builder.add_edge(START, "generator")
builder.add_edge("generator", "evaluator")
builder.add_conditional_edges("evaluator", route_joke, {"Accepted": END, "Rejected + Feedback": "generator"})
workflow = builder.compile()
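
A minimal run, assuming the llm client defined earlier; the loop repeats until the evaluator grades the joke as funny:

state = workflow.invoke({"topic": "cats"})
print(state["joke"])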

Agent Basics

An AI agent is typically an LLM that runs in a loop, using tool calls to interact with its environment.

from langchain_core.messages import HumanMessage, SystemMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState, START, END

@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Add a and b."""
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide a by b."""
    return a / b

tools = [add, multiply, divide]
tools_by_name = {t.name: t for t in tools}  # lookup table used by the tool node below
llm_with_tools = llm.bind_tools(tools)

Agent loop implementation:

def llm_call(state):
    # Call the tool-bound model with a system prompt prepended to the history
    return {"messages": [llm_with_tools.invoke([SystemMessage(content="You are a helpful assistant performing arithmetic."), *state["messages"]])]}

def tool_node(state):
    # Execute every tool call requested by the last AI message
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=str(observation), tool_call_id=tool_call["id"]))
    return {"messages": result}

def should_continue(state):
    last = state["messages"][-1]
    return "Action" if last.tool_calls else END

agent_builder = StateGraph(MessagesState)
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("environment", tool_node)
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges("llm_call", should_continue, {"Action": "environment", END: END})
agent_builder.add_edge("environment", "llm_call")
agent = agent_builder.compile()

messages = [HumanMessage(content="Add 3 and 4.")]
result = agent.invoke({"messages": messages})
for m in result["messages"]:
    m.pretty_print()

Integrating Agents and Workflows

Agents provide autonomous decision-making; agentic workflows coordinate multiple agents into an efficient pipeline. Example: in a smart factory, agents monitor equipment, predict failures, and schedule production, while the workflow layer orchestrates procurement, quality inspection, and logistics (see the sketch after the layer list below).

AI Agent layer: real-time monitoring, fault prediction, production scheduling.

Agentic Workflow layer: end-to-end coordination of material sourcing, manufacturing, quality control, and distribution; dynamic re-planning during demand spikes.
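
One concrete way to wire the two layers together in LangGraph (a minimal sketch, not from the original article) is to embed the compiled agent from the previous section as a node inside an outer workflow graph; the monitor node here is a hypothetical stand-in for an equipment-monitoring step:

from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, MessagesState, START, END

outer = StateGraph(MessagesState)
# Hypothetical fixed step that feeds a task to the agent
outer.add_node("monitor", lambda state: {"messages": [HumanMessage(content="Check equipment readings and flag anomalies.")]})
outer.add_node("agent", agent)  # a compiled graph can itself be used as a node (subgraph)
outer.add_edge(START, "monitor")
outer.add_edge("monitor", "agent")
outer.add_edge("agent", END)
factory_workflow = outer.compile()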

Conclusion

Success in LLM‑based systems comes from building the right solution for the problem, not from maximal architectural complexity. Start with simple prompts and RAG; only adopt multi‑step agents when simpler approaches fall short.

Design simplicity: keep agent logic clear and minimal.

Process transparency: expose planning steps so decisions are observable.

Interface rigor: provide thorough tool documentation and tests to ensure reliability.

While ready‑made frameworks accelerate prototyping, production systems often benefit from reducing abstraction and using foundational components directly.

Tags: AI agents, LangChain, parallel processing, LLM workflows, Prompt Chaining, agentic design
Written by AI Algorithm Path, a public account focused on deep learning, computer vision, and autonomous driving perception algorithms, covering neural networks, pattern recognition, related hardware and software configurations, and open-source projects.
