From Prompt to Multi‑Agent: How LLMs Evolve into Autonomous Agents

Since ChatGPT's debut, the LLM landscape has progressed through four stages: prompt engineering, chain orchestration, autonomous agents, and multi‑agent systems. Each stage raises the level of intelligence and automation. This article traces that evolution, weighing the advantages and drawbacks of each stage, with practical implementation examples in Go.

Tencent Cloud Developer

LLM Application Evolution Overview

Since ChatGPT appeared, the industry has explored how to bring large language models (LLMs) into real‑world applications. This article summarizes the evolutionary process from simple prompt engineering to chain orchestration, then to autonomous agents, and finally to multi‑agent systems, highlighting the strengths and weaknesses of each stage.

1. What Is an Agent?

In the LLM domain, an agent is an intelligent entity that can perceive, remember, plan, and use tools autonomously, making decisions without human intervention. Technically, "agent" refers to a set of techniques that extend a large model's capabilities, enabling it to learn, improve, and achieve goals in specific tasks.
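The four capabilities above can be sketched as a Go interface. This is purely illustrative — the interface, the `EchoAgent` type, and all method names are assumptions for this article, not part of any particular framework:

```go
package main

import "fmt"

// Agent captures the four capabilities described above: perceive,
// remember, plan, and act (e.g. by invoking a tool).
type Agent interface {
	Perceive(input string)     // take in new information
	Remember() []string        // recall accumulated memory
	Plan(goal string) []string // break a goal into steps
	Act(step string) string    // execute one step
}

// EchoAgent is a trivial Agent: it stores what it perceives and
// "plans" a single step per goal.
type EchoAgent struct{ memory []string }

func (a *EchoAgent) Perceive(input string)     { a.memory = append(a.memory, input) }
func (a *EchoAgent) Remember() []string        { return a.memory }
func (a *EchoAgent) Plan(goal string) []string { return []string{"answer: " + goal} }
func (a *EchoAgent) Act(step string) string    { return step }

func main() {
	var agent Agent = &EchoAgent{}
	agent.Perceive("user asked about the weather")
	for _, step := range agent.Plan("weather in Shenzhen") {
		fmt.Println(agent.Act(step)) // prints "answer: weather in Shenzhen"
	}
}
```

Real agents replace each trivial method with an LLM call, a memory store, a planner, and tool execution, but the shape of the loop stays the same.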

2. Why Did Agents Appear?

Agents emerged to improve the automation and intelligence of LLM applications. The automation capability can be divided into four levels:

Prompt stage – humans write prompts manually to obtain answers.

Chain orchestration stage – fixed pipelines combine LLMs with tools (e.g., RAG) to handle specific tasks.

Agent stage – agents plan and use tools automatically to achieve goals.

Multi‑Agent stage – multiple specialized agents cooperate, improving stability and intelligence.

3. Evolution Stages

3.1 Prompt Stage

This is the earliest stage, where users directly write prompts to activate LLM intelligence. Representative techniques include role‑playing prompts and function calling. While it reveals LLM capabilities, it is hard to combine with external systems and domain‑specific data.
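At this stage a role‑playing prompt is just string templating. A minimal sketch — the template text here is illustrative, and real prompts are usually longer and model‑specific:

```go
package main

import "fmt"

// rolePrompt is an illustrative role-playing template (an assumption
// for this example, not a standard format).
const rolePrompt = "You are a %s. Answer the user's question in that role.\nQuestion: %s"

// buildPrompt fills the role-playing template with a persona and a question.
func buildPrompt(role, question string) string {
	return fmt.Sprintf(rolePrompt, role, question)
}

func main() {
	fmt.Println(buildPrompt("senior Go engineer", "How do goroutines differ from OS threads?"))
}
```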

3.2 Chain Orchestration Stage

Fixed pipelines let LLMs interact with various tools, such as RAG (retrieval‑augmented generation). The approach offers stability and efficiency but limits the model’s flexibility because the workflow is predetermined.
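The fixed pipeline can be sketched as retrieve → assemble prompt → call model. In this sketch the retriever is a stand‑in (naive keyword matching rather than a real vector store), and the LLM call itself is omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// retrieve is a stand-in for a real retriever (e.g. a vector store):
// it returns documents sharing at least one word with the query.
func retrieve(query string, docs []string) []string {
	var hits []string
	for _, d := range docs {
		for _, w := range strings.Fields(strings.ToLower(query)) {
			if strings.Contains(strings.ToLower(d), w) {
				hits = append(hits, d)
				break
			}
		}
	}
	return hits
}

// buildRAGPrompt splices the retrieved context into a fixed template;
// this is the "predetermined workflow" the text describes.
func buildRAGPrompt(query string, context []string) string {
	return fmt.Sprintf("Answer using only this context:\n%s\nQuestion: %s",
		strings.Join(context, "\n"), query)
}

func main() {
	docs := []string{"Go was released in 2009.", "Rust focuses on memory safety."}
	ctx := retrieve("When was Go released?", docs)
	// The assembled prompt would then be sent to the LLM.
	fmt.Println(buildRAGPrompt("When was Go released?", ctx))
}
```

Note that the pipeline's steps and their order are fixed in code; the model never decides whether or when to retrieve — which is exactly the flexibility limitation the agent stage addresses.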

3.3 Agent Stage

Agents use a planner + executor architecture (e.g., ReAct) so the LLM can think, plan, and invoke tools autonomously. Open‑source projects like AutoGPT demonstrate this paradigm, turning LLMs from "+AI" to "AI+". Drawbacks include heavy model burden and risk of infinite loops.
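In ReAct the model emits text like `Thought: ... Action: tool[input]`, which the executor must parse before it can invoke a tool. A minimal parser sketch — the `Action: tool[input]` format is an assumption here, and real frameworks delimit actions differently:

```go
package main

import (
	"fmt"
	"regexp"
)

// actionRe matches lines like "Action: search[query]".
var actionRe = regexp.MustCompile(`Action:\s*(\w+)\[(.*)\]`)

// parseAction extracts the tool name and input from an LLM reply.
// ok is false when the reply contains no action, i.e. it is a final answer.
func parseAction(reply string) (tool, input string, ok bool) {
	m := actionRe.FindStringSubmatch(reply)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	tool, input, ok := parseAction("Thought: I should look this up.\nAction: search[Go release date]")
	fmt.Println(tool, input, ok) // search Go release date true
}
```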

3.4 Multi‑Agent Stage

Multi‑agent systems replace a single generalist agent with a group of specialized experts that cooperate. This applies the principle "let specialists handle specialized tasks" and improves both intelligence and stability. Interaction patterns include cooperative (ordered or unordered) and adversarial collaboration.

4. Practical Implementation

4.1 Implementing a Single Agent (Go example)

// BaseAgent is a basic agent: given a role persona and a set of callable
// tools, it thinks step by step to solve the target problem.
type BaseAgent struct {
    maxIterateTimes int         // upper bound on think/act iterations
    llm             *proxy.LLM  // LLM client used for reasoning
    rolePrompt      string      // persona / system prompt
    Tools           []Tool      // tools the agent may invoke
    steps           []AgentStep // history of actions and observations
}

func NewBaseAgent(rolePrompt string, tools []Tool, maxIterateTimes int, llm *proxy.LLM) *BaseAgent {
    if rolePrompt == "" {
        rolePrompt = planner // fall back to the default planner prompt
    }
    if maxIterateTimes == 0 {
        maxIterateTimes = len(tools) + 1
    }
    return &BaseAgent{maxIterateTimes: maxIterateTimes, llm: llm, rolePrompt: rolePrompt, Tools: tools}
}

// think runs the core loop: generate the prompt, call the LLM, parse the
// chosen action, execute the tool, and repeat until a final answer is
// produced or the iteration limit is reached.
func (agent *BaseAgent) think(ctx context.Context, query string) {
    // ...
}

// doAction invokes the tool selected by the LLM and records the observation.
func (agent *BaseAgent) doAction(ctx context.Context, action AgentAction) {
    // ...
}

// constructScratchPad builds the history of prior steps for the LLM context.
func (agent *BaseAgent) constructScratchPad() string {
    scratchPad := ""
    for _, step := range agent.steps {
        scratchPad += fmt.Sprintf("%v\n", step)
    }
    return scratchPad
}

The code demonstrates the planner‑executor loop, prompt formatting, action parsing, and tool execution.
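The skeleton above can be fleshed out into a runnable, heavily simplified loop. Everything here is a stand‑in: `mockLLM` replaces `proxy.LLM`, the `Tool` type is reduced to a name and a function, and the `Action:`/`Final Answer:` format is assumed for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// Tool is a callable capability; Name is how the LLM selects it.
type Tool struct {
	Name string
	Run  func(input string) string
}

// mockLLM stands in for proxy.LLM: it "decides" to call the calculator
// once, then emits a final answer after seeing an observation.
func mockLLM(prompt string) string {
	if !strings.Contains(prompt, "Observation:") {
		return "Action: calc[6*7]"
	}
	return "Final Answer: 42"
}

// run is a simplified planner-executor loop: call the LLM, either finish
// or execute the chosen tool, append the observation to the scratchpad,
// and iterate up to maxIter times (the infinite-loop guard).
func run(query string, tools map[string]Tool, maxIter int) string {
	scratchPad := ""
	for i := 0; i < maxIter; i++ {
		reply := mockLLM(query + "\n" + scratchPad)
		if ans, found := strings.CutPrefix(reply, "Final Answer: "); found {
			return ans
		}
		// parse "Action: name[input]"
		body := strings.TrimPrefix(reply, "Action: ")
		name, input, _ := strings.Cut(strings.TrimSuffix(body, "]"), "[")
		obs := tools[name].Run(input)
		scratchPad += reply + "\nObservation: " + obs + "\n"
	}
	return "max iterations reached"
}

func main() {
	tools := map[string]Tool{
		"calc": {Name: "calc", Run: func(in string) string { return "42" }},
	}
	fmt.Println(run("What is 6*7?", tools, 3)) // 42
}
```

The `maxIter` bound is what keeps the loop from running forever — the infinite-loop risk mentioned earlier is real, because nothing else forces the model to ever emit a final answer.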

4.2 Multi‑Agent Communication

A controller (often a state machine) decides which agent acts next, updates the shared environment, and routes messages. Interaction can be LLM‑driven or rule‑based. The diagram below shows a typical architecture:

Multi‑Agent Architecture Diagram
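A minimal rule‑based controller can be sketched as a state machine that routes a shared message through an ordered pipeline of agents. The agent names and their trivial string‑transform behavior here are illustrative only:

```go
package main

import "fmt"

// WorkerAgent is any participant that consumes the shared message
// and produces an updated one.
type WorkerAgent interface {
	Name() string
	Handle(msg string) string
}

type writer struct{}

func (writer) Name() string             { return "writer" }
func (writer) Handle(msg string) string { return msg + " -> draft" }

type reviewer struct{}

func (reviewer) Name() string             { return "reviewer" }
func (reviewer) Handle(msg string) string { return msg + " -> approved" }

// Controller is a rule-based state machine: it runs agents in a fixed
// order and keeps the shared environment (here just a string) updated.
type Controller struct{ agents []WorkerAgent }

func (c *Controller) Run(task string) string {
	env := task
	for _, a := range c.agents {
		env = a.Handle(env) // route the current state to the next agent
	}
	return env
}

func main() {
	c := &Controller{agents: []WorkerAgent{writer{}, reviewer{}}}
	fmt.Println(c.Run("spec")) // spec -> draft -> approved
}
```

An LLM‑driven controller would replace the fixed `for` loop with a model call that picks the next agent based on the current environment; the routing interface stays the same.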

5. Summary

The LLM application journey has moved from simple prompts to sophisticated agents, shifting from "+AI" to "AI+". While agents greatly increase automation and generality, challenges such as model burden and response latency remain. Multi‑agent designs offer a promising path to scale intelligence horizontally.
