Three Design Patterns for Multi‑Agent Permission Isolation: Assigning Dedicated Toolsets
The article explains three architectural patterns—static binding, dynamic injection, and tool‑level guards—for isolating tool permissions in production‑grade multi‑agent LLM systems, compares their trade‑offs, shows concrete code examples, and highlights common pitfalls and best‑practice recommendations.
Why Tool Isolation Matters
In a production multi‑agent system there are typically three roles: ResearchAgent (read‑only), WriterAgent (write‑only) and AdminAgent (read + write + delete). Sharing a single tools array gives every agent the full set of capabilities (effectively root database access), so a hallucinating LLM or a prompt injection can trigger destructive actions.
Tool isolation follows RBAC: the role determines the toolset, the toolset defines the boundary.
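That role‑to‑toolset mapping can be sketched as a plain lookup, independent of any framework (the role and tool names here are illustrative, not part of any library):

```typescript
// Illustrative RBAC lookup: the role is the single source of truth for the toolset.
type Role = "research" | "writer" | "admin";

const TOOLSETS: Record<Role, string[]> = {
  research: ["web_search", "read_document"],
  writer: ["save_article"],
  admin: ["web_search", "read_document", "save_article", "delete_document"],
};

// An agent may only call tools inside its role's boundary.
function canUse(role: Role, toolName: string): boolean {
  return TOOLSETS[role].includes(toolName);
}

console.log(canUse("research", "delete_document")); // false
console.log(canUse("admin", "delete_document"));    // true
```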
Pattern 1 – Static Binding (createReactAgent)
Assign a dedicated array of tools when the agent is created.
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const llm = new ChatOpenAI({ model: "gpt-4o" });
const readTools = [
tool(async ({ query }) => `Search results for ${query}...`, {
name: "web_search",
description: "Search public web information",
schema: z.object({ query: z.string().describe("search keyword") })
}),
tool(async ({ docId }) => `Document content: ${docId}`, {
name: "read_document",
description: "Read a document from the DB",
schema: z.object({ docId: z.string().describe("document ID") })
})
];
const writeTools = [
tool(async ({ title, content }) => `Saved article: ${title}`, {
name: "save_article",
description: "Save generated article to DB",
schema: z.object({
title: z.string().describe("article title"),
content: z.string().describe("article content")
})
})
];
const adminTools = [
...readTools,
...writeTools,
tool(async ({ docId }) => `Deleted document: ${docId}`, {
name: "delete_document",
description: "Permanently delete a document (admin only)",
schema: z.object({ docId: z.string().describe("document ID to delete") })
})
];
const researchAgent = createReactAgent({ llm, tools: [...readTools] });
const writerAgent = createReactAgent({ llm, tools: [...writeTools] });
const adminAgent = createReactAgent({ llm, tools: [...adminTools] });

Advantages: simple, explicit mapping of role to capabilities.
Limitation: the toolset is fixed at creation time and cannot adapt to runtime state.
Pattern 2 – Dynamic Injection (State‑Driven)
Tools are assembled at each node execution based on a permissionLevel stored in the graph state.
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";
const llm = new ChatOpenAI({ model: "gpt-4o" });
const AgentState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (curr, upd) => [...curr, ...upd],
default: () => []
}),
permissionLevel: Annotation<"read-only" | "read-write" | "admin">({
reducer: (_, upd) => upd,
default: () => "read-only"
})
});
// webSearchTool, readDocumentTool, saveArticleTool and deleteDocumentTool are the tool() definitions from Pattern 1.
function getToolsForPermission(level: "read-only" | "read-write" | "admin") {
const baseTools = [webSearchTool, readDocumentTool];
if (level === "read-only") return baseTools;
if (level === "read-write") return [...baseTools, saveArticleTool];
return [...baseTools, saveArticleTool, deleteDocumentTool];
}
async function dynamicAgentNode(state: typeof AgentState.State) {
const tools = getToolsForPermission(state.permissionLevel);
const llmWithTools = llm.bindTools(tools); // re-binding per step is cheap; no new model instance is created
const response = await llmWithTools.invoke(state.messages);
return { messages: [response] };
}
const graph = new StateGraph(AgentState)
.addNode("agent", dynamicAgentNode)
.addEdge("__start__", "agent")
.compile();
const result = await graph.invoke({
messages: [new HumanMessage("Help me write and save an article.")],
permissionLevel: "read-write"
});Advantages: flexible, supports scenarios where permissions change (e.g., free vs. paid stages).
Limitation: no built‑in audit capability.
Pattern 3 – Tool Guard (Internal Authorization)
Each tool checks the caller’s permissions before executing.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { RunnableConfig } from "@langchain/core/runnables";
interface AgentContext { agentId: string; permissions: string[]; userId: string; }
const guardedDeleteTool = tool(async ({ docId }, config: RunnableConfig) => {
const ctx = config?.configurable?.agentContext as AgentContext | undefined;
if (!ctx) throw new Error("Missing Agent context for permission check");
if (!ctx.permissions.includes("delete")) {
throw new Error(`Agent "${ctx.agentId}" lacks delete permission. Current: ${ctx.permissions.join(", ")}`);
}
console.log(`[AUDIT] ${ctx.userId} via ${ctx.agentId} → delete ${docId}`);
return `Document ${docId} has been deleted`;
}, {
name: "delete_document",
description: "Permanently delete a document (requires delete permission)",
schema: z.object({ docId: z.string() })
});
// Invoking a graph that binds guardedDeleteTool: "writer-agent" lacks the
// "delete" permission, so the tool throws and the deletion is blocked.
const result = await graph.invoke(
{ messages: [new HumanMessage("Delete document doc-123")] },
{ configurable: { agentContext: { agentId: "writer-agent", permissions: ["read", "write"], userId: "user-456" } } }
);

Advantages: provides audit logs and fine‑grained control; works well with shared tool libraries.
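The authorization step inside guardedDeleteTool can be factored into a small pure helper (assertPermission is a hypothetical name, not a LangChain API), which keeps the check unit‑testable without any LLM call:

```typescript
interface AgentContext { agentId: string; permissions: string[]; userId: string; }

// Hypothetical helper: throws unless the context grants the required permission,
// otherwise returns the context so callers can chain audit logging onto it.
function assertPermission(ctx: AgentContext | undefined, required: string): AgentContext {
  if (!ctx) throw new Error("Missing agent context for permission check");
  if (!ctx.permissions.includes(required)) {
    throw new Error(`Agent "${ctx.agentId}" lacks ${required} permission. Current: ${ctx.permissions.join(", ")}`);
  }
  return ctx;
}

const writerCtx: AgentContext = { agentId: "writer-agent", permissions: ["read", "write"], userId: "user-456" };
console.log(assertPermission(writerCtx, "write").agentId); // writer-agent
// assertPermission(writerCtx, "delete") throws: writer-agent lacks delete permission
```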
Comparison of the Three Patterns
Implementation complexity: static binding – simple; dynamic injection – medium; tool guard – high.
Isolation timing: static binding – at agent creation; dynamic injection – at each graph step; tool guard – at every tool call.
Typical scenario: static binding – fixed roles; dynamic injection – permissions that change with state; tool guard – shared tool libraries with audit needs.
Audit capability: only the tool guard provides it.
Performance overhead: static binding – none at runtime; dynamic injection – a tool‑list rebuild and re‑bind on every step; tool guard – a single permission check per tool call.
Testability: all three are highly testable; dynamic injection benefits most from a pure‑function tool factory.
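Because the tool factory in Pattern 2 is a pure function of the permission level, it can be unit‑tested without a graph or a model; a sketch with string stand‑ins for the tool objects:

```typescript
type Level = "read-only" | "read-write" | "admin";

// String stand-ins so the factory logic can be tested without real tool objects.
const webSearchTool = "web_search";
const readDocumentTool = "read_document";
const saveArticleTool = "save_article";
const deleteDocumentTool = "delete_document";

function getToolsForPermission(level: Level): string[] {
  const baseTools = [webSearchTool, readDocumentTool];
  if (level === "read-only") return baseTools;
  if (level === "read-write") return [...baseTools, saveArticleTool];
  return [...baseTools, saveArticleTool, deleteDocumentTool];
}

// Each level strictly widens the previous one.
console.log(getToolsForPermission("read-only").length); // 2
console.log(getToolsForPermission("admin").length);     // 4
```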
Selection Guidance
~90 % of cases: use static binding – clear roles, minimal code.
When permissions must evolve (e.g., free vs. paid features): adopt dynamic injection.
When audit logs are required (finance, healthcare, compliance): employ tool guards.
Combine static binding as the first line of defense with tool guards as the second to get defense in depth.
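A minimal sketch of that two‑layer idea: a hypothetical withGuard wrapper decorates a tool's executor with a call‑time permission check, so even a tool mistakenly added to an agent's static toolset still refuses to run:

```typescript
// Hypothetical second line of defense: wrap an executor so it re-checks
// permissions at call time, regardless of which agent it was bound to.
function withGuard<A, R>(required: string, run: (args: A) => R) {
  return (args: A, permissions: string[]): R => {
    if (!permissions.includes(required)) {
      throw new Error(`Blocked: "${required}" permission required`);
    }
    return run(args);
  };
}

const deleteDoc = withGuard("delete", (args: { docId: string }) => `deleted ${args.docId}`);

console.log(deleteDoc({ docId: "doc-1" }, ["read", "delete"])); // deleted doc-1
// deleteDoc({ docId: "doc-1" }, ["read"]) throws: Blocked: "delete" permission required
```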
Full‑Stack Example – Content‑Production Multi‑Agent System
The three patterns are combined to build a realistic workflow: ResearchAgent → WriterAgent → ReviewAgent.
import { StateGraph, Annotation, END, START } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage, BaseMessage } from "@langchain/core/messages";
const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
// The tool definitions (webSearchTool, readDocTool, saveDraftTool, publishArticleTool) follow Pattern 1.
const researchAgent = createReactAgent({ llm, tools: [webSearchTool, readDocTool] });
const writerAgent = createReactAgent({ llm, tools: [webSearchTool, readDocTool, saveDraftTool] });
const reviewAgent = createReactAgent({ llm, tools: [readDocTool, saveDraftTool, publishArticleTool] });
const ContentState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (c, u) => [...c, ...u],
default: () => []
}),
currentStep: Annotation<"research" | "write" | "review" | "done">({
reducer: (_, u) => u,
default: () => "research"
}),
researchResult: Annotation<string>({ reducer: (_, u) => u, default: () => "" }),
draftId: Annotation<string>({ reducer: (_, u) => u, default: () => "" })
});
async function researchNode(state: typeof ContentState.State) {
const result = await researchAgent.invoke({ messages: state.messages });
const lastMsg = result.messages.at(-1) as AIMessage;
return { messages: result.messages, researchResult: lastMsg.content as string, currentStep: "write" };
}
async function writerNode(state: typeof ContentState.State) {
const msgs = [...state.messages, new HumanMessage(`Write an article based on research result:
${state.researchResult}`)];
const result = await writerAgent.invoke({ messages: msgs });
const lastMsg = result.messages.at(-1) as AIMessage;
const draftMatch = (lastMsg.content as string).match(/draft-\d+/);
return { messages: result.messages, draftId: draftMatch?.[0] ?? "", currentStep: "review" };
}
async function reviewNode(state: typeof ContentState.State) {
const msgs = [...state.messages, new HumanMessage(`Review draft ${state.draftId} and publish if correct.`)];
const result = await reviewAgent.invoke({ messages: msgs });
return { messages: result.messages, currentStep: "done" };
}
const contentGraph = new StateGraph(ContentState)
.addNode("research", researchNode)
.addNode("write", writerNode)
.addNode("review", reviewNode)
.addEdge(START, "research")
.addEdge("research", "write")
.addEdge("write", "review")
.addEdge("review", END)
.compile();
const output = await contentGraph.invoke({
messages: [new HumanMessage("Write a technical article about LangGraph permission isolation")]
});

The data flow is: Research (read‑only) → Writer (read‑write draft) → Review (publish). Each agent stays within its designated boundary.
Common Pitfalls
Pitfall 1 – Shared tools array reference : Using a single array and passing it to multiple agents creates a shared mutable reference. Fix by spreading the array (e.g., tools: [...readTools]) so each agent gets its own copy.
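The difference is easy to demonstrate with plain arrays, no framework required:

```typescript
const readTools = ["web_search", "read_document"];

// BAD: both "agents" hold the same array object, so a mutation leaks across.
const sharedA = readTools;
const sharedB = readTools;
sharedA.push("delete_document");
console.log(sharedB.includes("delete_document")); // true: isolation broken

// GOOD: spreading gives each agent an independent copy.
const copyA = [...readTools];
const copyB = [...readTools];
copyA.push("save_article");
console.log(copyB.includes("save_article")); // false: boundary intact
```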
Pitfall 2 – Overly generic tool descriptions : Vague descriptions cause the LLM to misuse tools. Provide precise “action + object + consequence” descriptions such as
{ name: "delete_document", description: "Permanently delete a document (admin only)" }.
Pitfall 3 – Missing ToolNode when manually binding tools : If bindTools is used without adding a ToolNode to the graph, the LLM will generate tool‑call requests but never execute them.
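The fix is to wire a ToolNode plus a conditional edge. The routing predicate itself is a pure function and can be sketched (and tested) with a structural stand‑in for the tool_calls field of LangChain's AIMessage:

```typescript
// Structural stand-in for the relevant part of LangChain's AIMessage.
interface AiLike { tool_calls?: { name: string }[]; }

// If the last model message requested tool calls, route to the "tools" node
// (a ToolNode in the real graph); otherwise the run can end.
function routeAfterLlm(messages: AiLike[]): "tools" | "__end__" {
  const last = messages[messages.length - 1];
  return last?.tool_calls && last.tool_calls.length > 0 ? "tools" : "__end__";
}

console.log(routeAfterLlm([{ tool_calls: [{ name: "web_search" }] }])); // tools
console.log(routeAfterLlm([{}])); // __end__
```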
Conclusion
Static binding is the foundation and covers the majority of scenarios.
Dynamic injection adds flexibility for state‑driven permission changes.
Tool guards provide the final defense and audit logs for high‑risk environments.
Clear tool descriptions act as an invisible safety net.
Combining static binding with tool guards yields a two‑layer isolation strategy.
Watch out for the three high‑frequency pitfalls: shared array reference, vague descriptions, and omitted ToolNode.
James' Growth Diary
I am James, focusing on AI Agent learning and growth. I continuously update two series: “AI Agent Mastery Path,” which systematically outlines the core theories and practices of agents, and “Claude Code Design Philosophy,” which deeply analyzes the design thinking behind top AI tools. My aim is to help you build a solid foundation in the AI era.