How to Build a Code Review Agent from Scratch Using Claude Agent SDK (Part 1)

This tutorial walks through creating a full‑featured code‑review Agent with Claude Agent SDK, covering installation, TypeScript setup, the SDK‑managed agent loop, structured JSON output, permission handling, sub‑agents, session management, hooks, custom MCP tools, cost tracking, and a production‑grade example.

AI Tech Publishing

In this first article of the "Deep Hand‑crafted Agent" series, the author introduces the goal of building a code‑review Agent from the ground up using Claude Agent SDK, a library that wraps Claude Code as the runtime and exposes common agent capabilities such as the agent loop, built‑in tools, and context management.

The target Agent can analyse a codebase; detect bugs, security issues, and performance problems; and provide structured feedback, while also tracking progress and cost.

Technical stack: Claude Code CLI (runtime), @anthropic‑ai/claude‑agent‑sdk (SDK), TypeScript, and Claude Opus 4.5 (or compatible MiniMax‑M2.5 / Kimi‑K2.5 models).

The SDK eliminates the tedious manual loop required when using a raw LLM API. The article contrasts the verbose manual loop with the concise SDK call:

// Manual loop (simplified)
let response = await client.messages.create(...);
while (response.stop_reason === "tool_use") {
  const result = await yourToolExecutor(response.tool_use);
  response = await client.messages.create({tool_result: result, ...});
}

// SDK version
for await (const message of query({
  prompt: "Fix the bug in auth.py",
  options: { model: "opus", allowedTools: ["Read", "Glob"] },
})) {
  console.log(message);
}

The SDK also ships with a complete toolbox (Read, Write, Edit, Bash, Glob, Grep, WebSearch, WebFetch) that can be used without any additional implementation.

Installation & setup:

npm install -g @anthropic-ai/claude-code
claude   # run the CLI and authenticate
mkdir code-review-agent && cd code-review-agent
npm init -y
npm install @anthropic-ai/claude-agent-sdk
npm install -D typescript @types/node tsx

First Agent (agent.ts) demonstrates a simple query that lists files in the current directory:

import { query } from "@anthropic-ai/claude-agent-sdk";
async function main() {
  for await (const message of query({
    prompt: "What files are in this directory?",
    options: { model: "opus", allowedTools: ["Glob"], maxTurns: 50 },
  })) {
    // Print only the assistant's text blocks, skipping tool-use messages.
    if (message.type === "assistant") {
      for (const block of message.message.content) {
        if ("text" in block) console.log(block.text);
      }
    }
  }
}

main().catch(console.error);

Running npx tsx agent.ts shows Claude using the Glob tool to list files.

To test the Agent, the article creates an example.ts file with intentional bugs (an off‑by‑one loop, missing null checks, logging of sensitive data) and runs the review, confirming that Claude identifies each issue and suggests fixes.
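The article does not reproduce the fixture in full; a hypothetical example.ts along these lines exercises all three issue categories (function and field names are invented for illustration):

```typescript
// example.ts — hypothetical buggy fixture for the review Agent to analyse.

// BUG (off-by-one): `i <= prices.length` reads one element past the end,
// so the final addition is `total + undefined`, yielding NaN.
function sumPrices(prices: number[]): number {
  let total = 0;
  for (let i = 0; i <= prices.length; i++) {
    total += prices[i];
  }
  return total;
}

// BUG (missing null check): throws at runtime when `email` is undefined.
function getUserEmail(user: { email?: string }): string {
  return user.email!.toLowerCase();
}

// BUG (sensitive data): writes the raw password to the log.
function login(user: string, password: string): void {
  console.log(`login attempt: user=${user} password=${password}`);
}
```

A reviewer model should flag each comment-marked line even with the comments removed.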

Structured JSON output is enabled by supplying a JSON‑Schema to the SDK, allowing the final result to be parsed programmatically:

const reviewSchema = {
  type: "object",
  properties: {
    issues: {
      type: "array",
      items: {
        type: "object",
        properties: {
          severity: { type: "string", enum: ["low", "medium", "high", "critical"] },
          category: { type: "string", enum: ["bug", "security", "performance", "style"] },
          file: { type: "string" },
          line: { type: "number" },
          description: { type: "string" },
          suggestion: { type: "string" },
        },
        required: ["severity", "category", "file", "description"],
      },
    },
    summary: { type: "string" },
    overallScore: { type: "number" },
  },
  required: ["issues", "summary", "overallScore"],
};

for await (const message of query({
  prompt: "Review the code …",
  options: {
    model: "opus",
    allowedTools: ["Read", "Glob", "Grep"],
    maxTurns: 50,
    outputFormat: { type: "json_schema", schema: reviewSchema },
  },
})) {
  // … handle messages; the final result conforms to reviewSchema
}

Permission handling can be set to "default" (prompt for approval), "acceptEdits" (auto‑approve file edits), or "bypassPermissions" (no prompts). For fine‑grained control the canUseTool callback can allow or deny specific tool calls, e.g., blocking rm -rf or sudo commands.
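A minimal sketch of such a guard, assuming the callback receives the tool name and its input and returns an allow/deny result (the return shape follows the SDK's documented canUseTool callback; the blocklist patterns are illustrative):

```typescript
// Hypothetical guard to pass as `options.canUseTool`.
const BLOCKED_PATTERNS = [/rm\s+-rf/, /\bsudo\b/];

async function guardTool(
  toolName: string,
  input: Record<string, unknown>
): Promise<
  | { behavior: "allow"; updatedInput: Record<string, unknown> }
  | { behavior: "deny"; message: string }
> {
  if (toolName === "Bash") {
    const command = String(input.command ?? "");
    // Deny any Bash invocation matching a blocked pattern.
    if (BLOCKED_PATTERNS.some((p) => p.test(command))) {
      return { behavior: "deny", message: `Blocked dangerous command: ${command}` };
    }
  }
  return { behavior: "allow", updatedInput: input };
}
```

It would be wired in as `options: { …, canUseTool: guardTool }`.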

Sub‑agents are defined via the agents option, enabling delegation to specialised agents such as a security‑reviewer (model "sonnet") or a test‑analyzer (model "haiku") for focused analyses.
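The agents option might look like the following sketch; the field names follow the SDK's agent-definition shape, while the prompts and tool lists are assumptions based on the article's description:

```typescript
// Hypothetical `agents` option: each entry is a sub-agent the main
// agent can delegate focused work to, on a cheaper or faster model.
const agents = {
  "security-reviewer": {
    description: "Reviews code for security vulnerabilities",
    prompt: "You are a security specialist. Look for injection, auth flaws, and leaked secrets.",
    tools: ["Read", "Grep", "Glob"],
    model: "sonnet",
  },
  "test-analyzer": {
    description: "Checks test coverage and quality",
    prompt: "You are a testing specialist. Assess coverage gaps and brittle tests.",
    tools: ["Read", "Glob"],
    model: "haiku",
  },
};
```

It is then passed through as `options: { agents, … }`.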

Session management captures the session_id from the initial system message, allowing later queries to resume the same conversation with the resume option.
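A sketch of capturing that id from the stream; the message shape (`type: "system"`, `subtype: "init"`, `session_id`) follows the SDK's init message, and the helper name is invented:

```typescript
// Hypothetical helper: pull the session id out of the initial system
// message so a later query can pass it back via `options.resume`.
interface StreamMessage {
  type: string;
  subtype?: string;
  session_id?: string;
}

function extractSessionId(message: StreamMessage): string | undefined {
  if (message.type === "system" && message.subtype === "init") {
    return message.session_id;
  }
  return undefined;
}
```

Inside the message loop: `sessionId ??= extractSessionId(message);`, then later `query({ prompt, options: { resume: sessionId } })` to continue the same conversation.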

Hooks let developers intercept tool usage. The example defines an auditLogger that logs every tool invocation and a blockDangerousCommands hook that denies Bash commands containing rm -rf or sudo.
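The core of both hooks can be sketched as plain functions; the wiring into the `hooks` option happens via the SDK's PreToolUse event, and these names are invented:

```typescript
// Hypothetical hook bodies. `auditLog` records every tool invocation;
// `isDangerousBashCommand` is the predicate the blocking hook applies.
const auditTrail: Array<{ tool: string; at: string }> = [];

function auditLog(toolName: string): void {
  auditTrail.push({ tool: toolName, at: new Date().toISOString() });
}

function isDangerousBashCommand(command: string): boolean {
  return /rm\s+-rf/.test(command) || /\bsudo\b/.test(command);
}
```

The blocking hook would return a deny decision whenever `isDangerousBashCommand` matches the Bash tool's command input.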

Custom tools via MCP are created with createSdkMcpServer and tool, exposing a new tool analyze_complexity that returns a random cyclomatic complexity value. The Agent can call this tool using the name mcp__code-metrics__analyze_complexity.
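The handler at the heart of that tool boils down to something like this sketch; the random behavior is from the article, while the 1–20 range and return fields are assumptions. Wrapping it with `tool()` inside `createSdkMcpServer({ name: "code-metrics", … })` is what yields the `mcp__code-metrics__analyze_complexity` name:

```typescript
// Hypothetical handler body for the analyze_complexity MCP tool:
// returns a random cyclomatic complexity between 1 and 20 for a file.
function analyzeComplexity(filePath: string): {
  file: string;
  cyclomaticComplexity: number;
} {
  return {
    file: filePath,
    cyclomaticComplexity: Math.floor(Math.random() * 20) + 1,
  };
}
```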

Cost tracking is demonstrated by logging message.total_cost_usd, token usage, and per‑model breakdown after a successful run.
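A sketch of that logging step, assuming the final result message carries `total_cost_usd`, a `usage` token block, and a per-model map (field names per the SDK's result message; the formatter itself is invented):

```typescript
// Hypothetical cost summary built from a result message.
interface ResultMessage {
  total_cost_usd: number;
  usage: { input_tokens: number; output_tokens: number };
  modelUsage?: Record<string, { costUSD: number }>;
}

function formatCostSummary(result: ResultMessage): string {
  const lines = [
    `Total cost: $${result.total_cost_usd.toFixed(4)}`,
    `Tokens: ${result.usage.input_tokens} in / ${result.usage.output_tokens} out`,
  ];
  // Append one line per model when a per-model breakdown is present.
  for (const [model, u] of Object.entries(result.modelUsage ?? {})) {
    lines.push(`  ${model}: $${u.costUSD.toFixed(4)}`);
  }
  return lines.join("\n");
}
```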

Full production‑grade example combines all the pieces into a runCodeReview function that performs a thorough review, outputs a JSON‑schema result, prints a coloured summary, and handles errors. The main entry point runs the review on a directory supplied via CLI.

Limitations & next steps – the SDK currently lacks distributed runtime support, which is required for SaaS‑style deployments. The author promises a future article on building such a runtime.

All code snippets are provided verbatim, and the article includes reference links to the Claude Agent SDK documentation, the GitHub repository, and related resources.

Tags: TypeScript, code review, hooks, AI Agent, structured output, custom tools, Claude Agent SDK