How to Build an AI-Powered Code Review System with Node.js and LLMs

This article walks through designing and implementing an AI code review tool using Node.js, GitLab webhooks, and large language models, covering prompt engineering, diff augmentation, token management, response parsing, and automated comment posting to streamline the review process.

Rare Earth Juejin Tech Community

1. Introduction

Following a previous guide on AI for front‑end developers, this tutorial focuses on building a practical AI Review system with Node.js and a large language model (LLM). Readers will learn how to design the application, craft prompts, and let the AI analyze code diffs.

2. What You Will Gain

Design thinking for AI applications

Prompt engineering techniques

Using Node.js together with an LLM to analyze code

3. Background

Code review is often neglected when teams are short‑handed or under tight deadlines. The article proposes delegating this repetitive, detail‑oriented task to an “AI employee”.

4. Overall Effect

In the author’s team, AI‑driven review is already used across 20+ front‑end and back‑end projects, catching subtle security, performance, and logic issues that humans sometimes miss.

4.1 Comment Mode

The AI adds comments directly under problematic code lines, indicating the issue type and reason.

4.2 Report Mode

The AI generates a review report listing all issues, their locations, and detailed explanations.

5. Design Analysis

The core challenges are:

Node service detecting GitLab MR events

Fetching diff for each changed file

Crafting prompts that guide the LLM to produce structured output

Parsing the LLM response and handling errors

Posting comments back to GitLab

Pushing status updates to enterprise messaging platforms

These challenges are addressed step by step.

5.1 Create Project

Initialize a NestJS project (any Node framework works). Example commands:

nvm use 20
nest new mr-agent

5.2 Implement Webhook Interface

Configure a GitLab webhook to POST MR events to http://example.com/webhook/trigger. The controller parses the request body and headers.

Request Body

object_type/object_kind : event type (e.g., merge, push)

project : repository information

object_attributes : MR details such as source/target branches

user : submitter info

Custom Headers

x-ai-mode : comment or report mode

x-push-url : URL for pushing status to enterprise messengers

x-gitlab-token : access token for GitLab API calls
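The payload fields and custom headers above can be normalized into a single typed object before any further processing. A minimal sketch; the `MrEvent` shape and function name are invented here for illustration, while the field and header names follow the list above:

```typescript
// Normalized view of one webhook delivery; field names mirror GitLab's
// merge request event payload and the custom headers described above.
interface MrEvent {
  eventType: string;           // object_kind, e.g. "merge_request"
  projectId: number;
  sourceBranch: string;
  targetBranch: string;
  author: string;
  mode: 'comment' | 'report';  // from the x-ai-mode header
  pushUrl?: string;            // from x-push-url, optional
  gitlabToken: string;         // from x-gitlab-token
}

function extractMrEvent(body: any, headers: Record<string, string>): MrEvent {
  const attrs = body.object_attributes ?? {};
  return {
    eventType: body.object_kind ?? body.object_type,
    projectId: body.project?.id,
    sourceBranch: attrs.source_branch,
    targetBranch: attrs.target_branch,
    author: body.user?.name,
    mode: headers['x-ai-mode'] === 'report' ? 'report' : 'comment',
    pushUrl: headers['x-push-url'],
    gitlabToken: headers['x-gitlab-token'],
  };
}
```

In a NestJS controller this function would be called from the `@Post('trigger')` handler with `@Body()` and `@Headers()` arguments.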

5.3 Get Diff Content

Use GitLab’s API to retrieve the full diff for each file, then filter out non‑code files (e.g., package.json).
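The filtering step can be a small predicate over file paths. The skip list and extension list below are only an example of the idea; which files count as "non-code" is a per-team decision:

```typescript
// Files that rarely benefit from AI review (lockfiles, manifests) -- this
// particular list is illustrative, not from the original article.
const SKIPPED_FILES = new Set([
  'package.json',
  'package-lock.json',
  'pnpm-lock.yaml',
  'yarn.lock',
]);

// Extensions the reviewer should look at; extend to match your stack.
const REVIEWED_EXTENSIONS = new Set(['.ts', '.tsx', '.js', '.jsx', '.vue', '.css', '.scss']);

function isReviewableFile(path: string): boolean {
  const name = path.split('/').pop() ?? path;
  if (SKIPPED_FILES.has(name)) return false;
  const dot = name.lastIndexOf('.');
  if (dot < 0) return false; // no extension: skip (e.g. LICENSE)
  return REVIEWED_EXTENSIONS.has(name.slice(dot));
}
```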

5.4 Prompt Design

Define a system prompt that tells the LLM to act as a “MR Review expert”. The prompt includes role definition, input format, output schema (YAML), and examples.

You are a code MR Review expert. Your task is to review the code submitted in a Git Merge Request; where the code has problems, you must provide valuable, constructive suggestions.

Output schema (simplified):

interface Review {
  newPath: string;
  oldPath: string;
  type: 'old' | 'new';
  startLine: number;
  endLine: number;
  issueHeader: string;
  issueContent: string;
}

interface MRReview {
  reviews: Review[];
}
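An illustrative response conforming to this schema; the file, line numbers, and issue text are made up for the example:

```yaml
reviews:
  - newPath: src/agent/agent.service.ts
    oldPath: src/agent/agent.service.ts
    type: new
    startLine: 8
    endLine: 8
    issueHeader: "Overly loose type"
    issueContent: "Record<string, any> defeats type checking; consider a concrete interface for the input."
```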

5.5 Extend and Assemble Diff

Augment the raw diff with file paths and line numbers, e.g.:

## new_path: src/agent/agent.service.ts
## old_path: src/agent/agent.service.ts
@@ -1,16 +1,13 @@
(1,1) import { Injectable } from '@nestjs/common';
( ,8) +type InputProps = Record<string, any>;

Processing includes splitting hunks, calculating old/new line counters, and re‑assembling the diff into a single string for the LLM.
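The line-counter logic above can be sketched as a pure function over the raw diff text. This is a simplified reconstruction, not the author's exact implementation; the precise spacing of the `(old,new)` markers may differ:

```typescript
// Annotate each diff line with "(oldLine,newLine)" so the LLM can report
// exact positions. Deleted lines have no new-file number, added lines no
// old-file number; context lines advance both counters.
function annotateDiff(diff: string): string {
  const out: string[] = [];
  let oldLine = 0;
  let newLine = 0;
  for (const line of diff.split('\n')) {
    const hunk = line.match(/^@@ -(\d+)(?:,\d+)? \+(\d+)(?:,\d+)? @@/);
    if (hunk) {
      // Hunk header: reset both counters to the header's start lines.
      oldLine = parseInt(hunk[1], 10);
      newLine = parseInt(hunk[2], 10);
      out.push(line);
    } else if (line.startsWith('-')) {
      out.push(`(${oldLine++}, )${line}`);
    } else if (line.startsWith('+')) {
      out.push(`( ,${newLine++})${line}`);
    } else {
      out.push(`(${oldLine++},${newLine++})${line}`);
    }
  }
  return out.join('\n');
}
```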

5.6 Connect to LLM

The demo uses DeepSeek‑v3 (alternatives: GPT‑4.1, Claude). API call parameters include model, messages (system prompt + diff), and a low temperature (0.2) for deterministic output.
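A sketch of the call, assuming DeepSeek's OpenAI-compatible chat-completions endpoint; the URL, model name, and helper names are assumptions to swap for your provider's values:

```typescript
// Assumed endpoint for DeepSeek's OpenAI-compatible API.
const DEEPSEEK_URL = 'https://api.deepseek.com/chat/completions';

// Assemble the payload: system prompt + annotated diff, low temperature
// so the YAML output format stays stable between runs.
function buildReviewRequest(systemPrompt: string, annotatedDiff: string) {
  return {
    model: 'deepseek-chat',
    temperature: 0.2,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: annotatedDiff },
    ],
  };
}

// Sending the request is a plain POST (error handling omitted for brevity).
async function requestReview(apiKey: string, systemPrompt: string, diff: string) {
  const res = await fetch(DEEPSEEK_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildReviewRequest(systemPrompt, diff)),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```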

5.7 Data Parsing and Exception Handling

Parse the YAML response with js-yaml. Handle common LLM quirks such as stray newline characters in fields or mis‑aligned indentation that break YAML parsing.
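One common quirk worth guarding against (an assumption about typical model behaviour, beyond the quirks the article names) is the model wrapping its YAML in a markdown code fence. A small pre-parse cleanup, after which the result goes to `yaml.load` inside a try/catch:

```typescript
// Strip a surrounding markdown code fence, if present, before handing the
// text to js-yaml; return the raw text unchanged otherwise.
function extractYaml(raw: string): string {
  const fenced = raw.match(/```(?:yaml|yml)?\s*\n([\s\S]*?)```/);
  const text = fenced ? fenced[1] : raw;
  return text.trim();
}
```

Parsing failures from stray newlines or bad indentation can then be handled by catching `YAMLException` and either repairing the text or re-prompting the model.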

5.8 Context Splitting

When the combined token count exceeds the model’s limit, split the diff into multiple chunks, compute token usage with @dqbd/tiktoken, and call the LLM in parallel.

import { encoding_for_model } from '@dqbd/tiktoken';

// Count the tokens a piece of text will consume for the chosen model.
const encoding = encoding_for_model(this.modelName);
const tokens = encoding.encode(text);
const count = tokens.length;
encoding.free(); // release the WASM-backed encoder when done
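The chunking itself can be a greedy pass over the per-file diffs. A sketch; `countTokens` is injected so the tiktoken counting above can be reused, and the function name is invented here:

```typescript
// Greedily pack per-file diffs into chunks that stay under the model's
// token budget; a single oversized file still gets its own chunk.
function splitIntoChunks(
  fileDiffs: string[],
  maxTokens: number,
  countTokens: (text: string) => number,
): string[][] {
  const chunks: string[][] = [];
  let current: string[] = [];
  let used = 0;
  for (const diff of fileDiffs) {
    const cost = countTokens(diff);
    if (current.length > 0 && used + cost > maxTokens) {
      chunks.push(current); // budget exceeded: start a new chunk
      current = [];
      used = 0;
    }
    current.push(diff);
    used += cost;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Each chunk is then sent to the LLM concurrently, e.g. with `Promise.all(chunks.map(...))`, and the resulting review lists are merged.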

5.9 Send Results

After obtaining structured review data, call GitLab’s comment API to post inline comments or a summary report.
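For comment mode, each `Review` entry maps onto GitLab's "create new merge request thread" endpoint, which can pin a comment to a diff line via a `position` object. A sketch of building that request; the host is a placeholder, and the `sha` values come from the MR's `diff_refs`:

```typescript
// The three shas GitLab requires to anchor an inline diff comment;
// they are returned in the merge request's diff_refs field.
interface DiffRefs {
  base_sha: string;
  start_sha: string;
  head_sha: string;
}

// Build the URL and JSON body for
// POST /projects/:id/merge_requests/:iid/discussions.
function buildInlineCommentRequest(
  gitlabHost: string,
  projectId: number,
  mrIid: number,
  refs: DiffRefs,
  review: { newPath: string; startLine: number; issueHeader: string; issueContent: string },
) {
  const url = `${gitlabHost}/api/v4/projects/${projectId}/merge_requests/${mrIid}/discussions`;
  const body = {
    body: `**${review.issueHeader}**\n\n${review.issueContent}`,
    position: {
      position_type: 'text',
      base_sha: refs.base_sha,
      start_sha: refs.start_sha,
      head_sha: refs.head_sha,
      new_path: review.newPath,
      new_line: review.startLine,
    },
  };
  return { url, body };
}
```

The request is sent with the `x-gitlab-token` from the webhook as the `PRIVATE-TOKEN` header; report mode instead concatenates all reviews into one MR note.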

6. Conclusion

6.1 Expectation

The author encourages readers to try the AI Review system in their own projects and share feedback.

6.2 Learning Method

If any part is unclear, ask an AI assistant for clarification; using AI to learn AI is recommended.

AI programming news, AI Coding zone guide: https://aicoding.juejin.cn/aicoding
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI, LLM, Node.js, code review, GitLab
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
