How to Build a Low‑Intrusion, AI‑Powered Mock Server for Frontend Development

This article presents a comprehensive mock‑data tool for front‑end/back‑end separated development. It solves request interception, flexible rule matching, AI generation of business‑semantic data, and team‑wide sharing, and details the tool's architecture, core implementations, code snippets, usage workflow, and future roadmap.

大转转FE
In a front‑end/back‑end separated development model, mock data is essential for parallel development. This article describes a mock tool that solves four core problems: request interception, rule matching, data quality, and team sharing.

Background

Integration blocking: when back‑end APIs are not ready, front‑end developers must wait or hard‑code responses (e.g., if (true) { … }).

Scenario coverage difficulty: simulating edge cases (exceptions, empty data, pagination) often requires back‑end database changes.

Low mock data quality: tools like Mock.js generate values such as @cname and @integer that lack business semantics and differ greatly from real data.

Data sharing difficulty: mock data stored locally cannot be reused on another machine or by a teammate.

Requirements

Non‑intrusive – add via an npm package without modifying business code.

Parameter‑level rule matching – the same endpoint can have multiple scenarios based on query, body, header, cookie, or path parameters.

Automatic generation of business‑semantic data from API documentation.

Team‑wide sharing of mock configurations.

Overall Architecture

┌────────────────────────────────────────────────────────────┐
│                      Business Project                      │
│   import { mockInit } from '@zz-common/ai_mock'            │
│   mockInit({ rules: ['api.example.com'] })                 │
└─────────────────────────────┬──────────────────────────────┘
                              │
┌─────────────────────────────▼──────────────────────────────┐
│                    ai_mock (npm package)                   │
│   XHR/Fetch interception → Rule matching engine → Mock data│
└─────────────────────────────┬──────────────────────────────┘
                              │ Dynamic SDK + CustomEvent communication
┌─────────────────────────────▼──────────────────────────────┐
│                  mock-sdk (visual panel)                   │
│   Request list | Rule management | Monaco editor           │
└─────────────────────────────┬──────────────────────────────┘
                              │ HTTP
┌─────────────────────────────▼──────────────────────────────┐
│                  Node (back‑end service)                   │
│   Fetch API docs → AI generation → Data persistence        │
└────────────────────────────────────────────────────────────┘

Key design points:

ai_mock handles only interception and matching; all UI code resides in mock-sdk, which is injected via CDN and adds nothing to the application bundle.

The two modules communicate through CustomEvent for loose coupling.

Mocking is enabled only in non‑production environments.

Core Implementation 1: Request Interception

Problems

Intercept both XHR and fetch without breaking third‑party listeners (e.g., Sentry).

Read‑only properties readyState, status, responseText cannot be directly assigned.

Sending a real request after interception can cause infinite recursion.

Solution

Monkey‑patch XMLHttpRequest.prototype.send and window.fetch while using a WeakMap to mark real requests and avoid recursion.

// Save original methods
const xhrSendNative = XMLHttpRequest.prototype.send;
const originalFetch = window.fetch;

// WeakMap to mark real requests
const isRealRequest = new WeakMap();

XMLHttpRequest.prototype.send = function (...args) {
  const xhr = this;
  const url = sliceUrlPath(xhr.originRequestUrl); // recorded by the patched open()
  const requestData = args[0]; // request body passed to send()

  // Prevent infinite recursion: real background requests bypass the mock path
  if (isRealRequest.get(xhr)) {
    return xhrSendNative.apply(xhr, args);
  }

  // Check mock rule
  if (mockInterface[url]?.isOpen) {
    const mockResult = getMockData(url, requestData);
    if (mockResult.matched) {
      // 1. Return mock response immediately
      applyMockResponseToXhr(xhr, mockResult.data, mockResult.httpStatusCode);

      // 2. Send real request in background for comparison
      const realXhr = cloneXHR(xhr, false); // clone without event listeners
      isRealRequest.set(realXhr, true);
      xhrSendNative.apply(realXhr, args);
      return;
    }
  }

  // No match – fall back to the original request
  xhrSendNative.apply(this, args);
};

Override read‑only properties using Object.defineProperties:

const applyMockResponseToXhr = (xhr, responseData, statusCode) => {
  const responseText = typeof responseData === 'string' ? responseData : JSON.stringify(responseData);
  Object.defineProperties(xhr, {
    readyState: { get: () => 4, configurable: true },
    status: { get: () => statusCode, configurable: true },
    response: { get: () => responseData, configurable: true },
    responseText: { get: () => responseText, configurable: true }
  });
  xhr.dispatchEvent(new Event('readystatechange'));
  xhr.dispatchEvent(new Event('load'));
  xhr.dispatchEvent(new Event('loadend'));
};

Result: after enabling mock, the business layer receives mock data instantly while a real request runs in the background; the UI panel offers a “Load real data” button to replace mock data with the actual response for fine‑tuning.
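The snippets above cover the XHR side; the fetch half follows the same pattern. A minimal sketch, assuming a mockInterface map keyed by URL path (the names here are illustrative, not the package's actual internals; globalThis.fetch is window.fetch in the browser):

```javascript
// Sketch of the fetch half of the interception (illustrative names).
const nativeFetch = globalThis.fetch; // window.fetch in the browser

// url path -> { isOpen, data, httpStatusCode }
const mockInterface = {};

globalThis.fetch = async function (input, init) {
  const url = typeof input === 'string' ? input : input.url;
  const path = new URL(url, 'http://localhost').pathname;
  const rule = mockInterface[path];

  if (rule?.isOpen) {
    // Matched: synthesize a Response so callers see a normal fetch result
    return new Response(JSON.stringify(rule.data), {
      status: rule.httpStatusCode ?? 200,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  // No match: fall through to the native fetch
  return nativeFetch.call(this, input, init);
};
```

Because a matched request returns a synthetic Response before the native fetch is ever called, no recursion guard is needed on this path; the WeakMap trick is only required for the background comparison request.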

Core Implementation 2: Rule Matching Engine

Problem

In real projects, the same endpoint must return different data based on parameters located in query, body, header, cookie, or path.

Data Structures

interface MockRule {
  id: string;
  url: string;
  name: string;
  priority: number; // smaller = higher priority
  enabled: boolean;
  type: 1 | 2 | 3; // 1=local draft, 2=personal cloud, 3=team shared
  paramConditions: ParamCondition[];
  mockData: any;
  httpStatusCode: number;
  delay?: number;
}

interface ParamCondition {
  location: 'query' | 'body' | 'header' | 'cookie' | 'path';
  paramName: string;
  operator: 'equals' | 'notEquals' | 'contains' | 'notContains' | 'greaterThan' | 'lessThan' | 'greaterOrEqual' | 'lessOrEqual';
  value: any;
}

Matching Algorithm

function matchRule(rules, request) {
  const enabledRules = rules
    .filter(rule => rule.enabled)
    .sort((a, b) => a.priority - b.priority);
  const matched = enabledRules.find(rule => isRuleMatched(rule, request));
  return matched
    ? { matched: true, rule: matched, delay: matched.delay ?? 0 }
    : { matched: false };
}

function isRuleMatched(rule, request) {
  if (!rule.paramConditions?.length) return true;
  return rule.paramConditions.every(condition => matchParamCondition(condition, request));
}
matchParamCondition obtains the parameter value from the specified location (query, body, header, cookie, or path) and compares it with the given operator, applying type coercion (e.g., 1 equals "1").
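A minimal sketch of matchParamCondition along these lines, assuming the request object already carries parsed query/body/header/cookie/path maps (the parsing itself is omitted):

```javascript
// Sketch of matchParamCondition (illustrative; the real engine may differ).
// Values are compared as strings or numbers so that 1 matches "1".
function matchParamCondition(condition, request) {
  const { location, paramName, operator, value } = condition;
  // Dotted names like "user.type" walk into nested body fields
  const actual = paramName
    .split('.')
    .reduce((obj, key) => obj?.[key], request[location]);
  if (actual === undefined) return false;
  switch (operator) {
    case 'equals':         return String(actual) === String(value);
    case 'notEquals':      return String(actual) !== String(value);
    case 'contains':       return String(actual).includes(String(value));
    case 'notContains':    return !String(actual).includes(String(value));
    case 'greaterThan':    return Number(actual) > Number(value);
    case 'lessThan':       return Number(actual) < Number(value);
    case 'greaterOrEqual': return Number(actual) >= Number(value);
    case 'lessOrEqual':    return Number(actual) <= Number(value);
    default:               return false;
  }
}
```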

Example

Request ?page=1 matches rule query.page equals 1 → returns first page data.

Request ?page=2 matches rule query.page equals 2 → returns second page data.

Request body {"user":{"type":"vip"}} matches rule body.user.type equals "vip" → returns VIP data.

Header X-Debug: true matches rule header.X-Debug equals "true" → returns debug data.

Core Implementation 3: AI‑Generated High‑Quality Data

Problem

Mock.js generates data without business semantics (e.g., {name: "xxx", age: 82, status: 3}), while real APIs often have enumerated status codes and domain‑specific fields.

Approach

The back‑end service fetches the JSON Schema from the API‑doc platform and constructs a prompt for an LLM to generate data that respects field descriptions, enums, and naming.

const systemPrompt = `
You are a mock data generation expert. Generate business‑semantic data based on the provided JSON Schema.

【Generation Rules - Sorted by Priority】
1. If a description contains enum explanations (e.g., "0: success, 1: failure"), use the enum values directly.
2. If no description, generate reasonable values based on the field name.
3. Arrays default to 5 items.
4. If an enum has many values, the array should cover all of them.

【Default Values】
- respCode / code: success → 0, failure → -1
- errorMsg: success → null, failure → "System error"
- Image URL: use a placeholder image URL
- Normal URL: use a generic domain URL

【Output Format】
Only output JSON without any explanatory text.
`;

Key design points:

Description priority: use enum descriptions like status: 0=待审核 (pending review), 1=已通过 (approved) first.

Field name semantics: generate Chinese names for userName, realistic prices for price, and so on.

Enum coverage: arrays aim to cover all enum values for thorough testing.

Default success scenario: unless specified otherwise, generate normal (successful) data.

Data Validation

AI‑generated JSON may be malformed; jsonrepair fixes it, then the result is compared against the schema, and missing fields trigger alerts.

import { jsonrepair } from 'jsonrepair';

// Extract the outermost JSON object from the model output, then repair it
const jsonMatch = result.match(/\{[\s\S]*\}/);
if (jsonMatch) {
  return jsonrepair(jsonMatch[0]);
}
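The schema comparison step might look like the following sketch; findMissingFields is an illustrative stand-in for the real service's richer validation, walking the schema's properties and collecting any fields the AI output left out:

```javascript
// Sketch of the post-generation schema check (illustrative stand-in).
// Returns dotted paths of schema-declared fields absent from the data.
function findMissingFields(schema, data, path = '') {
  const missing = [];
  for (const [key, prop] of Object.entries(schema.properties ?? {})) {
    const fullPath = path ? `${path}.${key}` : key;
    if (!(key in data)) {
      missing.push(fullPath);
    } else if (prop.type === 'object' && data[key] && typeof data[key] === 'object') {
      // Recurse into nested objects so deep omissions are caught too
      missing.push(...findMissingFields(prop, data[key], fullPath));
    }
  }
  return missing;
}
```

A non-empty result would trigger the alert mentioned above, prompting the user to regenerate or hand-fill the missing fields.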

Generation Modes

Full generation: generate all fields based on the complete schema.

Selection generation: replace only selected fields, keeping others unchanged.

Custom prompt: users can add extra requirements such as "generate VIP user data" or "price between 100 and 500".

Core Implementation 4: Three‑Layer Scope

Problem

Local mock data is not portable across devices or teammates.

Design

Local draft: stored in IndexedDB, visible only on the current device, used for temporary debugging.

Personal cloud: stored in a remote database, visible only to the creator, enables cross‑device sync.

Team shared: stored in a remote database, visible to all team members, provides standard test data.

Key points:

Priority and enabled state are maintained locally to avoid interference.

Cloud storage only keeps the expectation content; each user controls which expectations are enabled and their order.

Rules are cached in window.__mockRulesCache after loading to avoid repeated IndexedDB reads.
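The read-through cache can be sketched as below; loadRulesFromStore is a stand-in for the actual IndexedDB read, and only the caching pattern itself is shown:

```javascript
// Sketch of the rule cache (illustrative; loadRulesFromStore stands in
// for the IndexedDB read). The cache lives on a global, matching the
// window.__mockRulesCache mentioned above; globalThis works in both
// browser and Node.
async function getRules(loadRulesFromStore) {
  if (!globalThis.__mockRulesCache) {
    // First access: do the (slow) store read once
    globalThis.__mockRulesCache = await loadRulesFromStore();
  }
  return globalThis.__mockRulesCache; // later accesses skip the store
}

// Called when the panel announces changes (e.g., mock-rules-updated)
function invalidateRulesCache() {
  delete globalThis.__mockRulesCache;
}
```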

Event Communication

mock-request-end (SDK → UI): reports request completion.

mock-interface-switch (UI → SDK): toggles a single interface.

mock-rules-updated (UI → SDK): notifies the SDK of rule updates.

Data Cleanup

Automatically remove data that has not been used for 30 days to prevent storage bloat:

await cleanupInactiveData(30);
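The cutoff logic behind this might look like the following pure-function sketch, assuming each stored rule records a lastUsedAt timestamp; the real helper works against IndexedDB, while here the store is a plain array:

```javascript
// Illustrative sketch of the 30-day cleanup cutoff (not the real helper,
// which reads and deletes IndexedDB records). Keeps only rules used
// within the last maxIdleDays days.
function filterInactiveRules(rules, maxIdleDays, now = Date.now()) {
  const cutoff = now - maxIdleDays * 24 * 60 * 60 * 1000;
  return rules.filter((rule) => rule.lastUsedAt >= cutoff);
}
```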

Dynamic Template Syntax

Mock data can reference request parameters, enabling responses that vary with the request:

{{Date.now()}} – current timestamp.

{{uuid()}} – generate a UUID.

{{request.query.xxx}} – get a URL query parameter.

{{request.body.xxx}} – get a request body field.

{{request.headers.xxx}} – get a request header.

Example payload:

{
  "code": 0,
  "data": {
    "requestId": "{{uuid()}}",
    "timestamp": "{{Date.now()}}",
    "userId": "{{request.body.userId}}",
    "page": "{{request.query.page}}"
  }
}

Quick Start

# 1. Install
npm install @zz-common/ai_mock

// 2. Initialize
import { mockInit } from '@zz-common/ai_mock';

mockInit({
  rules: ['api.example.com'], // domains to intercept
  excludeRules: [/static/, /cdn/] // resources to ignore
});

Recommended Workflow

Integrate the npm package and enable interception in non‑production environments.

From a real request, click “Create expectation” in the panel to generate a mock.

Configure parameter conditions so the same endpoint can return different data.

Use AI to supplement missing fields with business‑semantic data.

Push stable test data to the team‑shared scope for reuse.

Summary

Request interception: monkey‑patch XHR/fetch, with a WeakMap to prevent recursion.

Rule matching: supports five parameter locations and multiple operators, with priority‑ordered matching.

Data quality: AI generates data based on schema descriptions and field semantics.

Team sharing: three‑layer scope (local draft, personal cloud, team shared) with local priority control.

Future Plans

Traffic recording and replay to generate mock data from real traffic.

AI generation based on real data to further improve quality.

Mobile support to validate mock data on real devices.

Rule recommendation based on historical requests.

Tags: frontend, testing, mock, request-interception, rule-engine

Written by 大转转FE, regularly sharing the team's thoughts and insights on frontend development.