How Plug Revolutionizes API Capture and Mocking with AI‑Powered Automation

This article introduces Plug, a unified front‑end tool that combines non‑intrusive interface capture, flexible mocking, and large‑model assistance to streamline API development for both mini‑programs and PC, while addressing HTTPS proxy challenges and performance considerations.

Goodme Frontend Team

Introduction

The supply‑chain team at Guming needed a tool that could both capture API traffic and provide mock services without invasive code changes, supporting mini‑programs and PC browsers, simple configuration, and compatibility with existing YAPI workflows.

Current Situation

Various solutions have been tried:

Mini‑program developer tools – simple but require local compilation and cannot monitor production builds.

vConsole – easy to set up, but the mobile console experience is poor and it requires intrusively adding an npm package.

Charles – powerful and non‑intrusive but cumbersome to configure.

For mocking, options include static data, NPM‑based mocks, Charles, WeChat mock tools, and third‑party services like Apifox. Each suffers from drawbacks such as high cost, code intrusion, or lack of integration with private YAPI instances.

Goals

Zero code intrusion.

Combine traffic monitoring and mocking, compatible with mini‑programs and PC.

Simple configuration.

Do not interfere with developers' proxy (e.g., VPN) tools.

Seamlessly integrate with YAPI for one‑stop mock and post‑edit capabilities.

Allow generic business processing of mock data (e.g., force response code to 0).
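To make that last goal concrete, the post-processing step can be pictured as a chain of user-defined transforms applied to every mock body before it is returned. The sketch below is illustrative TypeScript, not Plug's actual API; the hook names are hypothetical:

```ts
// Hypothetical post-processing hook: every mock response passes through
// user-defined transforms before being returned to the client.
type MockTransform = (body: Record<string, unknown>) => Record<string, unknown>;

// Example transform matching the goal above: force the business
// response code to 0 so the client always treats the mock as a success.
const forceSuccessCode: MockTransform = (body) => ({ ...body, code: 0 });

function applyTransforms(
  body: Record<string, unknown>,
  transforms: MockTransform[],
): Record<string, unknown> {
  return transforms.reduce((acc, transform) => transform(acc), body);
}
```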

Plug Feature Overview

The name Plug (as in an electrical plug 🔌) emphasizes data connection.

Plug offers a web and app management console and includes:

API Capture

Supports the http, https, and ws protocols, enabling mobile debugging and quick localization of issues.

API Mock

Plug connects to YAPI, allowing batch and single‑endpoint mock generation, with a secondary editing layer for custom responses.

After editing, Plug uses the modified data for mocking.
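As an illustration of the YAPI side of this flow, the sketch below pulls an endpoint definition from a private YAPI instance over its HTTP open API. The exact route and response fields can vary by deployment and should be treated as assumptions here:

```ts
// Fetch an endpoint definition from a private YAPI instance.
// YAPI exposes an HTTP open API; the token/id parameters shown here
// follow common usage but should be checked against your deployment.
async function fetchYapiInterface(baseUrl: string, token: string, id: number) {
  const res = await fetch(
    `${baseUrl}/api/interface/get?token=${token}&id=${id}`,
  );
  const json = await res.json();
  if (json.errcode !== 0) throw new Error(json.errmsg);
  // The returned definition (often including a JSON schema for the
  // response body) is what mock data can be generated from, then
  // edited again in Plug's management console.
  return json.data;
}
```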

Implementation Principles

The core of Plug is an interface proxy. Both mock and capture traffic pass through this proxy for further processing.

HTTP Proxy

All HTTP requests are intercepted; Plug decides whether to forward them to the original server or to the mock service, and synchronizes the traffic to the management console.
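A minimal Node.js sketch of that decision point is shown below; shouldMock, reportTraffic, and the mock-service port are illustrative placeholders, not Plug's real internals:

```ts
import http from "node:http";

// Minimal HTTP proxy: intercept every request, route it either to the
// mock service or the origin server, and report the traffic.
const proxy = http.createServer((clientReq, clientRes) => {
  const url = new URL(clientReq.url ?? "/", `http://${clientReq.headers.host}`);
  const useMock = shouldMock(url); // illustrative: match against mock rules

  const upstream = http.request(
    {
      host: useMock ? "127.0.0.1" : url.hostname,
      port: useMock ? 7788 : url.port || 80, // 7788: hypothetical mock port
      path: url.pathname + url.search,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
      reportTraffic(url, upstreamRes); // illustrative: sync to the console
    },
  );
  clientReq.pipe(upstream);
});

proxy.listen(8888);

declare function shouldMock(url: URL): boolean;
declare function reportTraffic(url: URL, res: http.IncomingMessage): void;
```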

HTTPS Proxy

HTTPS adds certificate verification and encryption, preventing a plain man‑in‑the‑middle from reading data. Plug generates a forged root CA and corresponding sub‑certificates to intercept HTTPS traffic while maintaining mutual verification.

(The diagram in the original article omits some HTTPS handshake details; refer to external resources for a deeper treatment.)
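Conceptually, intercepting an HTTPS tunnel can be sketched as follows in Node.js. getCertForHost and handleDecryptedRequest are hypothetical helpers, and certificate generation is covered in the next section:

```ts
import net from "node:net";
import tls from "node:tls";
import http from "node:http";

// Handle the CONNECT tunnel that clients open for HTTPS. Instead of
// blindly piping encrypted bytes, answer the TLS handshake ourselves
// with a certificate forged for the requested host.
proxy.on("connect", (req: http.IncomingMessage, clientSocket: net.Socket) => {
  const [host] = (req.url ?? "").split(":"); // CONNECT target is "host:port"
  clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");

  const { key, cert } = getCertForHost(host); // signed by Plug's root CA
  const tlsSocket = new tls.TLSSocket(clientSocket, {
    isServer: true,
    secureContext: tls.createSecureContext({ key, cert }),
  });
  // From here the proxy sees plaintext HTTP and can mock, forward,
  // or record the request exactly like the plain-HTTP case.
  handleDecryptedRequest(tlsSocket, host);
});

declare const proxy: http.Server;
declare function getCertForHost(host: string): { key: string; cert: string };
declare function handleDecryptedRequest(socket: tls.TLSSocket, host: string): void;
```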

CA Certificate

CA stands for Certificate Authority.

Plug must forge a root certificate and generate sub‑certificates to act as both client and server in the HTTPS handshake.
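Many Node.js capture proxies implement this with node-forge; a simplified sketch of generating the root CA follows (the common name and validity period are illustrative, and real implementations add more extensions):

```ts
import forge from "node-forge";

// Generate a self-signed root CA once; its public certificate is what
// the developer installs and trusts on their device.
function createRootCA() {
  const keys = forge.pki.rsa.generateKeyPair(2048);
  const cert = forge.pki.createCertificate();
  cert.publicKey = keys.publicKey;
  cert.serialNumber = "01";
  cert.validity.notBefore = new Date();
  cert.validity.notAfter = new Date();
  cert.validity.notAfter.setFullYear(cert.validity.notBefore.getFullYear() + 10);

  const attrs = [{ name: "commonName", value: "Plug Root CA" }]; // illustrative name
  cert.setSubject(attrs);
  cert.setIssuer(attrs); // self-signed: issuer == subject
  cert.setExtensions([{ name: "basicConstraints", cA: true }]);
  cert.sign(keys.privateKey, forge.md.sha256.create());

  return {
    key: forge.pki.privateKeyToPem(keys.privateKey),
    cert: forge.pki.certificateToPem(cert),
  };
}
```

Per-host sub-certificates are then created the same way but signed with this CA's private key, with the target hostname in the subject so the client's verification succeeds once the root CA is trusted.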

System Proxy Compatibility

Plug can coexist with a developer's existing proxy tools (e.g., VPN or other circumvention clients) by pooling and reusing TCP connections, ensuring that traffic still passes through the user's preferred proxy when needed.
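Under those assumptions, the forwarding side might look like the sketch below: a keep-alive agent pools and reuses TCP connections, and when an upstream proxy is configured, requests are sent through it with the absolute URL, the standard proxy-chaining form:

```ts
import http from "node:http";

// Reuse TCP connections instead of opening one per request, so Plug
// adds little overhead on top of the user's existing proxy chain.
const pooledAgent = new http.Agent({ keepAlive: true, maxSockets: 64 });

function forward(
  req: http.IncomingMessage,
  targetUrl: URL,
  upstreamProxy?: URL, // the user's own proxy tool, if any
) {
  return http.request({
    agent: pooledAgent,
    // When an upstream proxy is set, connect to it and pass the
    // absolute URL so it can forward the request onward.
    host: upstreamProxy ? upstreamProxy.hostname : targetUrl.hostname,
    port: upstreamProxy ? upstreamProxy.port : targetUrl.port || 80,
    path: upstreamProxy ? targetUrl.href : targetUrl.pathname + targetUrl.search,
    method: req.method,
    headers: req.headers,
  });
}
```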

Plug & Large Language Models

Plug’s mock generation faced three issues:

Mock data lacked realism, requiring extensive post‑editing.

Enum values defined only in field descriptions were ignored.

Pagination fields in the mock did not match the actual data.length.

To address these, the team explored two approaches:

Using Public LLM APIs

Full‑size model capabilities.

Automatic parameter tuning (temperature, seed, etc.).

Stable mock output.

Drawbacks include high latency (>15 s) and occasional empty responses.

Deploying Local LLMs

Using ollama to run distilled models locally reduced latency to ~10 s, but introduced randomness and poor JSON handling.

Solutions:

Fix the temperature below 1 and set a deterministic random seed to stabilize the output.

Provide a detailed JSON schema (shown below) instead of format: json to guide the model.

"format": {"type":"object","properties":{"data":{"type":"array","items":{"type":"object","properties":{"messageBizType":{"type":"string"},// other fields}}</n}

After these adjustments, local models produced consistent, well‑structured mock data.

Model Selection in Plug

The latest beta of Plug enables experimental AI‑assisted mocking, preferring locally deployed models but allowing optional use of Volcano Engine’s DeepSeek API.
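The selection logic can be pictured as a local-first fallback; the sketch below assumes the generateMock helper from the previous section and a hypothetical remote counterpart:

```ts
// Prefer the local model; fall back to the remote DeepSeek API when the
// local instance is unavailable or fails.
async function mockWithBestModel(schema: object): Promise<unknown> {
  try {
    return await generateMock(schema); // local ollama, defined above
  } catch {
    return await generateMockRemote(schema); // e.g., Volcano Engine DeepSeek
  }
}

declare function generateMockRemote(schema: object): Promise<unknown>;
```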

Conclusion

The article presented Plug’s end‑to‑end solution for API capture and mock within Guming’s supply‑chain projects, highlighted the integration of large language models to improve mock realism, and discussed ongoing stability improvements.
