How to Scale Mini Programs: Engineering Practices from JD's Frontend Team
This article shares the JD front‑end team's experience engineering large‑scale mini programs. It covers coding standards, single‑page extraction, duplicate‑code removal, automated testing in a sandbox, package‑size optimization, smart package splitting, conditional compilation, and a continuous‑integration pipeline, which together address the challenges of rapid growth and release complexity.
Background
In October 2019 the author presented a talk on mini‑program engineering at Alibaba's Frontend Artists Salon, summarizing JD's experience with large‑scale mini programs.
Standards
The team defined comprehensive standards including directory structure, Git branching, code style, development workflow, and user‑experience guidelines.
Initial Development Mode
Early on they combined the built‑in mini‑program developer tools with Gulp to handle Sass processing and asset packaging.
Scale Growth Issues
From a first version in January 2017 with 15 pages (<1 MB) and fewer than 10 developers, the project grew to over 200 pages, more than 10 MB of code, and more than 100 developers, causing problems across development, testing, packaging, and release.
Development Debugging Bottleneck
The project eventually contained over 6,000 files, making the IDE and developer tools sluggish. The solution was a "single‑page extraction" tool that analyzes file dependencies and extracts only the files needed for the current page, reducing the loaded file count from 6,000+ to about 200 and cutting preview time from ~100 s to ~15 s.
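The core of such a tool is a reachability walk over the file‑dependency graph: starting from the current page's entry file, collect everything it transitively imports and ignore the rest. The sketch below assumes the dependency graph has already been built (a real tool would parse `require`/`import` statements and WXML/WXSS includes); the file names are hypothetical.

```javascript
// Single-page extraction sketch: breadth-first traversal of a
// prebuilt file-dependency graph, returning only the files the
// entry page actually needs.
function extractPageFiles(deps, entry) {
  const needed = new Set([entry]);
  const queue = [entry];
  while (queue.length > 0) {
    const file = queue.shift();
    for (const dep of deps[file] || []) {
      if (!needed.has(dep)) {
        needed.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...needed];
}

// Hypothetical graph: only files reachable from the cart page are kept.
const deps = {
  'pages/cart/index.js': ['utils/request.js', 'components/price.js'],
  'utils/request.js': ['utils/env.js'],
  'components/price.js': [],
  'utils/env.js': [],
  'pages/home/index.js': ['utils/request.js'], // unreachable, excluded
};
console.log(extractPageFiles(deps, 'pages/cart/index.js').length); // → 4
```

The same traversal scales linearly with the number of reachable files, which is why loading ~200 files instead of 6,000+ is a graph problem rather than a build‑speed problem.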
Code Redundancy
Duplicate or similar functions appeared across pages due to copy‑paste and unfamiliarity with the codebase. The team addressed this by componentization—extracting common code into NPM packages—and by code‑audit tools that detect and report duplicate snippets.
Testing Automation (Sandbox)
To cope with the explosion of pages, the team built a sandbox testing tool with four layers: test‑case definition, step‑control API, a sandbox environment that runs the mini‑program on V8, and a simulated WeChat API layer. The tool integrates with Mocha tests to automate user‑behavior simulation and result verification.
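The simulated‑API layer is the piece that makes this possible: a stub `wx` object that records calls and returns canned responses lets page logic run on plain V8/Node without the real client. The sketch below illustrates the idea; the stub factory, URLs, and page function are illustrative, not JD's actual sandbox API.

```javascript
// Simulated WeChat API sketch: a stub wx object that records every
// call and answers wx.request from a table of canned responses.
function createWxStub(responses = {}) {
  const calls = [];
  return {
    calls,
    wx: {
      request({ url, success }) {
        calls.push({ api: 'request', url });
        success(responses[url] || { statusCode: 404 });
      },
      showToast(opts) {
        calls.push({ api: 'showToast', title: opts.title });
      },
    },
  };
}

// Hypothetical page logic under test: fetch the cart, toast the count.
function loadCart(wx) {
  wx.request({
    url: '/api/cart',
    success: (res) => wx.showToast({ title: `${res.count} items` }),
  });
}

const stub = createWxStub({ '/api/cart': { count: 3 } });
loadCart(stub.wx);
// stub.calls now records the request and the "3 items" toast
```

In a Mocha suite, assertions over `stub.calls` become the "result verification" step, while the step‑control API drives sequences of such simulated user actions.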
Package Size Limits
WeChat mini programs have a total package limit of 8 MB and a sub‑package limit of 2 MB. Exceeding these limits delays releases and can cause performance issues. The team applied dependency analysis to delete unused files/functions, smart splitting to dynamically allocate NPM packages between main and sub‑packages, and tree‑shaking after converting all code to ES6 modules.
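The allocation rule behind smart splitting can be sketched as: a package used by several sub‑packages must live in the main package, while a package with a single consumer moves into that sub‑package as long as it stays under the 2 MB limit. Package names and sizes below are invented for illustration.

```javascript
// Smart-splitting sketch: greedily assign NPM packages to the one
// sub-package that uses them, falling back to the main package for
// shared dependencies or when the 2 MB sub-package limit is hit.
const SUB_LIMIT = 2 * 1024 * 1024; // WeChat sub-package cap, in bytes

function allocate(subPackages, pkgSizes, pkgUsage) {
  const main = [];
  for (const [pkg, users] of Object.entries(pkgUsage)) {
    if (users.length > 1) { main.push(pkg); continue; } // shared -> main
    const sub = subPackages[users[0]];
    if (sub.size + pkgSizes[pkg] <= SUB_LIMIT) {
      sub.pkgs.push(pkg);
      sub.size += pkgSizes[pkg];
    } else {
      main.push(pkg); // sub-package full: fall back to main
    }
  }
  return main;
}

// Hypothetical project: 'order' is nearly full, 'cart' has headroom.
const subs = {
  order: { pkgs: [], size: 1900000 },
  cart: { pkgs: [], size: 500000 },
};
const sizes = { lodash: 70000, dayjs: 8000, 'big-lib': 300000 };
const usage = {
  lodash: ['order', 'cart'], // shared -> main package
  dayjs: ['cart'],           // fits into cart
  'big-lib': ['order'],      // would push order past 2 MB -> main
};
const mainPkgs = allocate(subs, sizes, usage);
```

A real splitter would recompute this on every build so the allocation tracks dependency changes, and would combine it with the tree‑shaking pass mentioned above.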
Multi‑Mini‑Program Code Reuse
Shared pages and components across multiple mini programs introduced conditional code bloat. The solution was conditional compilation (using comment‑based syntax) and file‑suffix compilation, allowing the CLI to generate distinct builds for each target.
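Comment‑based conditional compilation works as a preprocessing pass: lines between an `#ifdef TARGET` comment and `#endif` survive only in builds for that target. The directive syntax below mirrors common practice in the mini‑program ecosystem (e.g. uni‑app), not necessarily JD's exact tool.

```javascript
// Conditional-compilation sketch: keep lines inside `// #ifdef X`
// blocks only when building for target X; all other lines pass through.
function preprocess(source, target) {
  const out = [];
  let keep = true;
  for (const line of source.split('\n')) {
    const ifdef = line.match(/\/\/\s*#ifdef\s+(\w+)/);
    if (ifdef) { keep = ifdef[1] === target; continue; }
    if (/\/\/\s*#endif/.test(line)) { keep = true; continue; }
    if (keep) out.push(line);
  }
  return out.join('\n');
}

// Shared source with per-platform share APIs:
const src = [
  'const shareTitle = "JD";',
  '// #ifdef WEIXIN',
  'wx.showShareMenu();',
  '// #endif',
  '// #ifdef BAIDU',
  'swan.openShare();',
  '// #endif',
].join('\n');

const weixinBuild = preprocess(src, 'WEIXIN');
// keeps wx.showShareMenu(), drops swan.openShare()
```

File‑suffix compilation is the complementary mechanism: when a whole file diverges per platform (e.g. `index.weixin.js` vs. `index.baidu.js`), the CLI picks the matching suffix instead of interleaving directives.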
Continuous Integration
A CI system was built to automate the release workflow: feature entry, release planning, code build, Git operations, automated merging, static code scanning, and deployment. This reduced manual steps and error rates.
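At its core, such a pipeline is an ordered list of steps executed sequentially, aborting the release on the first failure. The step names below mirror the stages listed above; the bodies and the branch name are placeholders for the real tooling.

```javascript
// Release-pipeline sketch: run named steps in order, stop on the
// first error, and return the list of completed stages.
async function runPipeline(steps, ctx = {}) {
  const done = [];
  for (const { name, run } of steps) {
    await run(ctx); // a throw here aborts the release
    done.push(name);
  }
  return done;
}

const steps = [
  { name: 'plan', run: async (ctx) => { ctx.branch = 'release/x.y.z'; } },
  { name: 'build', run: async () => { /* compile Sass, bundle, split */ } },
  { name: 'merge', run: async () => { /* merge feature branches */ } },
  { name: 'scan', run: async () => { /* static code analysis */ } },
  { name: 'deploy', run: async () => { /* upload to the platform */ } },
];

runPipeline(steps).then((done) => console.log(done.join(' -> ')));
// → plan -> build -merge -> scan -> deploy order, printed with " -> "
```

Encoding the workflow as data like this is what lets the CI system enforce ordering (scan before deploy) and report exactly which stage a failed release stopped at.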
Conclusion
Large‑scale mini‑program development requires more than coding standards; it needs automated testing, componentization to eliminate redundancy, tooling for development and packaging, and a robust CI pipeline to streamline releases.
WecTeam
WecTeam (维C团) is the front‑end technology team of JD.com’s Jingxi business unit, focusing on front‑end engineering, web performance optimization, mini‑program and app development, serverless, multi‑platform reuse, and visual building.