How We Cut WeChat Mini‑Program Startup Time by 45% with Smart Async Splitting
This article details how the Huolala team tackled growing performance problems in their WeChat mini‑program, including slow startup, page‑switch lag, and an ever‑increasing bundle size. By measuring key metrics, setting aggressive targets, and applying a series of optimizations—performance data reporting, code splitting, async component loading, and skeleton screens—the team achieved significant reductions in launch and transition times while keeping the main bundle size under control.
Background
From 2021 to 2022 Huolala’s WeChat mini‑program user base exploded, making the app more complex and critical to business. Frequent feature releases caused rising code complexity, frequent errors, launch failures, and sluggish page switches. In early 2023 a dedicated performance‑optimization project was launched, and after more than a year of continuous work the team compiled a set of optimization methods.
Optimization Results
Recordings of the same phone on a 4G network before and after optimization (not reproduced here) showed that startup and page‑switch times were dramatically reduced.
Although the total code size grew by 16.2%, the main bundle shrank by 18.2% thanks to continuous optimization.
Optimization Approach
1. Measure
Collect performance data via WeChat We‑Analysis.
Startup time: app cannot open or starts slowly.
Page‑switch time: noticeable lag.
Performance rating: Poor, Average, Good, Excellent.
Peer comparison: Worse than peers / Better than peers.
Self comparison: track trends using T‑1 and T‑7 data, because official peer data is not transparent.
2. Set Goals
Reduce startup time by 45% and page‑switch time by 50%.
Overall startup performance rating: Excellent; runtime performance rating: Excellent.
3. Analyze
Scenario analysis: use WeChat We‑Analysis to find high‑latency launch scenarios.
Path analysis: collect performance data via wx.getPerformance() and send it to a telemetry platform.
Identify optimization hotspots.
4. Implement Solutions
The central question: how to sustain performance improvements in a fast‑iteration environment and ensure changes are properly tested and validated.
The biggest challenge is not writing code but continuously delivering technical improvements amid rapid business changes. Simplified workflow diagram:
Define optimization plan: create phased, module‑level plans based on analysis.
Establish testing mechanism: develop changes in a shared test environment, run 1‑2 test cycles, and promote only after no regressions.
Continuous optimization: after each release, collect performance data, check for regressions, and plan next actions.
Confirm scope → Confirm test scope → Confirm benefit (review & sustain)
Technical Highlights
The following code works with @vue/cli and @vue/composition-api in a uniapp Vue2 project. Adaptations are needed for other frameworks.
1. Performance Data Reporting
Besides WeChat We‑Analysis, the platform provides a performance API. Below is a minimal observer that collects key metrics and pushes them to a telemetry endpoint.
// Map raw performance entry names to reporting event names.
const StatisticsEvent = {
  appLaunch: "appLaunchTime",
  evaluateScript: "evaluateScriptTime",
  downloadPackage: "downloadPackageTime",
  route: "pageLoadTime",
  firstRender: "firstRenderTime"
};

// Buffer of collected metrics waiting to be reported.
const performanceList = [];

function observePerformance() {
  try {
    const perform = wx?.getPerformance?.();
    const observer = perform?.createObserver(entryList => {
      const entries = entryList.getEntries();
      entries.forEach(entry => {
        const { name, path } = entry;
        // Skip entry types we do not track.
        const sensor = StatisticsEvent[name];
        if (sensor) {
          performanceList.push({
            event: sensor,
            params: { ...entry, pagePath: path }
          });
        }
      });
    });
    observer?.observe({ entryTypes: ["navigation", "render", "script", "loadPackage"] });
  } catch (error) {
    // Performance API may be unavailable on older base libraries; fail silently.
  }
}
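The observer above only buffers entries in performanceList; sending them is left to the project's telemetry layer, which the article does not show. A hedged sketch of the drain step (flushPerformanceList and report are illustrative names, not part of the original code):

```javascript
// Sketch: drain the buffered performanceList and hand the batch to a
// reporting function, so repeated flushes never re-send the same entries.
function flushPerformanceList(list, report) {
  if (list.length === 0) return 0;           // nothing buffered yet
  const batch = list.splice(0, list.length); // drain the buffer in place
  report(batch);                             // e.g. wx.request to the telemetry endpoint
  return batch.length;
}
```

Calling such a helper on a timer or when the app is hidden keeps reporting batched.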
observePerformance();

Import the observer in main.ts:
import 'performance-observer';

2. Code Splitting & Async Loading
Splitting packages is the most effective way to reduce mini‑program startup time.
By turning page‑level sub‑packages into component‑level or even file‑level async loads, code that would otherwise reside in the main bundle can be loaded on demand, dramatically shrinking the main bundle.
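For context, page‑level splitting in a WeChat mini‑program is declared via subPackages (optionally with preloadRule) in app.json; the paths below are illustrative, reusing the sub‑package names that appear later in this article:

```json
{
  "pages": ["pages/index/index"],
  "subPackages": [
    {
      "root": "pages/index-subpack",
      "pages": ["index"]
    }
  ],
  "preloadRule": {
    "pages/index/index": {
      "network": "all",
      "packages": ["pages/index-subpack"]
    }
  }
}
```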
2.1 JS Code Async
Two reference articles (links omitted) cover the details; the core ideas are:
Use __non_webpack_require__ to load third‑party scripts without Webpack bundling.
Copy required files into the built output with copy-webpack-plugin.
// src/utils/async-load.ts
- require('../pages/async-lib/mqtt.min.js', res => { ... });
+ __non_webpack_require__('../pages/async-lib/mqtt.min.js', res => { ... }, ({ mod, errMsg }) => { ... });

// vue.config.js
const path = require('path');
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  configureWebpack: {
    plugins: [
      // Copy the async libraries verbatim into the built output so that
      // __non_webpack_require__ can resolve them at runtime.
      new CopyWebpackPlugin([
        {
          from: path.join(__dirname, 'src/pages/async-lib'),
          to: path.join(
            __dirname,
            'dist',
            process.env.NODE_ENV === 'production' ? 'build' : 'dev',
            process.env.UNI_PLATFORM,
            'pages/async-lib'
          )
        }
      ])
    ]
  }
};

2.2 Component Async
Convert heavy JS files into components and load them asynchronously. Example component:
<template>
<view class="content">{{ title }}</view>
</template>
<script setup lang="ts">
import { Ref, ref } from '@vue/composition-api';
const title: Ref<string> = ref("I'm second title");
const setChineseTitle = () => { title.value = '我是副标题'; };
defineExpose({ title, setChineseTitle });
</script>
<style></style>

Import the component in the parent page to ensure it is bundled:
// pages/index-subpack/index.vue
import SecondTitle from './components/second-title.vue';

Use the component with an event to expose its lifecycle methods:
<second-title @loaded="handleSecondLoaded" />

2.3 Page Async
Wrap an entire page as a component and load it lazily, keeping the original page as a thin placeholder.
// pages/info-subpack/async-page.vue
<template>
<view class="content">
<view><text class="title">{{ title }}</text></view>
</view>
</template>
<script setup lang="ts">
import { onLoad, onShow } from '@dcloudio/uni-app';
import { ref } from '@vue/composition-api';
const title = ref('I am the profile page, and I contain a lot of code');
onLoad(options => console.log('page options', options));
onShow(() => console.log('onShow'));
</script>
<style></style>Configure the main package to load the async component:
// pages/info/index.vue
<template>
<async-page />
</template>

Synchronize lifecycle hooks between the placeholder page and the async component using a small hook library (createPromiseEvent, unWrapEvent) so that onLoad and onShow still run correctly after the component loads.
// Create a one-shot event whose resolve/reject are exposed, so a page
// lifecycle hook can be forwarded to the async component once it mounts.
function createPromiseEvent<R>() {
  let resolve!: (value: R) => void;
  let reject!: (reason?: unknown) => void;
  const promise = new Promise<R>((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}

// uni-app wraps custom event payloads in e.detail.__args__; unwrap it to
// recover the original argument.
function unWrapEvent(e) {
  return Array.isArray(e?.detail?.__args__) ? e.detail.__args__[0] : e;
}

3. Skeleton Screens
Skeleton screens reduce perceived white‑screen time.
Two approaches are common in mini‑programs:
Low‑cost: hand‑crafted static placeholders using colored blocks.
Higher‑cost: generate skeletons with the WeChat DevTools and prune unnecessary nodes.
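The low‑cost approach can be as simple as a reusable placeholder component in the project's own Vue style. A hedged sketch (the class names, sizes, and loading flag are illustrative, not from the original project):

```vue
<template>
  <view v-if="loading" class="skeleton">
    <!-- gray blocks that roughly match the real layout -->
    <view class="skeleton-block skeleton-title" />
    <view class="skeleton-block skeleton-line" v-for="i in 3" :key="i" />
  </view>
  <view v-else><slot /></view>
</template>
<style>
.skeleton-block { background: #f2f3f5; border-radius: 8rpx; margin: 16rpx; }
.skeleton-title { height: 48rpx; width: 40%; }
.skeleton-line { height: 32rpx; }
</style>
```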
Summary
Through aggressive sub‑package splitting, async component/page loading, and skeleton screens, the main bundle size was reduced by ~1 MB and startup/page‑switch latency was cut by roughly half. The team also completed over 20 other optimizations, establishing a sustainable monitoring and anti‑regression workflow.
Prioritization
Low effort, high gain: async sub‑packages + skeleton screens.
High effort, high gain: on‑demand loading and pre‑fetching of critical data.
Low effort, low gain: further render‑level tweaks.
High effort, low gain: fine‑grained control of requirement implementation.
Future Outlook
Shift focus from pure metric improvement to real user experience—e.g., replace skeletons with cached real pages. Continuously monitor both quantitative performance (startup, page‑switch) and qualitative UX (usability, emotional impact) using daily We‑Analysis, AI‑tagged feedback, and version‑by‑version recordings to detect regressions early and maintain a data‑driven optimization loop.