How Request Queuing Boosted WeChat Mini‑Program Performance by 15%
By introducing a priority‑based request queue in a WeChat mini‑program, the team reduced key request latency by 50‑100 ms (about 15% faster), alleviated iOS UI stalls, and demonstrated measurable performance gains especially for users on weak networks.
1. Trigger 🔥
On a seemingly ordinary workday the mini‑program became extremely laggy on iOS. Investigation revealed that a third‑party reporting service was stuck in a pending state, exhausting the shared concurrency limit of ten for wx.request, wx.uploadFile, and wx.downloadFile. The WeChat documentation states: "The maximum number of concurrent requests for wx.request, wx.uploadFile, and wx.downloadFile is 10."
When ten requests are pending, subsequent calls are blocked. Using Whistle's resDelay rule we simulated a 5000 ms delay for the report and log requests, which kept the business request get_homepage_feeds_h5 pending and caused page rendering to stall.
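For reference, the delay can be reproduced with Whistle rules of this shape (the endpoint patterns are hypothetical examples, not the actual reporting URLs):

```
# Whistle rules file: delay matching responses by 5000 ms
example.com/report resDelay://5000
example.com/log resDelay://5000
```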
After identifying the issue, the offending reporting request was disabled, instantly resolving the freeze. The incident highlighted how low‑priority reporting can severely impact high‑priority business requests, prompting the need for a request‑priority strategy.
2. Idea 🔍
Network latency depends on many external factors, but we can improve perceived performance by assigning priorities to mini‑program requests. When multiple requests run concurrently, low‑priority reporting requests should yield to high‑priority business requests.
3. Implementation 📃
The request‑priority strategy consists of four rules:
Classify requests into high and low priority levels.
When the number of concurrent requests exceeds a threshold, only high‑priority requests are sent; low‑priority ones are intercepted.
After the concurrent count drops, intercepted low‑priority requests are released.
Set a maximum waiting time; if exceeded, low‑priority requests are forced to send to avoid excessive delay.
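The four rules above boil down to a single dispatch decision per request. A minimal sketch, with hypothetical names and example defaults (the article only specifies threshold = 5):

```typescript
type Decision = 'send' | 'queue';

// Decide whether a request may go out now or must wait.
// threshold and maxWaitingTime defaults are illustrative.
function decide(
  isLowPriority: boolean,
  inFlightCount: number,
  waitedMs: number,
  threshold = 5,
  maxWaitingTime = 3000,
): Decision {
  // Rules 1–2: only low-priority requests are ever delayed,
  // and only once the in-flight count reaches the threshold.
  if (!isLowPriority || inFlightCount < threshold) return 'send';
  // Rule 4: a request that has waited too long is forced out.
  if (waitedMs >= maxWaitingTime) return 'send';
  // Rule 3: otherwise it stays queued until a slot frees up.
  return 'queue';
}
```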
All HTTPS requests in the mini‑program are sent via wx.request. By intercepting this API we can control the sending order. The logic is encapsulated in a queue request module that exposes a method with the same signature as wx.request, replacing the original API.
3.1. Key Configuration
threshold : the concurrency limit at which low‑priority requests are delayed.
maxWaitingTime : the longest time a request may stay in the waiting queue before being forced to send.
lowPriority : a set of matching rules (e.g., regex) used to identify low‑priority requests.
We set threshold = 5 to reserve five slots for potential business requests and to reduce network contention when five requests are already in flight.
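A configuration might look like the following sketch; the field names follow the article, but the maxWaitingTime value and the endpoint patterns are illustrative assumptions:

```typescript
// Example config: field names from the article, values are ours.
const queueConfig = {
  threshold: 5,           // delay low-priority requests once 5 are in flight
  maxWaitingTime: 3000,   // force-send after 3 s in the waiting queue (example value)
  lowPriority: [/\/report\//, /\/log\//],  // hypothetical reporting endpoints
};

// A request is low priority if any matching rule hits its URL.
const isLowPriority = (url: string): boolean =>
  queueConfig.lowPriority.some((rule) => rule.test(url));
```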
3.2. Core Design
The module is built from three classes:
QueueRequest : the request dispatcher; it holds instances of RequestPool and WaitingQueue and provides a request(opts) method that replaces wx.request.
WaitingQueue : maintains a queue of delayed requests. It enqueues low‑priority requests when the threshold is exceeded and dequeues them when slots become available or the max waiting time is reached.
RequestPool : tracks all in‑flight requests, exposes the current request count, and notifies the waiting queue when a request finishes.
Key class snippets:
<code class="language-typescript">class QueueRequest {
  private requestPool: RequestPool;
  private waitingQueue: WaitingQueue;
  request(opts: WechatMiniprogram.RequestOption): WechatMiniprogram.RequestTask {
    // dispatch logic
  }
}</code>

<code class="language-typescript">class WaitingQueue {
  private queue: QueueRequestOption[];
  private checkQueue();
  public enqueue(opts: QueueRequestOption);
  public dequeue();
  public getWaitingNum(): number;
}</code>

<code class="language-typescript">interface RequestPoolConfig { onReqComplete?: () => void; }

class RequestPool {
  private originRequest = wx.request.bind(wx);
  private pool;
  public add(opts: QueueRequestOption);
  private remove(seq: number);
  public getReqNum(): number;
}</code>

Because wx.request is read‑only, we replace it via Object.defineProperty:
<code class="language-typescript">Object.defineProperty(wx, 'request', { value: this.request.bind(this) });

export function useRequestQueue(config: QueueRequestConfig) {
  const queueRequest = new QueueRequest(config);
  queueRequest.make();
  return queueRequest;
}</code>

3.3. RequestTask Proxy
To keep the original RequestTask contract while a request is still queued, we introduce RequestTaskProxy. It records method calls in an internal operations array and replays them once the real RequestTask is created.
<code class="language-typescript">export class RequestTaskProxy implements WechatMiniprogram.RequestTask {
  private task?: WechatMiniprogram.RequestTask;
  private operations: Array<{ type: string }> = [];
  public setRequestTask(requestTask: WechatMiniprogram.RequestTask) {
    this.task = requestTask;
    this.operations.forEach(op => { /* replay */ });
    this.operations = [];
  }
  public abort() {
    if (this.task) { this.task.abort(); }
    else { this.operations.push({ type: 'abort' }); }
  }
  // other RequestTask methods similarly proxied
}</code>

With the proxy in place, the queue module can return a RequestTask immediately, preserving the original API semantics.
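The record‑and‑replay behavior can be demonstrated in isolation. In this runnable reduction, a minimal Task interface stands in for WechatMiniprogram.RequestTask; names other than the operations/setRequestTask pattern are ours:

```typescript
interface Task { abort(): void; }

class TaskProxy implements Task {
  private task?: Task;
  private operations: Array<{ type: 'abort' }> = [];

  // Once the real task exists, replay everything recorded so far.
  setRequestTask(task: Task) {
    this.task = task;
    this.operations.forEach((op) => { if (op.type === 'abort') task.abort(); });
    this.operations = [];
  }

  // Called while the request may still be sitting in the waiting queue.
  abort() {
    if (this.task) this.task.abort();
    else this.operations.push({ type: 'abort' }); // record for later replay
  }
}

// Usage: abort() before the real task exists is deferred, not lost.
const calls: string[] = [];
const proxy = new TaskProxy();
proxy.abort();                                              // recorded
proxy.setRequestTask({ abort: () => calls.push('abort') }); // replayed here
```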
4. Optimization Results 📈
Without any backend changes, front‑end request ordering was adjusted and the latency from initiating a business request to receiving its result was measured.
In a gray‑release experiment, users with the priority strategy (yellow curve) showed an average reduction of 50‑100 ms (≈15 %) for high‑priority business requests compared to the control group (green curve).
Further analysis across different percentiles (80th, 50th, 20th) revealed that the improvement is most pronounced for long‑latency requests, i.e., users on weak networks benefit the most.
These results confirm that front‑end request prioritization is an effective, low‑cost technique to improve perceived performance, especially under constrained network conditions.
Tencent IMWeb Frontend Team
The IMWeb Frontend Community gathers frontend development enthusiasts. Follow us for refined live courses by top experts and cutting‑edge technical posts to sharpen your frontend skills.