Implementing Concurrency Limits in JavaScript Using Promise, Callbacks, and RxJS
This article demonstrates how to limit concurrent HTTP requests in JavaScript, covering three implementation styles—Promise, callback, and RxJS—through step‑by‑step examples drawn from a real‑world scenario that is also a common front‑end interview question.
Problem Definition
The system runs over HTTP/2, so the browser no longer caps simultaneous requests per domain, but the back‑end provides no batch endpoint: the front‑end must fetch data for thousands of IDs one at a time. Firing every request at once makes the page sluggish, so a concurrency ceiling (max) is required.
The task is to implement a function gets(ids, max) that fetches each ID while never exceeding max simultaneous requests.
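Before looking at implementations, it helps to be able to observe the problem. The sketch below (names and delays are illustrative, not from the original system) instruments a mock get to record the peak number of overlapping requests, and shows that the naive approach starts everything at once:

```javascript
let inFlight = 0;
let peak = 0;

// Instrumented stand-in for the real request: tracks how many calls overlap.
function get(id) {
  inFlight++;
  peak = Math.max(peak, inFlight);
  return new Promise(resolve => {
    setTimeout(() => {
      inFlight--;
      resolve({ id });
    }, Math.ceil(Math.random() * 5));
  });
}

const ids = Array.from({ length: 100 }, (_, i) => i);

// Naive version: every request starts immediately.
Promise.all(ids.map(id => get(id))).then(() => {
  console.log(peak); // 100 — all requests were in flight at once
});
```

Any candidate gets(ids, max) can be checked against this harness: after it completes, peak should never exceed max.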
Promise‑Based Solutions
Method 1 – Full Parallel (Promise.all)
Using Promise.all sends every request immediately, which causes the performance issue.
function gets(ids, max) {
  return Promise.all(ids.map(id => get(id)));
}

function get(id) {
  return new Promise(resolve => {
    setTimeout(() => { resolve({ id }); }, Math.ceil(Math.random() * 5));
  });
}

Method 2 – Batch Parallel
Split the IDs into groups of size max and process each group with Promise.all sequentially.
function gets(ids, max) {
  let index = 0;
  const result = [];
  function nextBatch() {
    const batch = ids.slice(index, index + max);
    index += max;
    return Promise.all(batch.map(get)).then(res => {
      result.push(...res);
      if (index < ids.length) return nextBatch();
      return result;
    });
  }
  return nextBatch();
}

Method 3 – Controlled Concurrency
Maintain a pool of at most max active promises; start a new request each time one finishes.
function gets(ids, max) {
  return new Promise(resolve => {
    const res = [];
    let loadCount = 0;
    // Index of the most recently started request.
    let curIndex = Math.min(max, ids.length) - 1;
    function load(id, index) {
      // Success and failure are handled the same way: store the outcome
      // at the request's original position, then start the next request.
      function next(value) {
        res[index] = value;
        loadCount++;
        if (loadCount === ids.length) {
          resolve(res);
        } else if (curIndex + 1 < ids.length) {
          curIndex++;
          load(ids[curIndex], curIndex);
        }
      }
      return get(id).then(next, next);
    }
    for (let i = 0; i < max && i < ids.length; i++) {
      load(ids[i], i);
    }
  });
}

Callback‑Based Solution
The same logic can be expressed with traditional Node‑style callbacks.
function get(id, success, error) {
  setTimeout(() => success({ id }), Math.ceil(Math.random() * 5));
}

function gets(ids, max, success, error) {
  const res = [];
  let loadCount = 0;
  // Index of the most recently started request.
  let curIndex = Math.min(max, ids.length) - 1;
  function load(id, index) {
    // Success and failure both store the outcome at the original position,
    // then free the slot for the next request.
    function next(value) {
      res[index] = value;
      loadCount++;
      if (loadCount === ids.length) {
        success(res);
      } else if (curIndex + 1 < ids.length) {
        curIndex++;
        load(ids[curIndex], curIndex);
      }
    }
    get(id, next, next);
  }
  for (let i = 0; i < max && i < ids.length; i++) {
    load(ids[i], i);
  }
}

RxJS Solutions
RxJS provides declarative operators that make concurrency control concise.
Method 1 – Full Parallel (forkJoin)
import { forkJoin } from 'rxjs';

function gets(ids) {
  const observables = ids.map(get);
  return forkJoin(observables);
}

Method 2 – Batch Parallel (concatMap + forkJoin)
import { from, forkJoin } from 'rxjs';
import { concatMap, reduce } from 'rxjs/operators';

function gets(ids, max) {
  // Split the ids into chunks of at most max.
  const groups = [];
  for (let i = 0; i < ids.length; i += max) {
    groups.push(ids.slice(i, i + max));
  }
  return from(groups).pipe(
    // concatMap processes one chunk at a time; forkJoin runs the chunk in parallel.
    concatMap(group => forkJoin(group.map(get))),
    reduce((acc, results) => acc.concat(results), [])
  );
}

Method 3 – Controlled Concurrency (mergeMap)
import { from } from 'rxjs';
import { mergeMap, map, reduce } from 'rxjs/operators';

function gets(ids, max) {
  return from(ids).pipe(
    // The second argument to mergeMap caps the number of concurrent inner Observables.
    mergeMap(id => get(id).pipe(map(result => ({ id, result }))), max),
    // Results complete out of order, so collect them keyed by id,
    reduce((acc, { id, result }) => acc.set(id, result), new Map()),
    // then restore the original order of ids.
    map(resMap => ids.map(id => resMap.get(id)))
  );
}

Conclusion
All three paradigms—Promise, callback, and RxJS—offer ways to enforce a concurrency limit. Full parallel is simple but can freeze the UI; batch parallel reduces load spikes but may waste time waiting for the slowest request in each batch; controlled concurrency provides the best balance of throughput and order preservation. Choose the approach that fits your project’s stack and performance requirements.
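As a quick sanity check, the controlled-concurrency approach can be exercised with an instrumented get that tracks overlapping requests (a self-contained sketch; counter names and delays are illustrative):

```javascript
let inFlight = 0;
let peak = 0;

// Instrumented stand-in for the real request: records peak concurrency.
function get(id) {
  inFlight++;
  peak = Math.max(peak, inFlight);
  return new Promise(resolve => {
    setTimeout(() => {
      inFlight--;
      resolve({ id });
    }, Math.ceil(Math.random() * 5));
  });
}

// Controlled concurrency, as in the Promise-based Method 3 above.
function gets(ids, max) {
  return new Promise(resolve => {
    const res = [];
    let loadCount = 0;
    let curIndex = Math.min(max, ids.length) - 1;
    function load(id, index) {
      get(id).then(data => {
        res[index] = data;
        loadCount++;
        if (loadCount === ids.length) return resolve(res);
        if (curIndex + 1 < ids.length) {
          curIndex++;
          load(ids[curIndex], curIndex);
        }
      });
    }
    for (let i = 0; i < max && i < ids.length; i++) load(ids[i], i);
  });
}

const ids = Array.from({ length: 50 }, (_, i) => i);
gets(ids, 5).then(res => {
  console.log(peak <= 5);                       // true — the ceiling held
  console.log(res.every((r, i) => r.id === i)); // true — order is preserved
});
```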
Rare Earth Juejin Tech Community