
Collaborative Load Testing for JD.com 11.11 Event: Organizational Changes, Scale Expansion, and ForceBot Traffic Recording & Replay

The article details JD.com's coordinated effort to prepare for the 11.11 shopping festival by expanding load‑testing scale, improving cross‑team collaboration, and enhancing the ForceBot platform with traffic recording and replay capabilities to achieve more realistic and efficient full‑chain performance evaluations.

JD Retail Technology

In the weeks leading up to JD.com's 11.11 shopping festival, two all‑night load‑testing drills were conducted, marking the final stage of technical preparation, in which optimization, scaling, verification, and fault drills dominate the workload.

Previously, performance testing was handled solely by a dedicated team; this year the effort shifted to a joint operation involving system developers and testers, a shift that proved the biggest challenge of the preparation cycle. The ForceBot team provided tools, training, and technical support to make the collaboration work.

By sharing responsibilities, development teams can focus on both componentized and legacy systems while handling the massive testing load. Testers assist developers in defining test scopes and timelines based on project releases and upstream/downstream dependencies, resulting in more comprehensive and objective test scenarios.

The collaborative approach has shortened testing cycles compared to the previous 6.18 preparation and enriched test scenarios. By mid‑October, the ForceBot platform had executed tens of thousands of tests, accumulated over 4,000 scripts, and fully equipped all systems for the upcoming 11.11 event.

In terms of scale, the number of load‑generation machines increased by 25% compared with the 6.18 event, introducing new challenges in traffic generation and real‑time result calculation. New capabilities were added, including thousand‑machine single‑task testing, mixed‑scenario construction, and million‑TPS traffic generation.
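The article does not describe how ForceBot distributes a million‑TPS target across its fleet, but the general idea behind cluster‑wide rate generation is straightforward: split the target rate evenly across machines and have each generator pace its own requests by absolute deadlines rather than bursts. The sketch below is illustrative only; `paced_send` and its parameters are not ForceBot APIs.

```python
import time

def paced_send(total_tps: int, machines: int, duration_s: float, send) -> int:
    """Drive one load generator's share of a cluster-wide TPS target.

    total_tps / machines gives this machine's rate; requests are paced at a
    fixed inter-arrival interval, scheduled against absolute timestamps so
    that slow iterations do not accumulate drift.
    """
    my_tps = total_tps / machines
    interval = 1.0 / my_tps
    deadline = time.monotonic() + duration_s
    next_at = time.monotonic()
    sent = 0
    while next_at < deadline:
        send()                        # fire one request (stubbed here)
        sent += 1
        next_at += interval           # schedule next slot by absolute time
        delay = next_at - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return sent
```

With 1,000 machines each pacing 1,000 requests per second this way, the fleet as a whole approximates a million TPS; real‑time result calculation then becomes an aggregation problem over the per‑machine counters.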

The testing scope expanded beyond the traditional “golden flow” to include innovative services like Jingxi and Apollo, covering product details, search, cart, promotions, coupons, order placement, payment, and credit services across dozens of business lines and hundreds of systems, moving toward a fully automated, regularized testing regime.

ForceBot itself was upgraded with traffic recording and replay functions. Public‑network traffic is captured via a splitter, filtered, desensitized, encrypted, and stored for on‑demand replay, enabling realistic load generation while ensuring data security. For internal traffic, integration with the Moonlight Box system allows recording, storage in JMQ4, and flexible replay modes such as live‑record‑and‑replay and traffic amplification.
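The internals of ForceBot's recording pipeline are not published, but the capture‑side steps the article names (filter, desensitize, store) and amplified replay can be sketched as follows. Everything here is an assumption for illustration: the field names in `SENSITIVE_FIELDS` are invented, records are modeled as plain dicts, and encryption plus JMQ4 storage are stubbed out with JSON serialization.

```python
import hashlib
import json

# Illustrative sensitive-field names; a real deployment would use a
# maintained classification of PII fields, not a hard-coded set.
SENSITIVE_FIELDS = {"phone", "id_card", "address"}

def desensitize(record: dict) -> dict:
    """Replace sensitive values with stable hashes so replayed traffic
    keeps a realistic shape but leaks no user data."""
    return {
        k: hashlib.sha256(str(v).encode()).hexdigest()[:16]
        if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

def capture_pipeline(raw_records, keep):
    """filter -> desensitize -> serialize-and-store.

    `keep` drops irrelevant traffic (health checks, bots, etc.);
    json.dumps stands in for the encrypt-and-store step.
    """
    stored = []
    for rec in raw_records:
        if not keep(rec):
            continue
        stored.append(json.dumps(desensitize(rec)))
    return stored

def replay(stored, factor, send):
    """Traffic amplification: replay each recorded request `factor` times."""
    for payload in stored:
        rec = json.loads(payload)
        for _ in range(factor):
            send(rec)
```

The `factor` argument mirrors the traffic‑amplification mode mentioned above: a recorded stream can be multiplied to simulate peak load well beyond what was originally captured.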

These enhancements reduce the dependency on developers for data preparation, mitigate resource and security risks, and lay the foundation for a continuous, automated performance testing practice that supports both public and private network scenarios.

The Engineering Efficiency Department’s tools team concludes that the collaborative, scaled, and tool‑enhanced approach significantly improves testing quality and efficiency, positioning JD.com for a successful 11.11 performance showcase.

Tags: operations, traffic replay, load testing, performance engineering, JD.com, ForceBot
Written by JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.