High‑Concurrency Price Protection System: Rate Limiting, Degradation, Caching, Sharding, and Scalable Task Processing

This article describes how JD built a high‑concurrency price‑protection service for the 618 promotion, covering rate limiting, degradation, CDN and data caching, database sharding with smooth expansion, and a multi‑stage, fault‑tolerant task‑processing workflow that lets capacity scale out without a fixed ceiling.

JD Tech

During JD's 618 promotion, a massive surge in orders and frequent price changes required a robust price‑protection service that could maintain user experience, system stability, high availability, and fast calculations under extreme concurrency.

Rate Limiting (High Wall)

Normal user limiting: each server's maximum capacity is determined through load testing and enforced with a request counter (e.g., 10,000 requests per minute); requests beyond the threshold are degraded, and the counter resets at the start of the next window.
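The per‑server counter can be sketched as a fixed‑window rate limiter. This is an illustrative sketch, not JD's implementation; the class and parameter names are mine:

```python
import time

class FixedWindowLimiter:
    """Per-server request counter: allow at most `limit` requests per
    `window_s` seconds; the counter resets when the window rolls over."""

    def __init__(self, limit=10_000, window_s=60):
        self.limit = limit
        self.window_s = window_s
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window_s:
            self.window_start = now   # new window: reset the counter
            self.count = 0
        if self.count >= self.limit:
            return False              # over capacity: caller should degrade
        self.count += 1
        return True
```

A request that gets `False` here would be routed to the degradation path rather than the real service.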

Abusive user limiting: malicious traffic is filtered with IP/user blacklists, and Redis atomic counters enforce a stricter per‑IP limit (e.g., 120 requests per minute) so abusive clients cannot crowd out normal users.
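The Redis side typically boils down to an atomic INCR on a per‑IP, per‑window key plus an EXPIRE so keys clean themselves up. The sketch below uses a tiny in‑memory stand‑in for those two commands so it runs without a Redis server; key format and limits are illustrative:

```python
import time

class FakeRedis:
    """In-memory stand-in for the two Redis commands used below
    (INCR, EXPIRE), so the sketch is runnable without a server."""
    def __init__(self):
        self.store = {}
    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]
    def expire(self, key, seconds):
        pass  # TTL omitted here; real Redis would expire the key

def allow_ip(client, ip, limit=120, window_s=60):
    """Per-IP counter: INCR the key for this IP's current window;
    the first request in the window sets the TTL so the key expires
    on its own. INCR is atomic on real Redis, so concurrent requests
    from many gateway nodes still count correctly."""
    window = int(time.time() // window_s)
    key = f"rl:{ip}:{window}"
    count = client.incr(key)
    if count == 1:
        client.expire(key, window_s)
    return count <= limit
```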

Degradation

When an interface fails, a centralized configuration switch managed via Zookeeper disables the faulty service and returns a fallback result; locally persisted snapshots of the switch state keep degradation working even if the central configuration store itself becomes unreachable.
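The switch‑plus‑snapshot pattern can be sketched as follows. `fetch_remote` stands in for a Zookeeper read, and the snapshot path and field names are assumptions for illustration:

```python
import json
import os

def read_switch(fetch_remote, snapshot_path):
    """Read the degradation switch from the central store (e.g. Zookeeper).
    On success, refresh the local snapshot; if the central store is down,
    fall back to the last snapshot; with no snapshot, fail safe (degrade)."""
    try:
        state = fetch_remote()                  # e.g. a Zookeeper node read
        with open(snapshot_path, "w") as f:
            json.dump(state, f)                 # keep the local snapshot fresh
        return state
    except Exception:
        if os.path.exists(snapshot_path):
            with open(snapshot_path) as f:
                return json.load(f)             # central store unreachable
        return {"degraded": True}               # no snapshot either: degrade

def handle(state, compute):
    """Return a canned fallback while degraded, else run the real service."""
    if state.get("degraded"):
        return {"ok": False, "reason": "degraded"}
    return compute()
```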

Data Preparation (Broad Accumulation)

Static resources are cached via CDN, and frequently accessed data (e.g., order snapshots) are proactively cached to avoid real‑time DB hits, reducing backend pressure.
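Proactively caching hot data like order snapshots is a cache‑aside read: serve from cache when possible, hit the database only on a miss, then warm the cache for later requests. A minimal sketch with a dict‑backed stand‑in for the real cache (the function and field names are illustrative):

```python
class SimpleCache:
    """Dict-backed stand-in; production would use Redis with real TTLs."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, ttl_s):
        self._data[key] = value          # TTL ignored in this sketch

def get_order_snapshot(order_id, cache, load_from_db, ttl_s=300):
    """Cache-aside read: check the cache first, fall back to the DB only
    on a miss, then populate the cache so later reads skip the DB."""
    snapshot = cache.get(order_id)
    if snapshot is not None:
        return snapshot                  # served from cache, no DB hit
    snapshot = load_from_db(order_id)    # cold path: a single DB read
    cache.set(order_id, snapshot, ttl_s)
    return snapshot
```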

Simplify Processing (Simplify Complexity)

Front‑end simplification: load only essential data initially; use AJAX to fetch secondary information later, ensuring quick page rendering.

Back‑end simplification: a three‑step async flow—insert anti‑duplicate record, save application data, dispatch processing task—allows rapid intake and later acceleration of task execution.
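The three‑step intake can be sketched with in‑memory stand‑ins for the real pieces (a DB unique index for step 1, a table for step 2, JMQ for step 3); the point is that intake does only these three cheap writes, and all heavy calculation happens later in a worker:

```python
import queue

def submit_application(app, seen_orders, app_db, task_queue):
    """Three-step async intake sketch:
    1) insert the anti-duplicate record (unique index in the real DB),
    2) save the application data,
    3) dispatch a processing task (JMQ in the real system)."""
    if app["order_id"] in seen_orders:          # step 1: duplicate -> reject
        return False
    seen_orders.add(app["order_id"])
    app_db.append(app)                          # step 2: persist application
    task_queue.put({"order_id": app["order_id"]})  # step 3: dispatch task
    return True
```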

Merge Requests (Many into One)

Combine multiple price‑protection requests per order into a single AJAX call, reducing the number of backend connections.
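In payload terms, merging simply means folding the per‑SKU requests into one batched body (field names below are illustrative, not JD's actual API):

```python
def merge_protection_requests(order_id, sku_requests):
    """Fold all price-protection requests for one order into a single
    payload, so the page issues one AJAX call instead of one per SKU."""
    return {
        "order_id": order_id,
        "items": [{"sku": r["sku"], "paid_price": r["paid_price"]}
                  for r in sku_requests],
    }
```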

Front‑end and Back‑end Separation

Cluster resources by access source (mobile vs. PC) and by primary vs. secondary business flows, isolating non‑critical services to prevent them from affecting the core price‑protection workflow.

Database Sharding and Smooth Expansion

Data is sharded by user PIN with a hash‑mod algorithm (shard = hash(pin) % N). To expand from 2 to 8 shards, a binary‑tree‑style doubling adds new replicas under each existing master, promotes them to masters, switches routing to hash % 8, migrates data in phases, and finally cleans up the redundant rows left behind on each shard.
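The reason binary‑tree doubling is smooth is a modular property: because 8 is a multiple of 2, `(h % 8) % 2 == h % 2` for any hash `h`, so after routing switches from `hash % 2` to `hash % 8`, every PIN still lands on a descendant of its old shard. Replicas already hold the data, and only redundant rows need deleting. A sketch (CRC32 is my choice of stable hash for illustration; Python's built‑in `hash()` is salted per process):

```python
import zlib

def hash_pin(pin):
    """Stable hash of a user PIN (CRC32 for the sketch)."""
    return zlib.crc32(pin.encode("utf-8"))

def shard_for(pin, n_shards):
    """Hash-mod routing: every read/write for this PIN goes to one shard."""
    return hash_pin(pin) % n_shards
```

The test below checks the no‑cross‑shard‑move property for the 2 → 8 expansion.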

Unlimited Processing

Separate order‑intake clusters from task‑processing clusters: intake writes to a business DB, publishes tasks to JMQ, and dedicated processing clusters consume the messages, allowing independent scaling of each part.
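The separation can be shown in miniature with a queue between the two sides; a `queue.Queue` and a worker thread stand in for JMQ and a separate consuming cluster, which is the assumption to keep the sketch self‑contained:

```python
import queue
import threading

def run_pipeline(orders):
    """Intake/processing separation: intake only persists and publishes;
    a worker (standing in for a separate JMQ-consuming cluster) does the
    actual work. The two sides share nothing but the queue, so each can
    be scaled independently."""
    business_db, results = [], []
    tasks = queue.Queue()

    def intake(order):
        business_db.append(order)        # fast write to the business DB
        tasks.put(order["id"])           # publish the task (JMQ in production)

    def worker():
        while True:
            task_id = tasks.get()
            if task_id is None:          # poison pill: stop the worker
                break
            results.append(f"processed:{task_id}")

    t = threading.Thread(target=worker)
    t.start()
    for order in orders:
        intake(order)
    tasks.put(None)
    t.join()
    return business_db, results
```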

Fast and Reliable Task Execution (Rapid Victory)

Tasks are modeled as workflow instances: a template generates an order, which contains multiple task nodes. The system progresses through four stages—periodic task fetching, data chunking, removal of the template dimension, and message‑driven execution—to achieve high throughput with fault tolerance.
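The fetch‑and‑chunk stages can be sketched as one engine cycle; function names and the retry behavior in the comments are illustrative assumptions, not JD's actual engine:

```python
def chunk(tasks, size):
    """Stage 2: split fetched task rows into bounded chunks so each
    message drives a small, independently retryable unit of work."""
    return [tasks[i:i + size] for i in range(0, len(tasks), size)]

def run_cycle(fetch_pending, publish, chunk_size=100):
    """One engine cycle: stage 1 fetches pending task nodes, stage 2
    chunks them, and each chunk is published as a message that drives
    execution. A chunk that fails stays pending and is simply picked
    up again on the next cycle, which is where the fault tolerance
    comes from."""
    pending = fetch_pending()              # stage 1: periodic task fetching
    for batch in chunk(pending, chunk_size):
        publish(batch)                     # message-driven execution
    return len(pending)
```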

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Backend, sharding, caching, High Concurrency, rate limiting, Task Processing
Written by

JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.
