
Designing a High‑Concurrency Flash‑Sale Architecture: Nginx, Redis, MQ, and Safety Measures

This article presents a comprehensive backend architecture for flash‑sale systems, covering Nginx static‑dynamic separation, Redis‑based rate limiting and distributed locks, MQ buffering, database sharding, safety protections, page optimization, and detailed Nginx configuration examples to handle massive concurrent traffic.

Top Architect

Architecture Diagram

Nginx + front‑back separation + CDN cache + gateway (rate limiting + circuit breaking) form the routing layer, followed by a Redis cluster for hot‑data caching and distributed locks, an MQ cluster, a business processing layer, and a database layer with read/write separation and hot‑spot isolation.

Characteristics of Flash‑Sale Business

Massive simultaneous page refreshes.

Massive simultaneous purchase attempts.

Potential malicious competition from automated bots.

Overall Approach

Peak‑shaving rate limiting: the front end and Redis intercept requests; only requests that successfully decrement the Redis counter proceed downstream.
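The counter gate can be sketched with an in-memory stand-in for the Redis counter (in production this is an atomic DECR or a small Lua script on the Redis cluster; the FlashSaleGate name and quota value here are illustrative):

```python
import threading

class FlashSaleGate:
    """In-memory stand-in for the Redis stock counter (decrement-if-positive)."""

    def __init__(self, quota):
        self._remaining = quota
        self._lock = threading.Lock()  # Redis provides this atomicity natively

    def try_acquire(self):
        # Mirrors a Lua script: if counter > 0, decrement and admit; else reject.
        with self._lock:
            if self._remaining > 0:
                self._remaining -= 1
                return True
            return False

gate = FlashSaleGate(quota=100)
passed = sum(gate.try_acquire() for _ in range(100_000))
print(passed)  # 100 — only these requests proceed downstream
```

However many requests arrive, at most `quota` of them ever reach the order-processing layer; the rest are rejected at the gate.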

MQ buffers orders to protect the order‑processing layer; consumers fetch tasks based on their capacity, controlling downstream pressure.
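A minimal sketch of the buffering idea, with a bounded in-process queue standing in for the MQ cluster (queue size, worker count, and names are illustrative):

```python
import queue
import threading

order_queue = queue.Queue(maxsize=1000)  # bounded buffer = back-pressure on producers
processed = []

def producer(n):
    # The web tier enqueues admitted requests instead of calling the order service.
    for order_id in range(n):
        order_queue.put(order_id)

def consumer():
    # Each consumer pulls work at its own pace, so downstream load stays bounded.
    while True:
        try:
            order_id = order_queue.get(timeout=0.2)
        except queue.Empty:
            return  # queue drained: the spike has been absorbed
        processed.append(order_id)

threading.Thread(target=producer, args=(500,)).start()
workers = [threading.Thread(target=consumer) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(processed))  # 500 — every buffered order eventually handled
```

The key property is that consumer count and pull rate, not the incoming spike, determine the load on the order service.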

Introduce answer‑captcha and random request delays to smooth traffic spikes.

Security protection: front‑end validates activity start time and prevents duplicate clicks; IP/UserID rate limiting and blacklists; overload discard when QPS or CPU exceeds thresholds.
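The overload-discard rule can be sketched as a one-second sliding window that rejects requests beyond a QPS threshold (LoadShedder and the threshold are illustrative; a real gateway would also watch CPU):

```python
import time
from collections import deque

class LoadShedder:
    """Discard requests once QPS over a one-second window exceeds the threshold."""

    def __init__(self, max_qps):
        self.max_qps = max_qps
        self._stamps = deque()  # timestamps of admitted requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        while self._stamps and now - self._stamps[0] >= 1.0:
            self._stamps.popleft()        # forget requests outside the window
        if len(self._stamps) >= self.max_qps:
            return False                  # over threshold: discard the request
        self._stamps.append(now)
        return True

shedder = LoadShedder(max_qps=3)
print([shedder.allow(now=10.0) for _ in range(5)])  # [True, True, True, False, False]
print(shedder.allow(now=11.0))                      # True — the window has rolled over
```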

Page optimization: simplify flash‑sale pages; minimize image sizes and JS/CSS payloads; serve static resources separately.

Asynchronous processing: after Redis lock acquisition, push subsequent tasks to a thread pool; the thread pool forwards tasks to MQ for asynchronous handling by order, inventory, payment, and coupon services.
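A sketch of that hand-off, assuming a thread pool that publishes one message per downstream service; the in-process queues stand in for MQ topics, and all names are illustrative:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

# One in-process queue per downstream service, standing in for MQ topics.
topics = {name: queue.Queue() for name in ("order", "inventory", "payment", "coupon")}
pool = ThreadPoolExecutor(max_workers=8)  # frees the request thread immediately

def publish(topic, message):
    topics[topic].put(message)  # stand-in for an MQ producer send

def handle_after_lock(user_id, sku_id):
    # Runs only after the Redis lock/counter admitted the request; each
    # downstream service later consumes its own topic asynchronously.
    message = {"user": user_id, "sku": sku_id}
    for topic in topics:
        pool.submit(publish, topic, message)

handle_after_lock("u42", "sku-1")
pool.shutdown(wait=True)  # in a server this pool would stay alive for the process
print({name: q.qsize() for name, q in topics.items()})  # each topic holds 1 message
```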

Hot‑spot isolation: separate flash‑sale traffic from normal services via cluster routing, MQ segregation, and optional database sharding.

Avoid single points of failure; degrade non‑essential features during peak load.

Nginx Design Details

Static‑dynamic separation, avoiding Tomcat for static resources.

server {
    listen 8088;
    location ~ \.(gif|jpg|jpeg|png|bmp|swf)$ {
        root C:/Users/502764158/Desktop/test;
    }
    location ~ \.(jsp|do)$ {
        proxy_pass http://localhost:8082;
    }
}

Enable gzip compression to reduce static file size and bandwidth usage.

gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_comp_level 3;
gzip_disable "MSIE [1-6]\.";
gzip_types text/plain application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;

Configure upstream cluster load balancing and failover parameters (fail_timeout, max_fails, proxy_connect_timeout).

upstream netitcast.com {
    server 127.0.0.1:8080 max_fails=2 fail_timeout=5s;
    server 127.0.0.1:38083 max_fails=2 fail_timeout=5s;
    server 127.0.0.1:8083 max_fails=2 fail_timeout=5s;
}
server {
    listen 88;
    server_name localhost;
    location / {
        proxy_pass http://netitcast.com;
        proxy_connect_timeout 1s;
    }
}

Integrate Varnish for static‑resource caching and Tengine for overload protection.

Page Optimization Details

Reduce interaction pressure by consolidating JS/CSS files and limiting image usage on flash‑sale pages.

Bundle JS and CSS into a few files to minimize round trips between the browser and backend.

Avoid large or numerous images on flash‑sale pages.

Security controls:

Validate request timing; reject requests before the flash‑sale starts, with backend verification.

Asynchronous purchase via AJAX instead of full page refresh.

Redis‑based IP and UserID rate limiting.
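A fixed-window limiter per IP or userId can be sketched as follows; in production the counter lives in Redis (INCR plus EXPIRE) so every gateway node shares state. Class and parameter names are illustrative:

```python
import math

class WindowRateLimiter:
    """Fixed-window counter per key (IP or userId); in production this is a
    Redis INCR with EXPIRE so all gateway nodes share the same counters."""

    def __init__(self, limit, window_seconds=1.0):
        self.limit = limit
        self.window = window_seconds
        self._counters = {}  # (key, window index) -> request count

    def allow(self, key, now):
        bucket = (key, math.floor(now / self.window))
        count = self._counters.get(bucket, 0)
        if count >= self.limit:
            return False          # this IP/userId exhausted its window
        self._counters[bucket] = count + 1
        return True

limiter = WindowRateLimiter(limit=2)
print([limiter.allow("1.2.3.4", now=0.1) for _ in range(3)])  # [True, True, False]
print(limiter.allow("1.2.3.4", now=1.2))                      # True — next window
```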

Redis Cluster Applications

Distributed (pessimistic) locks.
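The usual pattern is SET key token NX EX ttl, released only if the stored token still matches (in real Redis the compare-and-delete must be a Lua script so it stays atomic). A minimal in-memory sketch, with FakeRedis as an illustrative stand-in:

```python
import time
import uuid

class FakeRedis:
    """Tiny stand-in for the SET key value NX EX / compare-and-delete pattern."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        current = self._store.get(key)
        if current and current[1] > now:
            return False                 # unexpired lock held by someone else
        self._store[key] = (value, now + ttl)
        return True

    def delete_if_value(self, key, value, now=None):
        # In real Redis this GET-compare-DEL must run as one Lua script,
        # so a slow holder never deletes a lock that was re-acquired.
        now = time.monotonic() if now is None else now
        current = self._store.get(key)
        if current and current[1] > now and current[0] == value:
            del self._store[key]
            return True
        return False

redis = FakeRedis()
token = str(uuid.uuid4())  # unique token ties the lock to this holder
print(redis.set_nx_ex("lock:sku-1", token, ttl=5, now=0.0))    # True — lock acquired
print(redis.set_nx_ex("lock:sku-1", "other", ttl=5, now=1.0))  # False — still held
print(redis.delete_if_value("lock:sku-1", token, now=2.0))     # True — safe release
```

The TTL guarantees the lock frees itself if a holder crashes; the token guarantees only the holder can release it early.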

Cache hot data such as inventory; optionally use local cache with DB consistency.

Example SQL for inventory decrement (the stock > 0 guard prevents overselling): UPDATE inventory SET stock = stock - 1 WHERE id = ? AND stock > 0;

Captcha Design

Prevent bot interference and give genuine users a chance.

Delay requests to spread traffic spikes.

Two approaches:

Failing verification triggers a full page refresh (e.g., 12306), increasing server load but deterring bots.

Failing verification shows an error without refresh; client‑side JS validates answers using pre‑loaded MD5‑hashed solutions, incorporating userId and primary key to ensure uniqueness.
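The client-side check can be sketched as follows, assuming the digest salts the answer with userId and the activity's primary key (function and field names are illustrative):

```python
import hashlib

def answer_digest(answer, user_id, activity_id):
    # userId and the activity's primary key salt the hash, so each user's
    # pre-loaded digest is unique and cannot be shared between accounts.
    raw = f"{answer}:{user_id}:{activity_id}".encode("utf-8")
    return hashlib.md5(raw).hexdigest()

# The server ships the digest with the page; the client hashes the typed
# answer the same way and compares locally, saving a server round trip.
expected = answer_digest("42", user_id="u1001", activity_id="act-7")
print(answer_digest("42", "u1001", "act-7") == expected)  # True
print(answer_digest("41", "u1001", "act-7") == expected)  # False — rejected client-side
```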

Response time analysis can also flag automated attempts (e.g., answers under 1 s).

Key Considerations

Split transactions to improve concurrency (e.g., separate inventory decrement from order creation).

Database sharding introduces distributed transaction challenges; monitor and manually reconcile if needed.

Tags: backend, architecture, Redis, high concurrency, MQ, Nginx, flash sale
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
