Mastering Million-Request Concurrency with Nginx, LVS, and Keepalived

This guide explains how to achieve million-level concurrent request handling by combining Nginx, LVS, and Keepalived. It covers the architecture layers, load-balancing design, and high-availability configuration, with practical sample configurations for each component in modern large-scale web services.

Architect Chen

Architecture Overview

High concurrency is a core challenge for large‑scale systems. A classic solution that can handle up to a million concurrent connections combines Nginx, Linux Virtual Server (LVS), and Keepalived. The three layers work together to provide fast packet forwarding, application‑level processing, and automatic failover.

Million concurrency architecture: Nginx + LVS + Keepalived

LVS Layer Design

LVS operates at layer 4, forwarding TCP/UDP traffic based on IP and port. Deployed at the network edge, it runs in kernel space, incurring minimal overhead and supporting hundreds of thousands of connections per node.

                 ┌───── CDN ─────┐
                         │
                 VIP (Keepalived, VRRP)
                ┌────────┴────────┐
        ┌───────▼───────┐ ┌───────▼───────┐
        │  LVS master   │ │  LVS backup   │
        └───────┬───────┘ └───────────────┘
                │  (DR forwarding)
        ┌───────▼──────────────┐
        │ Nginx cluster (10-50)│
        └───────┬──────────────┘
                │
        ┌───────▼──────────────┐
        │ Application servers  │
        └──────────────────────┘
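On the director, an LVS-DR virtual service is typically built with `ipvsadm`. The following is a minimal sketch; the VIP 192.168.1.100 matches the Keepalived config later in this article, while the real-server addresses and weights are illustrative assumptions, not values from the original:

```shell
# Illustrative LVS-DR setup on the director (assumed real-server addresses).
VIP=192.168.1.100

# Bind the VIP on the director's public interface.
ip addr add ${VIP}/32 dev eth0

# Create a TCP virtual service on the VIP with weighted round-robin scheduling.
ipvsadm -A -t ${VIP}:80 -s wrr

# Add the Nginx real servers in Direct Routing mode (-g).
ipvsadm -a -t ${VIP}:80 -r 192.168.1.11:80 -g -w 1
ipvsadm -a -t ${VIP}:80 -r 192.168.1.12:80 -g -w 1

# Inspect the resulting forwarding table.
ipvsadm -Ln
```

In production, Keepalived's `virtual_server` blocks usually generate these rules (with health checks) instead of running `ipvsadm` by hand; the commands above just make the DR mechanics explicit.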

Nginx Layer Design

Nginx provides layer 7 load balancing and reverse proxying. It terminates HTTP/HTTPS and performs request routing, caching, rate limiting, and static/dynamic content separation. Deployed behind LVS, it handles all application-level traffic.
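As a sketch of this layer, an Nginx instance might combine a high connection cap, per-client rate limiting, static/dynamic separation, and upstream keepalive as below. The backend addresses, rate limits, and paths are assumed values for illustration, not taken from the original:

```nginx
# Illustrative nginx.conf fragment (assumed backends and limits).
worker_processes auto;

events {
    worker_connections 65535;   # raise the per-worker connection cap
}

http {
    # Per-client rate limiting (assumed 100 req/s per source IP).
    limit_req_zone $binary_remote_addr zone=perip:10m rate=100r/s;

    upstream app_servers {
        least_conn;                                     # prefer the least-loaded backend
        server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;
        server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;
        keepalive 256;                                  # reuse upstream connections
    }

    server {
        listen 80 reuseport;
        limit_req zone=perip burst=200;

        # Static/dynamic separation: serve assets locally, proxy the rest.
        location /static/ {
            root /var/www;
            expires 7d;
        }

        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;         # required for upstream keepalive
            proxy_set_header Connection "";
        }
    }
}
```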

Keepalived Layer Design

Keepalived supplies high availability for the LVS layer by managing a virtual IP (VIP) using VRRP. It ensures automatic failover: if the master node fails, the VIP migrates to the backup node without service interruption.

vrrp_instance VI_1 {
    state MASTER
    interface eth0          # network interface
    virtual_router_id 51    # same ID on master and backup
    priority 100           # higher priority = master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100      # virtual IP (VIP)
    }
}
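The backup node runs a near-identical configuration; a sketch of the differences (same `virtual_router_id` and VIP, `state BACKUP`, and a lower `priority`):

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51    # must match the master
    priority 90             # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}
```

When the backup stops receiving VRRP advertisements from the master, it promotes itself and takes over the VIP.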

Request Flow

Client → VIP → LVS with Keepalived (master/backup) → Direct Routing (DR) → Nginx cluster → Application servers (e.g., Tomcat). In DR mode, LVS rewrites only the destination MAC address; response traffic flows from Nginx back to the client directly, bypassing the director, which is why a single LVS node can sustain such high throughput.
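For DR mode to work, each Nginx real server must accept packets addressed to the VIP without itself answering ARP for it, so the director remains the VIP's only ARP responder. A minimal sketch, assuming the VIP 192.168.1.100 from the Keepalived configuration:

```shell
# On each Nginx real server (LVS-DR): hold the VIP on loopback,
# but never advertise it via ARP.
ip addr add 192.168.1.100/32 dev lo

# arp_ignore=1: answer ARP only for addresses configured on the receiving interface.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_ignore=1

# arp_announce=2: use the best local address as the ARP source, never the VIP.
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_announce=2
```

These sysctls belong in /etc/sysctl.conf (or a drop-in) so they survive reboots.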

This architecture can horizontally scale the Nginx cluster to 10‑50 instances, supporting millions of concurrent connections while maintaining high availability.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: architecture, high concurrency, Nginx, LVS, Keepalived
Written by Architect Chen, sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.