Load Balancing Layer Design Scenarios and Solution Architectures
This article examines various business load scenarios for a logistics management system and presents four progressive load‑balancing architectures—ranging from a single Nginx/HAProxy instance to DNS round‑robin with LVS and Keepalived—while defining key performance terms and outlining topics for future discussion.
Building on the previous article that introduced the layered architecture of a standard web system, this piece focuses on the load‑balancing layer, explaining its purpose of distributing external traffic across internal processing nodes and outlining typical design considerations such as cost, scalability, and operational complexity.
Different load scenarios
Four representative logistics‑system scenarios are described:
(1) a national logistics park with modest traffic but a requirement for future scalability;
(2) rapid growth after six months, reaching 10 × 10⁴ daily RUV and 5 × 10⁵ PV;
(3) provincial expansion involving tens of thousands of agents and 2.5 × 10⁶ PV;
(4) nationwide rollout to over a thousand parks, with millions of agents and vehicles, demanding massive peak‑capacity planning and integration with government vehicle‑information services.
Load‑balancing solution concepts
2.1 Independent Nginx/HAProxy solution – suitable for the first low‑traffic scenario, providing basic request routing and error handling.
2.2 Nginx/HAProxy + Keepalived – adds hot‑standby capability for increased stability as traffic grows.
2.3 LVS (DR) + Keepalived + Nginx – introduces LVS at the front to handle higher throughput and traffic spikes, delegating requests to a cluster of Nginx instances.
2.4 DNS round‑robin + LVS (DR) + Keepalived + Nginx – combines DNS‑level load distribution with two LVS groups and per‑service Nginx layers, enabling billion‑scale daily PV and independent load‑balancing for each subsystem (user info, order, vehicle info).
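For the simplest tier (2.1), the request routing and error handling amount to a reverse‑proxy pool. A minimal Nginx sketch, assuming hypothetical backend addresses and an illustrative upstream name:

```nginx
# Hypothetical backend pool; the name "logistics_app", IPs, and ports are illustrative.
upstream logistics_app {
    server 192.168.10.11:8080 weight=2;   # weighted distribution
    server 192.168.10.12:8080;
    server 192.168.10.13:8080 backup;     # used only when the others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://logistics_app;
        # Basic error handling: retry the next backend on failure
        proxy_next_upstream error timeout http_502;
    }
}
```

Adding Keepalived on a second Nginx node (tier 2.2) leaves this configuration unchanged; only a floating virtual IP is layered on top.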
Key notes: when LVS fronts the Nginx cluster, the Nginx nodes no longer need Keepalived for hot standby, because LVS health‑checks the nodes and removes failed ones from rotation; the LVS director itself, however, should still be protected by a Keepalived standby node.
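The division of labor in that note can be seen in a Keepalived configuration sketch for the LVS director (tier 2.3): the vrrp_instance block protects the director itself, while the virtual_server block supplies the health checks that make per‑Nginx Keepalived unnecessary. All IPs here are hypothetical; the standby director would run the same file with state BACKUP and a lower priority.

```conf
vrrp_instance VI_1 {
    state MASTER            # standby node uses BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # standby node uses a lower value
    advert_int 1
    virtual_ipaddress {
        192.168.10.100      # VIP announced to clients / DNS
    }
}

virtual_server 192.168.10.100 80 {
    delay_loop 6            # health-check interval (seconds)
    lb_algo rr              # round-robin scheduling
    lb_kind DR              # LVS direct-routing mode
    protocol TCP
    real_server 192.168.10.11 80 {   # an Nginx node
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 192.168.10.12 80 {   # another Nginx node
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
}
```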
Terminology
TPS (transactions per second) measures backend processing capacity; PV (page views) counts total page requests; UV (unique visitors) counts distinct IPs; RUV (repeat user visitors) counts repeated accesses by the same user within a time window.
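These terms connect to capacity planning through simple arithmetic. A back‑of‑envelope sketch for scenario (2)'s 5 × 10⁵ daily PV, using the common 80/20 peak heuristic (an assumption of this example, not a figure from the article):

```python
# Estimate average and peak request rates from daily PV.
# Assumption (not from the article): 80% of traffic arrives in 20% of the day.
DAILY_PV = 5 * 10**5
SECONDS_PER_DAY = 24 * 3600

avg_rps = DAILY_PV / SECONDS_PER_DAY                    # average requests/second
peak_rps = (DAILY_PV * 0.8) / (SECONDS_PER_DAY * 0.2)   # 80/20 peak heuristic

print(f"average ≈ {avg_rps:.1f} req/s, peak ≈ {peak_rps:.1f} req/s")
```

At roughly 6 req/s average and 23 req/s peak, a single Nginx/HAProxy node (tier 2.1) is ample; the heavier tiers only become necessary at the provincial and national scales.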
The next article will dive into the core principles and installation details of LVS, Keepalived, and Nginx, covering load‑balancing algorithms such as hash, round‑robin, and weight‑based distribution.
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.