
Rethinking “Add Machines” and “Traffic Shifting” as Effective Scaling Strategies

The article explains why adding servers and shifting traffic, often dismissed as low-effort tricks, in fact require substantial architectural work in state management, load balancing, sharding, and multi-datacenter coordination before they can reliably handle high-traffic scenarios.

Qunar Tech Salon

This article, originally published on the HelloJava WeChat account, examines two common scaling tactics—adding machines and traffic shifting—and argues that they are not low‑effort solutions.

First point: Adding machines – Early systems typically consist of a single application server and one or two database servers; when traffic spikes, the instinct is to simply add more machines. However, expanding an application tier from one to two servers introduces challenges such as maintaining consistent application state (e.g., user login information) across instances and handling clustering concerns like load balancing and health checks. Adding database servers is even more complex, requiring sharding or partitioning to actually relieve pressure; otherwise the added capacity is ineffective. As the number of application servers grows, database connection pools become bottlenecks, meaning that simply buying more hardware does not solve the problem.
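The sharding point can be made concrete with a small sketch. The class and numbers below are illustrative, not from the original article: with naive modulo sharding, adding one database node changes the shard assignment of most existing keys, so "just buy another database server" implies a large, planned data migration rather than an instant capacity gain.

```java
// Hypothetical sketch: why adding a DB node without a resharding plan
// invalidates most existing key placements under modulo sharding.
public class ShardDemo {
    static int shardFor(String userId, int numShards) {
        // Stable, non-negative mapping of the shard key to a shard index
        return Math.floorMod(userId.hashCode(), numShards);
    }

    public static void main(String[] args) {
        int moved = 0, total = 10_000;
        for (int i = 0; i < total; i++) {
            String user = "user-" + i;
            // Grow the cluster from 4 shards to 5 and count remapped keys
            if (shardFor(user, 4) != shardFor(user, 5)) moved++;
        }
        // Roughly 80% of keys land on a different shard after adding one node,
        // each of which must be physically migrated before the capacity helps.
        System.out.println("keys remapped: " + moved + " / " + total);
    }
}
```

This is one reason real systems reach for consistent hashing or pre-split partitions, which bound how much data moves when capacity is added.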

When a system scales to hundreds or thousands of services, deciding how many machines each service needs and dealing with physical constraints (e.g., a data center that cannot house all new servers) become major architectural questions. A strong architect must balance manpower, expected growth, cost control, and technical debt to design an architecture that can rely on adding machines while preparing for the next scaling cycle.
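Capacity planning of the kind described above often reduces to simple arithmetic per service. The formula and figures below are an illustrative sketch, not the article's own method: machines needed follow from peak load, per-instance throughput, and a headroom factor that covers failures and traffic shifted in from another site.

```java
// Hypothetical capacity sketch: instances a service needs, given peak QPS,
// per-instance capacity, and a redundancy headroom factor.
public class CapacityPlan {
    static int instancesNeeded(double peakQps, double qpsPerInstance, double headroom) {
        // headroom > 1.0 reserves capacity for instance failures and
        // for absorbing traffic shifted from a failed data center.
        return (int) Math.ceil(peakQps * headroom / qpsPerInstance);
    }

    public static void main(String[] args) {
        // e.g. 12,000 QPS peak, 500 QPS per instance, 1.5x headroom -> 36 instances
        System.out.println(instancesNeeded(12_000, 500, 1.5));
    }
}
```

Multiplied across hundreds of services, the headroom factor is exactly where the cost-versus-availability trade-off the article mentions gets decided.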

Second point: Traffic shifting – Traffic shifting involves moving traffic from one data center to another, often within the same city, to quickly recover from failures. This requires solving technical issues such as seamless database address switching (master-slave failover without restarting applications) and the limits of DNS/VIP-based switchover, which performs no real health checks and is subject to propagation delays. Shifting traffic to a remote data center adds latency challenges and typically demands a multi-active write setup to keep data correct across sites.
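The database-address-switching problem can be sketched in application code. This is a minimal illustration with hypothetical names, not Qunar's implementation: the application reads the active endpoint through an atomic reference, and a periodic checker with a real health probe swaps it, so failover needs no restart and no DNS propagation.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;

// Hypothetical sketch of in-process endpoint switching: the current database
// address lives behind an AtomicReference; a background checker with an
// injected health probe swaps it to a healthy candidate on failure.
public class FailoverEndpoint {
    private final AtomicReference<String> active;
    private final List<String> candidates;
    private final Predicate<String> isHealthy; // health probe, injectable for tests

    FailoverEndpoint(List<String> candidates, Predicate<String> isHealthy) {
        this.candidates = candidates;
        this.isHealthy = isHealthy;
        this.active = new AtomicReference<>(candidates.get(0));
    }

    String current() { return active.get(); }

    // Called periodically; unlike bare DNS/VIP removal, this applies an
    // actual health check before and after switching.
    void checkAndSwitch() {
        String cur = active.get();
        if (isHealthy.test(cur)) return;
        for (String c : candidates) {
            if (!c.equals(cur) && isHealthy.test(c)) {
                active.compareAndSet(cur, c); // atomic switch; readers never block
                return;
            }
        }
    }
}
```

Usage would look like `new FailoverEndpoint(List.of("db-a", "db-b"), probe)` with `current()` consulted on each connection checkout; production systems layer connection-pool draining and configuration pushes on top of this idea.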

Like adding machines, traffic shifting demands robust technical support but can dramatically improve system availability. With multiple data centers and fast traffic‑shifting mechanisms, any single‑site failure can be mitigated, extending overall uptime.
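The availability gain is easy to quantify. The incident counts and durations below are illustrative assumptions, not figures from the article: fast traffic shifting replaces a long in-place repair (MTTR) with a short detection-plus-shift window.

```java
// Hypothetical availability sketch: uptime as a function of incident
// frequency and downtime per incident.
public class AvailabilityMath {
    static double availability(double failuresPerYear, double downtimeMinutesPerFailure) {
        double minutesPerYear = 365.0 * 24 * 60;
        return 1.0 - failuresPerYear * downtimeMinutesPerFailure / minutesPerYear;
    }

    public static void main(String[] args) {
        // Assume 4 site-level incidents a year: 90 min to repair in place,
        // versus 6 min to detect the failure and shift traffic away.
        System.out.printf("repair in place: %.5f%n", availability(4, 90));
        System.out.printf("shift traffic:   %.5f%n", availability(4, 6));
    }
}
```

Under these assumed numbers, the same failure rate yields roughly an extra "nine" of availability, which is the payoff the article attributes to multi-datacenter traffic shifting.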

Tags: Backend, System Architecture, Load Balancing, Scaling, High Traffic, Traffic Shifting
Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
