
Why Load Balancing Matters: Understanding DNS, Hardware, and Software Strategies

This article explains the origins, types, and inner workings of load balancing, covering DNS, hardware, and software solutions; introduces the Linux Virtual Server (LVS) architecture and netfilter fundamentals; and compares the DR, NAT, and Tunnel modes, with their advantages, drawbacks, and ideal use cases.


Origin of Load Balancing

In the early stages of a service, a single server is used, but as traffic grows the server hits a performance ceiling, requiring a cluster of servers to increase overall processing capacity.

To expose a unified entry point, a traffic scheduler (load balancer) distributes incoming requests across the cluster using balancing algorithms.
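The simplest such algorithm is round-robin: hand each new request to the next server in the list. A minimal bash sketch with hypothetical backend IPs (a real balancer would also health-check the pool):

```shell
# Hypothetical backend pool.
servers=(10.0.0.1 10.0.0.2 10.0.0.3)
i=0
for req in 1 2 3 4 5; do
  # Pick the next server in rotation, wrapping around the pool.
  echo "request $req -> ${servers[$((i % ${#servers[@]}))]}"
  i=$((i + 1))
done
```

With five requests and three servers, the rotation wraps: the fourth request lands on the first server again.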

Benefits of using load balancing include improved overall performance, scalability, and availability.

Load Balancing Types

Broadly, load balancers can be classified into three categories: DNS‑based, hardware, and software solutions.

1) DNS‑Based Load Balancing

DNS load balancing resolves a domain name to multiple IP addresses, each pointing to a different server instance. It is simple and low‑cost but suffers from delayed failover, coarse traffic granularity, limited algorithms (typically round‑robin), and a restricted IP list size.
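Concretely, this amounts to publishing several A records for one name; which address a client uses depends on its resolver. A hypothetical zone fragment:

```
; one name, three addresses; resolvers rotate or pick among them
www.example.com.  60  IN  A  203.0.113.10
www.example.com.  60  IN  A  203.0.113.11
www.example.com.  60  IN  A  203.0.113.12
```

The short TTL (60 s) limits, but cannot eliminate, the delayed-failover problem: clients keep using a cached address until the record expires.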

2) Hardware Load Balancing

Dedicated hardware devices (e.g., F5 and A10) provide powerful features and high performance but are expensive and typically used by large enterprises.

Advantages: strong functionality, high performance, high stability, built‑in security (firewall, DDoS protection, SNAT).

Disadvantages: high cost, poor scalability, complex debugging and maintenance, requires specialized personnel.

3) Software Load Balancing

Software solutions run on ordinary servers. Common options are Nginx, HAProxy, and LVS.

Nginx: layer-7 load balancing for HTTP and mail traffic; newer versions also support layer-4.

HAProxy: rich layer-7 rules with high performance; used by OpenStack.

LVS (Linux Virtual Server): runs inside the kernel and balances at layer 4 (transport), giving it the highest performance among software balancers.

Software load balancers are easy to operate, cheap (free software, only server cost), and flexible (choose between layer‑4 and layer‑7).

LVS Overview

LVS, initiated by Dr. Zhang Wensong, is an open‑source project now part of the standard Linux kernel. It offers reliability, high performance, scalability, and operability at low cost.

Netfilter Basics

LVS relies on the Linux kernel netfilter framework. Netfilter provides hook points (PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING) for packet filtering, NAT, and connection tracking.
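The same hook points are visible from user space as iptables chain names, which helps when reasoning about where a rule (or LVS itself, which hooks mainly at LOCAL_IN) acts in the packet path. A sketch that assumes iptables is installed and requires root:

```shell
# The nat table has chains attached to the PREROUTING, INPUT, OUTPUT and
# POSTROUTING hooks; listing them shows what runs at each hook on this host.
iptables -t nat -L -n -v
```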

LVS Working Modes

LVS supports three primary modes, each with distinct characteristics:

DR (Direct Routing)

NAT (Network Address Translation)

Tunnel (IPIP tunneling)

An additional FullNAT mode, originated by Baidu and later adopted by Alibaba, is available in a separate open-source repository.

DR Mode

In Direct Routing, the request packet reaches the LVS director, which selects a real server (RS) and forwards the packet to it directly at layer 2. The RS sends its response straight to the client, bypassing LVS. This yields high performance: only the comparatively small request traffic traverses LVS, while the usually much larger response traffic does not.

Advantages: high response performance, client IP preserved.

Disadvantages: LVS and the real servers must be on the same layer-2 network (same broadcast domain), port mapping is impossible, and each RS needs specific ARP-related kernel settings.

Use case: scenarios demanding the highest performance where preserving client IP is essential.
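As a concrete sketch (all addresses hypothetical), a DR-mode service is configured with ipvsadm on the director, while each RS binds the VIP to loopback and suppresses ARP for it:

```shell
# On the director: virtual service on VIP 192.168.1.100, round-robin,
# forwarding via direct routing (-g = gatewaying/DR).
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g

# On each RS: hold the VIP on loopback so forwarded packets are accepted,
# but never answer ARP for it (otherwise the RS would steal VIP traffic).
ip addr add 192.168.1.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

The ARP sysctls are the "specific kernel settings" the mode requires.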

NAT Mode

In NAT mode, both request and response traffic pass through LVS. LVS rewrites the destination IP to the selected RS and later rewrites the source IP back to the virtual IP (VIP) before sending the response to the client. NAT supports port mapping and Windows servers but introduces additional load on LVS.

Advantages: supports Windows, allows port mapping.

Disadvantages: each RS must use LVS as its default gateway, and traffic in both directions passes through LVS, which makes it the throughput bottleneck.

Use case: environments where Windows servers are involved.
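A comparable NAT-mode sketch (hypothetical addresses; note the mapping from port 80 to 8080, which only NAT mode permits):

```shell
# On the director: -m = masquerading (NAT), weighted round-robin scheduler.
ipvsadm -A -t 10.0.0.100:80 -s wrr
ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.2:8080 -m -w 2
ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.3:8080 -m -w 1

# On each RS: replies must route back through the director,
# so its inside address is the default gateway.
ip route add default via 192.168.10.1
```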

Tunnel Mode

Tunnel mode adds an extra IP header (IPIP) and forwards the packet to the RS. The RS removes the tunnel header and processes the packet as if it arrived directly. Responses go from the RS to the client without passing through LVS, combining DR‑like performance with cross‑datacenter capability.

Advantages: single‑arm architecture reduces LVS load, minimal packet modification, supports cross‑datacenter deployment.

Disadvantages: requires ipip module on RS, tunnel header may cause fragmentation, fixed tunnel IP can lead to uneven hash distribution, no port mapping.

Use case: high‑performance forwarding with cross‑region requirements.
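A Tunnel-mode sketch (hypothetical addresses; -i = IPIP tunneling). The RS can live in another datacenter because the forwarded packet is just an encapsulated IP packet:

```shell
# On the director:
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 10.1.1.10 -i

# On each RS: load the ipip module, bring up the tunnel device,
# bind the VIP to it, and relax reverse-path filtering for tunl0.
modprobe ipip
ip link set tunl0 up
ip addr add 192.168.1.100/32 dev tunl0
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
```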

Understanding these principles equips readers to choose the appropriate load‑balancing strategy for their infrastructure.

Tags: Load Balancing, Linux, Networking, netfilter, LVS, DR mode, NAT mode, Tunnel mode
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on the transformation of operations work and hope to accompany you through your operations career as we grow together.
