Understanding Linux SO_REUSEPORT: Enabling Multiple Processes to Share the Same Port
This article explains the Linux SO_REUSEPORT feature introduced in kernel 3.9, how it allows multiple processes to bind and listen on the same port, the kernel's load‑balancing mechanism, and provides practical C examples and verification steps for developers.
Hello everyone, I'm Fei! If a process on your server is already listening on port 6000, can another process also bind and listen on the same port? Many will answer no because they have encountered the "Address already in use" error, which occurs when a port is occupied.
However, on Linux kernel 3.9 and later, multiple processes can bind and listen on the same port by using the SO_REUSEPORT feature.
This article describes why REUSEPORT was created, how the kernel selects a process when several share a port, and how it can improve server performance.
1. Problems REUSEPORT Solves
Historically, each service (e.g., Nginx on 80/8080, MySQL on 3306) had a dedicated listening port. As web traffic and mobile devices grew after 2010, the single‑port model became a bottleneck for high‑concurrency services.
Two classic multi‑process models were used:
A dispatcher process accepts new connections and forwards them to worker processes, incurring extra context switches and creating a dispatcher bottleneck.
Multiple workers share a single listening socket (as Nginx does), but they must use a lock to ensure only one worker accepts a connection, leading to lock contention.
2. Birth of REUSEPORT
Linux 3.9 (2013) introduced REUSEPORT to let several user‑space processes bind to the same port and let the kernel perform load balancing.
2.1 Setting SO_REUSEPORT
In C you enable the feature with:
int optval = 1;
setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &optval, sizeof(optval));

When this option is set, the kernel records it by setting the socket's sk_reuseport field to 1.
//file: net/core/sock.c
int sock_setsockopt(struct socket *sock, int level, int optname,
		    char __user *optval, unsigned int optlen)
{
	...
	switch (optname) {
	...
	case SO_REUSEPORT:
		sk->sk_reuseport = valbool;
	...
	}
}

2.2 Bind Handling
During inet_bind, the kernel searches a hash table of bound sockets. If a matching port is found in the same network namespace, it checks whether both the existing socket and the new one have REUSEPORT enabled. If so, the bind succeeds.
//file: net/ipv4/inet_connection_sock.c
int inet_csk_get_port(struct sock *sk, unsigned short snum)
{
	...
	if (net_eq(ib_net(tb), net) && tb->port == snum)
		goto tb_found;
	...
	if (((tb->fastreuse > 0 && sk->sk_reuse && sk->sk_state != TCP_LISTEN) ||
	     (tb->fastreuseport > 0 && sk->sk_reuseport && uid_eq(tb->fastuid, uid))) &&
	    smallest_size == -1) {
		goto success;
	} else {
		// bind conflict
		...
	}
}

The uid_eq check ensures that only sockets owned by the same user can share the port, preventing cross-user traffic hijacking.
2.3 Accepting New Connections
When several sockets listen on the same port, the kernel uses a hash‑plus‑score algorithm to pick the best socket. It iterates over all listening sockets in the hash bucket, computes a score based on address match, family, and other criteria, and selects the highest‑scoring socket. If scores tie, a pseudo‑random function distributes connections evenly.
//file: net/ipv4/inet_hashtables.c
struct sock *__inet_lookup_listener(struct net *net,
				    struct inet_hashinfo *hashinfo,
				    const __be32 saddr, __be16 sport,
				    const __be32 daddr, const unsigned short hnum,
				    const int dif)
{
	...
	sk_nulls_for_each_rcu(sk, node, &ilb->head) {
		score = compute_score(sk, net, hnum, daddr, dif);
		if (score > hiscore) {
			result = sk;
			hiscore = score;
			reuseport = sk->sk_reuseport;
			if (reuseport) {
				phash = inet_ehashfn(net, daddr, hnum,
						     saddr, sport);
				matches = 1;
			}
		} else if (score == hiscore && reuseport) {
			matches++;
			if (((u64)phash * matches) >> 32 == 0)
				result = sk;
			phash = next_pseudo_random32(phash);
		}
	}
	return result;
}

The scoring favors sockets bound to a specific IP that matches the destination address (score 4) over wildcard bindings (score 2), ensuring predictable routing when multiple IPs are present.
3. Hands‑On Practice
A simple C server that enables SO_REUSEPORT is provided in the linked repository. Running several instances on the same port demonstrates successful binding and kernel‑level load balancing.
$ ./test-server 0.0.0.0 6000
Start server on 0.0.0.0:6000 successed, pid is 23179
$ ./test-server 0.0.0.0 6000
Start server on 0.0.0.0:6000 successed, pid is 23177
...

When a client makes multiple connections, the accept counts are roughly evenly distributed among the processes, confirming the kernel's load balancing.
Server 0.0.0.0:6000 (23179) accept success:15
Server 0.0.0.0:6000 (23177) accept success:25
Server 0.0.0.0:6000 (23185) accept success:20
...

Priority tests with two IP addresses show that a process bound to a specific IP receives the connections targeting that IP, while a wildcard-bound process handles the rest.
A process: ./test-server 10.0.0.2 6000
B process: ./test-server 0.0.0.0 6000
$ telnet 10.0.0.2 6000 → hits A
$ telnet 10.0.0.3 6000 → hits B

Cross-user security is verified as well: a port bound by an ordinary user cannot be shared even by root, because the uid_eq check requires both sockets to belong to the same UID.
$ ./test-server 0.0.0.0 6000 # run as user A
Start server on 0.0.0.0:6000 successed, pid is 30914
# switch to root
# ./test-server 0.0.0.0 6000
Server 30481 Error : Bind Failed!

4. Summary
Before Linux 3.9, a port could be bound by only one socket, limiting scalability of multi‑process servers. The REUSEPORT feature introduced in kernel 3.9 allows multiple processes to bind distinct sockets to the same port, with the kernel performing random load balancing to avoid lock contention.
Understanding and enabling REUSEPORT can significantly improve the performance of high‑concurrency backend services, and it is supported by modern Nginx versions via the simple listen 80 reuseport; directive.
Refining Core Development Skills
Fei has over 10 years of development experience at Tencent and Sogou. Through this account, he shares his deep insights on performance.