
How to Double Nginx‑Ingress Performance in Kubernetes: IPVS, Kernel Tweaks, and Keep‑Alive Optimizations

This guide explains why routing traffic through an Nginx Ingress controller can halve Kubernetes service QPS and provides step‑by‑step instructions—including switching to IPVS, tuning kernel and file‑descriptor limits, and configuring keep‑alive settings—to restore and even exceed the original performance.


Problem Overview

When a workload is exposed via a Kubernetes NodePort service, load testing shows a high QPS (over 100k). However, exposing the same service through an nginx‑ingress‑controller reduces QPS to about 50k, indicating a significant performance penalty introduced by the Ingress layer.
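The gap is easy to reproduce with a simple benchmark. A hedged sketch using ApacheBench, where the node IP, NodePort 30080, and the Host header value are placeholders for your own environment:

```shell
# Baseline: hit the NodePort directly (node IP and port are examples)
ab -n 100000 -c 100 http://192.168.1.10:30080/

# Same workload through the Ingress controller; the Host header must match
# the Ingress rule (example.com is a placeholder)
ab -n 100000 -c 100 -H 'Host: example.com' http://192.168.1.10/
```

Comparing the "Requests per second" lines of the two runs shows the penalty described above.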

Ingress Request Flow

The Nginx Ingress controller watches Ingress resources, generates corresponding Nginx virtual‑host and reverse‑proxy configurations, and then Nginx processes incoming HTTP requests as follows:

client → nginx → upstream (Kubernetes Service) → pods

Because Nginx must parse the HTTP request at layer 7 before forwarding it, this adds overhead compared to a direct service request.
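Conceptually, the controller renders something like the following server block for each Ingress rule. This is a simplified sketch, not the controller's literal output (recent ingress-nginx versions route through a Lua balancer rather than a static upstream list):

```
server {
    listen 80;
    server_name example.com;                  # host from the Ingress rule

    location / {
        proxy_set_header Host $host;
        proxy_pass http://upstream_balancer;  # resolves to the Service's pod endpoints
    }
}
```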

Optimization Steps

1) Switch kube‑proxy to IPVS mode

Install the required packages and load the IPVS kernel modules:

# yum install -y ipset ipvsadm
# cat <<'EOF' > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
# On kernels 4.19+, replace nf_conntrack_ipv4 with nf_conntrack.
ipvs_modules=(ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack_ipv4)
for kernel_module in "${ipvs_modules[@]}"; do
  /sbin/modinfo -F filename "${kernel_module}" >/dev/null 2>&1 && /sbin/modprobe "${kernel_module}"
done
EOF
# chmod +x /etc/sysconfig/modules/ipvs.modules
# bash /etc/sysconfig/modules/ipvs.modules

Enable IPVS in the kube‑proxy ConfigMap:

# kubectl -n kube-system edit cm kube-proxy
# set mode: "ipvs"
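kube-proxy only reads its ConfigMap at startup, so restart its pods and confirm that IPVS rules are being programmed. A sketch, assuming the default kubeadm label `k8s-app=kube-proxy`:

```shell
# Restart kube-proxy so it picks up mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Verify: a virtual server should now be listed for each Service
ipvsadm -Ln

# kube-proxy also logs which proxier it selected
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
```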

2) Tune kernel parameters

Append the following settings to /etc/sysctl.conf and apply them:

# cat <<'EOF' >> /etc/sysctl.conf
net.core.somaxconn = 655350
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_tw_buckets = 5000
net.nf_conntrack_max = 2097152
net.netfilter.nf_conntrack_max = 2097152
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 15
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_established = 1200
EOF

# sysctl -p
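You can spot-check that the values took effect and watch connection-tracking usage during a load test (a sketch; key names match the settings above):

```shell
# Confirm a few of the values were applied
sysctl net.core.somaxconn net.netfilter.nf_conntrack_max

# Watch conntrack table usage while benchmarking; if the count approaches
# nf_conntrack_max, new connections are dropped ("nf_conntrack: table full"
# appears in dmesg)
sysctl net.netfilter.nf_conntrack_count
```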

3) Increase file‑descriptor limits

Raise the limit for the current shell and persist it for all users, including root:

ulimit -n 655350   # current shell only; the limits below persist across logins
# Append to /etc/security/limits.conf:
* hard nofile 655350
* soft nofile 655350
root hard nofile 655350
root soft nofile 655350
# Ensure PAM reads the limits. On Debian/Ubuntu:
echo 'session required pam_limits.so' >> /etc/pam.d/common-session
# On RHEL/CentOS, pam_limits is normally already enabled via /etc/pam.d/system-auth.
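Because limits.conf only affects new PAM sessions, verify in a fresh login and against the running processes that matter. A sketch; the pgrep pattern assumes a standard nginx worker process name:

```shell
# In a new login shell
ulimit -n

# Effective limits of a running process (e.g. an nginx worker)
cat /proc/"$(pgrep -f 'nginx: worker' | head -1)"/limits | grep 'open files'
```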

4) Optimize Nginx Ingress controller configuration

Edit the nginx‑configuration ConfigMap in the kube‑system namespace and set keep‑alive related parameters:

# kubectl -n kube-system edit configmap nginx-configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  keep-alive: "60"
  keep-alive-requests: "100"
  upstream-keepalive-connections: "10000"
  upstream-keepalive-requests: "100"
  upstream-keepalive-timeout: "60"
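To confirm the ConfigMap was picked up, grep the rendered configuration inside the controller pod. The label selector and pod name below are assumptions about your deployment:

```shell
# Find the controller pod, then inspect the generated nginx.conf
kubectl -n kube-system get pods -l app.kubernetes.io/name=ingress-nginx
kubectl -n kube-system exec <controller-pod> -- \
  grep -E 'keepalive_timeout|keepalive_requests|keepalive ' /etc/nginx/nginx.conf
```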

After applying these changes, load‑testing shows a substantial QPS increase, often matching or exceeding the original NodePort performance.

Why Keep‑Alive Matters

HTTP keep‑alive (persistent connections) allows a single TCP connection to serve multiple HTTP requests, eliminating the overhead of establishing and tearing down connections for each request. Benefits include fewer TCP handshakes, reduced CPU/memory usage on hosts and routers, lower network congestion, and more graceful error handling.

In the original benchmark, the ab tool did not use the -k flag, so the client‑to‑nginx path did not benefit from keep‑alive. However, the Ingress controller’s upstream keep‑alive settings enabled connection reuse between Nginx and the Kubernetes Service, dramatically reducing latency and improving throughput.
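Client-side keep-alive is worth benchmarking separately; with ab it is enabled by the -k flag (host and URL are placeholders):

```shell
# Without -k: a new TCP connection per request
ab -n 100000 -c 100 -H 'Host: example.com' http://192.168.1.10/

# With -k: connections are reused, removing per-request handshake overhead
ab -n 100000 -c 100 -k -H 'Host: example.com' http://192.168.1.10/
```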

Conclusion

By switching kube‑proxy to IPVS, tuning kernel and file‑descriptor limits, and enabling aggressive keep‑alive settings in the Nginx Ingress controller, you can recover the performance loss caused by the Ingress layer and often achieve double the original QPS.

Written by

Full-Stack DevOps & Kubernetes

Focused on sharing DevOps, Kubernetes, Linux, Docker, Istio, microservices, Spring Cloud, Python, Go, databases, Nginx, Tomcat, cloud computing, and related technologies.
