
Performance Tuning of Nginx Ingress Controller on Tencent Cloud TKE

This article explains how to optimize Nginx Ingress Controller on Tencent Cloud TKE for high‑concurrency workloads by adjusting kernel parameters, sysctl settings, and Nginx configuration values, and provides concrete initContainer and ConfigMap examples to apply these optimizations.


Nginx Ingress Controller implements the Kubernetes Ingress API on top of Nginx, a high‑performance gateway. Without proper tuning, however, it cannot reach its full potential. This article builds on the best‑practice deployment guide by detailing kernel‑level and Nginx‑level optimizations for high‑traffic scenarios.

Kernel parameter tuning includes raising the maximum connection backlog (net.core.somaxconn), widening the source port range (net.ipv4.ip_local_port_range), enabling TIME_WAIT socket reuse (net.ipv4.tcp_tw_reuse), and raising the system‑wide file descriptor limit (fs.file-max). On TKE the default somaxconn is 4096; the article recommends raising it to 65535, setting the port range to 1024–65535, and file-max to 1,048,576.
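Before changing anything, it helps to inspect the current values. The procfs paths below are standard Linux locations for these parameters; run the commands on a TKE node (or inside the controller pod for the network‑namespaced ones):

```shell
# Read the current values of the kernel parameters discussed above.
cat /proc/sys/net/core/somaxconn           # accept-queue backlog limit
cat /proc/sys/net/ipv4/ip_local_port_range # usable source-port range
cat /proc/sys/net/ipv4/tcp_tw_reuse        # 1 = reuse TIME_WAIT sockets
cat /proc/sys/fs/file-max                  # system-wide file descriptor limit
```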

Example of adjusting the Nginx listen backlog directly:

server {
    listen  80  backlog=1024;
    ...
}

The controller automatically reads somaxconn and applies it as the backlog, so setting the kernel value is sufficient.

Kernel settings can be applied by adding a privileged initContainer that runs the sysctl commands before the Nginx container starts:

initContainers:
- name: setsysctl
  image: busybox
  securityContext:
    privileged: true
  command:
  - sh
  - -c
  - |
    sysctl -w net.core.somaxconn=65535
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    sysctl -w net.ipv4.tcp_tw_reuse=1
    sysctl -w fs.file-max=1048576
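After the pod restarts, the effect of the initContainer can be verified by reading the values back from inside the running controller pod. The pod name and namespace below are placeholders; substitute those of your deployment:

```shell
# Verify the tuned kernel values inside the running ingress-nginx pod.
# Pod name and namespace are examples; adjust for your cluster.
kubectl exec -n kube-system nginx-ingress-controller-xxxx -- \
  sysctl net.core.somaxconn net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse
```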

Global Nginx configuration tuning is performed through a ConfigMap watched by the Ingress controller. Key parameters include:

# nginx ingress performance optimization
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  keep-alive-requests: "10000"
  upstream-keepalive-connections: "200"
  max-worker-connections: "65536"

These settings raise the maximum requests per keep‑alive connection, increase the number of idle upstream connections, and allow each worker to handle more simultaneous connections, all of which help mitigate TIME_WAIT buildup and improve throughput under heavy load.
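Assuming the controller reads its configuration from the ConfigMap named nginx-ingress-controller (as above), the settings can be applied with kubectl and checked against the rendered nginx.conf. The filename, pod name, and namespace here are illustrative:

```shell
# Apply the tuning ConfigMap; the controller watches it and reloads Nginx.
kubectl apply -f nginx-ingress-configmap.yaml

# Confirm the rendered config picked up the new values (pod name is an example).
kubectl exec -n kube-system nginx-ingress-controller-xxxx -- \
  grep -E 'worker_connections|keepalive' /etc/nginx/nginx.conf
```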

The article concludes that combining kernel‑level sysctl adjustments with Nginx configuration tweaks yields a robust setup for high‑concurrency services on TKE.

References

Nginx Ingress on TKE best practice: https://mp.weixin.qq.com/s/NAwz4dlsPuJnqfWYBHkfGg

Nginx Ingress configuration guide: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

Tuning NGINX for Performance: https://www.nginx.com/blog/tuning-nginx/

ngx_http_upstream_module documentation: http://nginx.org/en/docs/http/ngx_http_upstream_module.html

Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
