
Choosing the Right Ingress Controller: Nginx, Traefik, or Envoy?

This guide provides a deep technical comparison of Nginx Ingress Controller, Traefik, and Envoy Proxy, covering architecture, configuration, performance, feature sets, deployment patterns, security hardening, monitoring, and troubleshooting to help operators select the best solution for their Kubernetes clusters.

Ops Community

Ingress basics

Kubernetes assigns each Pod a unique IP; Services provide stable virtual IPs and load‑balancing. Ingress adds layer‑7 (HTTP/HTTPS) routing, while ClusterIP, NodePort, and LoadBalancer handle lower‑level exposure.
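
The Ingress examples later in this guide all route to ordinary ClusterIP Services; a minimal sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tea-svc          # backend name referenced by later Ingress examples
spec:
  type: ClusterIP        # stable in-cluster virtual IP; Ingress adds L7 routing on top
  selector:
    app: tea             # selects the Pods backing this Service
  ports:
  - port: 80             # port the Ingress backend targets
    targetPort: 8080     # assumed container port
```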

Nginx Ingress Controller

Architecture: Runs as a Deployment or DaemonSet; supports the Horizontal Pod Autoscaler (HPA) for scaling.

Configuration: Uses standard Ingress resources plus Nginx‑specific annotations (e.g., nginx.ingress.kubernetes.io/limit-rps, nginx.ingress.kubernetes.io/rewrite-target) and a global ConfigMap for defaults.

Features: TLS termination, rate limiting, canary releases, health checks, Prometheus metrics.

Performance: Benchmarks commonly report upwards of 50k QPS with ~5 ms P99 latency, though results depend heavily on hardware, configuration, and workload.

Typical Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:                      # required for the $(POD_NAMESPACE) substitution above
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
          initialDelaySeconds: 10
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
          periodSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
          limits:
            cpu: 1
            memory: 1Gi
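
The --controller-class and --ingress-class flags above pair with a cluster‑scoped IngressClass object that Ingress resources reference via ingressClassName; a minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                        # referenced by ingressClassName: nginx
spec:
  controller: k8s.io/ingress-nginx   # must match the --controller-class flag
```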

Configuration examples

Ingress with annotations for TLS redirect, rate limiting, and body size:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Exact
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80

Global ConfigMap for default Nginx settings (e.g., body size, timeouts, gzip):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"
  proxy-connect-timeout: "30"
  proxy-read-timeout: "60"
  use-gzip: "true"
  gzip-level: "6"
  gzip-types: "application/json application/javascript application/xml text/css text/html text/javascript"

TLS secret creation (self‑signed example):

# Generate key and cert
openssl genrsa -out tls.key 2048
openssl req -new -x509 -key tls.key -out tls.crt -days 365 -subj "/CN=example.com/O=MyOrg"
# Create Kubernetes secret
kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
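
In production, cert-manager can issue and renew the same Secret automatically instead of a self‑signed certificate; a sketch assuming a ClusterIssuer named letsencrypt-prod already exists:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
spec:
  secretName: example-tls        # Secret later consumed by the Ingress tls section
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod       # hypothetical ClusterIssuer
    kind: ClusterIssuer
```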

Canary release using annotations:

# Main service Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: main-service
            port:
              number: 80
---
# Canary Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-service
            port:
              number: 80

Rate limiting via annotations or ConfigMap:

# Ingress‑level rate limit
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limit-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "50"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

Traefik Ingress Controller

Architecture: Cloud‑native reverse proxy; dynamic configuration applies without a process reload. Components: Provider (source of configuration), Router (matches requests), Middleware (modifies requests), Service (forwards to backends).

Deployment: A typical Deployment runs two replicas, exposes ports 80/443, and enables the Kubernetes Ingress and CRD providers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: ingress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v3.0.4
        args:
        - --api.insecure
        - --accesslog
        - --entrypoints.http.address=:80
        - --entrypoints.https.address=:443
        - --providers.kubernetesingress
        - --providers.kubernetescrd
        - --log.level=INFO
        - --metrics.prometheus
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi

Traefik uses CRDs for richer routing. Example IngressRoute:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-ingressroute
  namespace: default
spec:
  entryPoints:
  - web
  - websecure
  routes:
  - match: Host(`demo.example.com`) && PathPrefix(`/api`)
    kind: Rule
    services:
    - name: api-service
      port: 80
    middlewares:
    - name: strip-api-prefix
    - name: rate-limit
  - match: Host(`demo.example.com`) && PathPrefix(`/`)
    kind: Rule
    services:
    - name: frontend-service
      port: 80
  tls:                          # tls is a spec-level field, not part of a route
    secretName: demo-tls-cert

Common Middleware examples:

Basic authentication (uses a Secret with htpasswd format).

IP whitelist.

Rate limiting (average, burst, period).

Retry (attempts, interval).

Redirect to HTTPS.

StripPrefix.

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: basic-auth
spec:
  basicAuth:
    secret: basic-auth-secret
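
The strip-api-prefix and rate-limit names referenced by the IngressRoute above correspond to Middlewares like the following (a sketch using those assumed names):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-api-prefix
spec:
  stripPrefix:
    prefixes:
    - /api            # remove /api before forwarding to the backend
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 100      # sustained requests per second
    burst: 50         # short-term burst allowance
```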

Additional providers (e.g., the Consul catalog) are enabled in Traefik's static configuration: CLI flags or a configuration file, commonly mounted from a ConfigMap.
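
A sketch of such a static configuration file enabling the Consul catalog provider (the Consul address is an assumption):

```yaml
# traefik.yml static configuration, typically mounted from a ConfigMap
providers:
  consulCatalog:
    endpoint:
      address: consul-server.consul.svc:8500   # hypothetical Consul address
    exposedByDefault: false                    # only route explicitly tagged services
```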

Envoy (via Contour)

Architecture: High‑performance C++ edge proxy using the xDS API for hot configuration updates. Core concepts: Listener, Route, Cluster, Endpoint, Filter, HealthCheck.
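
To make those concepts concrete, a minimal static Envoy bootstrap wires a Listener through a Route to a Cluster. Contour normally generates the equivalent configuration over xDS; names and the backend address here are illustrative:

```yaml
static_resources:
  listeners:
  - name: http                               # Listener: accepts downstream connections
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:                      # Route: maps requests to clusters
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: app }
          http_filters:                      # filter chain ends with the router
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app                                # Cluster: a named group of Endpoints
    type: STRICT_DNS
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: app.default.svc.cluster.local, port_value: 80 }
```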

Deployment: Usually deployed as a DaemonSet together with Contour, which translates Kubernetes resources into Envoy configuration. (The simplified example below runs both containers in one Pod; production installs typically run Contour as a separate Deployment and Envoy as the DaemonSet.)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: contour
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      containers:
      - name: contour
        image: ghcr.io/projectcontour/contour:v1.28.2
        command:
        - contour
        - serve
        - --xds-address=0.0.0.0
        - --xds-port=8001
        - --envoy-http-port=8080
        - --envoy-https-port=8443
        - --config-path=/config/contour.yaml
        ports:
        - name: xds
          containerPort: 8001
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8001
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8001
          periodSeconds: 5
      - name: envoy
        image: docker.io/envoyproxy/envoy:v1.29.2  # Envoy proxy image, not the Contour image
        command:
        - envoy
        - -c
        - /config/envoy.json
        - --service-cluster
        - projectcontour
        - --service-node
        - $(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443

Contour extends Ingress with HTTPProxy CRD for advanced features such as CORS, health checks, weighted load balancing, retry policies, and rate limiting.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: demo.example.com
    corsPolicy:
      allowCredentials: true
      allowHeaders:
      - X-Custom-Header
      allowMethods:
      - GET
      - POST
      - PUT
      - DELETE
      allowOrigin:
      - "https://allowed.example.com"
      maxAge: "24h"
  routes:
  - conditions:
    - prefix: /api
    services:
    - name: api-service
      port: 80
    healthCheckPolicy:
      path: /health
      intervalSeconds: 10
      timeoutSeconds: 5
    loadBalancerPolicy:
      strategy: WeightedLeastRequest
    retryPolicy:
      retryOn:
      - gateway-error
      - connect-failure
      - reset
      count: 3
      perTryTimeout: 10s
  - conditions:
    - prefix: /
    services:
    - name: frontend-service
      port: 80
    rateLimitPolicy:
      global:                  # requires an external rate-limit service configured in Contour
        descriptors:
        - entries:
          - remoteAddress: {}  # limit per client IP; limits themselves live in the RLS config

TCP passthrough uses a spec‑level tcpproxy stanza on a dedicated HTTPProxy whose virtual host enables TLS passthrough (it cannot be mixed with HTTP routes):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: tcp-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: tcp.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
    - name: tcp-service
      port: 9000
      weight: 1
Contour configures Envoy rate limiting through a rateLimitPolicy on HTTPProxy virtual hosts or routes: local limits are enforced by Envoy itself, while global limits are delegated to an external rate‑limit service configured in Contour. A local example:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: ratelimit-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: api.example.com
  routes:
  - services:
    - name: api-service
      port: 80
    rateLimitPolicy:
      local:
        requests: 100
        unit: second
        burst: 20
Active health checks and load‑balancing strategies (RoundRobin, WeightedLeastRequest, Random, Cookie, RequestHash) are configurable per route in HTTPProxy.
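
For example, switching a route to cookie‑based session affinity is a one‑line strategy change (service name is illustrative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: sticky-proxy
spec:
  virtualhost:
    fqdn: sticky.example.com
  routes:
  - services:
    - name: web-service
      port: 80
    loadBalancerPolicy:
      strategy: Cookie   # Envoy injects a session cookie to pin clients to an endpoint
```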

Comparative overview

Performance: Indicative benchmark figures: Nginx around 50k QPS at ~5 ms P99, Traefik ~35k QPS at ~8 ms, Envoy (via Contour) ~40k QPS at ~6 ms, with typical memory footprints of roughly 150 MiB, 200 MiB, and 300 MiB respectively. Such numbers vary widely with hardware, configuration, and workload; benchmark in your own environment before deciding.

Feature set: All three support L7/L4 routing, WebSocket, gRPC, TCP/UDP, and TLS termination; automatic certificate management comes from cert‑manager for Nginx/Envoy and is built into Traefik (ACME). Rate limiting: Nginx via annotations/ConfigMap, Traefik via Middleware, Envoy (Contour) via rateLimitPolicy, locally or through an external rate‑limit service. Canary releases: Nginx via canary annotations, Envoy via weighted HTTPProxy services, Traefik via a weighted TraefikService. Circuit breaking: native only in Envoy.

Configuration complexity: Nginx is familiar to Nginx administrators; Traefik's declarative CRDs simplify configuration; Envoy offers the richest feature set but the steepest learning curve, which Contour mitigates.

Ecosystem: Nginx has mature enterprise support; Traefik an active community and a commercial edition; Envoy is a graduated CNCF project and the data plane of service meshes such as Istio (Linkerd, by contrast, uses its own Rust proxy).

Selection guidance

Small‑to‑medium clusters (< 1000 pods): Nginx or Traefik for simplicity and mature tooling.

High‑traffic, performance‑critical workloads: Nginx for raw speed; Envoy when fine‑grained traffic control is required.

Complex traffic management (circuit breaking, fault injection, advanced retries): Envoy (Contour) provides the most capabilities.

Service‑mesh integration: Envoy is the de‑facto data plane.

Best‑practice deployment patterns

Deploy Ingress Controllers as DaemonSets for low‑latency local routing.

Use HorizontalPodAutoscaler based on CPU/memory utilization.

Expose Prometheus metrics via a Service and ServiceMonitor; recommended Grafana dashboards: Nginx Ingress (ID 9614), Traefik (ID 10162), Envoy (ID 13261).
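
The autoscaling recommendation above could look like the following for the Nginx controller Deployment (a sketch; note that HPAs target Deployments, not DaemonSets):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-hpa
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller   # the Deployment from the earlier example
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70       # scale out above 70% average CPU
```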

Security hardening

Enforce TLS with HSTS (configurable cluster‑wide via the controller ConfigMap's hsts, hsts-max-age, and hsts-include-subdomains keys, or per host via a server snippet), add security headers via the nginx.ingress.kubernetes.io/server-snippet annotation, and store certificates in Secrets.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
      add_header X-Frame-Options "SAMEORIGIN" always;
      add_header X-Content-Type-Options "nosniff" always;
      add_header X-XSS-Protection "1; mode=block" always;
      add_header Referrer-Policy "no-referrer-when-downgrade" always;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80

Troubleshooting checklist

Verify Ingress resource exists and has an address.

Check Service and Endpoints readiness.

Inspect controller logs for errors.

Validate TLS Secrets and Cert‑Manager status.

Confirm rate‑limit annotations or ConfigMap values.

Use kubectl top pods and controller status endpoints to diagnose performance bottlenecks.

Analyze access logs with a custom log format to spot slow requests.
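
The checklist above maps to a handful of commands (resource names are placeholders):

```shell
kubectl get ingress -A                            # resource exists and has an ADDRESS
kubectl get endpoints api-service                 # backend endpoints are populated
kubectl logs -n ingress-nginx deploy/nginx-ingress-controller --tail=100
kubectl describe secret example-tls               # TLS secret present and well-formed
kubectl top pods -n ingress-nginx                 # CPU/memory pressure on the controller
```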

Tags: Monitoring, Kubernetes, security, service mesh, Ingress, Envoy, Traefik
Written by Ops Community, a leading IT operations community where professionals share and grow together.