Cloud Native · 5 min read

Implementing Load Balancing with Kubernetes Ingress: Principles and Practical Examples

This article explains the concept of load balancing, describes how Kubernetes Ingress controllers implement it, and provides step‑by‑step YAML examples for deploying Nginx Ingress, configuring basic routing, sticky sessions, custom load‑balancing algorithms, and external traffic policies to achieve flexible traffic distribution in a cloud‑native environment.


Load balancing distributes traffic across multiple servers to improve performance and availability.

Kubernetes uses Ingress resources and Ingress Controllers (e.g., Nginx, Traefik, HAProxy) to implement load balancing based on defined rules and algorithms such as round‑robin, IP hash, or least connections.

Practical example – Deploy Nginx Ingress Controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml
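Before creating any Ingress resources, it is worth confirming that the controller actually came up; a quick check, assuming the namespace and labels from the upstream manifest:

```shell
# Wait until the ingress-nginx controller pod reports Ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

# The external IP of this Service is the entry point for all Ingress traffic
kubectl get svc -n ingress-nginx ingress-nginx-controller
```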

Create a basic Ingress resource that routes traffic to frontend-service and api-service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation in networking.k8s.io/v1
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
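The Ingress only references Services by name, so those Services must exist. A minimal sketch of what frontend-service might look like, with several replicas for the controller to balance across (the labels and image here are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                 # multiple pods give the controller endpoints to balance across
  selector:
    matchLabels:
      app: frontend           # illustrative label
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.25     # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
```

By default the Nginx Ingress Controller does not proxy through the Service's virtual IP; it reads the Service's endpoints and load-balances across the pods directly.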

Enable sticky sessions by adding the nginx.ingress.kubernetes.io/affinity: "cookie" annotation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
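The affinity cookie can be tuned with further annotations, for example its name (which defaults to INGRESSCOOKIE) and lifetime; the values below are illustrative:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"    # default is INGRESSCOOKIE
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"  # cookie lifetime in seconds
```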

Note that nginx.ingress.kubernetes.io/affinity-mode does not select the load-balancing algorithm; it controls how sticky sessions behave when the backend scales. "balanced" (the default) redistributes some sessions to new pods, while "persistent" keeps every session pinned to its original pod. To change the algorithm itself, use the nginx.ingress.kubernetes.io/load-balance annotation (supported values include the default round_robin and ewma, a latency-weighted strategy):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "ewma"
spec:
  ... (same rules as above) ...

To direct external traffic only to nodes that run an ingress-controller pod (and to preserve client source IPs), set externalTrafficPolicy: Local on the controller's LoadBalancer Service. This is a field of the Service, not an Ingress annotation:

kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
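Routing and session stickiness can be verified from outside the cluster with curl; replace the address below (a hypothetical documentation IP) with your load balancer's external IP:

```shell
# Hit the frontend path; the Host header must match the Ingress rule
curl -H "Host: app.example.com" http://203.0.113.10/

# Inspect response headers: with cookie affinity enabled, the first
# response carries a Set-Cookie header that pins later requests to one pod
curl -I -H "Host: app.example.com" http://203.0.113.10/
```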

By configuring Ingress rules and selecting appropriate algorithms or policies, developers can achieve flexible traffic distribution, session persistence, and integration with external load balancers, thereby enhancing application performance and reliability in Kubernetes environments.

Tags: Cloud Native, Kubernetes, Load Balancing, Nginx, YAML, Ingress, Sticky Sessions
Written by Test Development Learning Exchange