
Understanding Kubernetes Ingress NGINX: Architecture, Configuration, and Reload Mechanisms

This article explains the purpose and architecture of Kubernetes Ingress NGINX for Layer 7 load balancing, covering configuration steps, TLS setup, validation, technical selection, controller operation, the reload process, high-availability design, customization options, and the future roadmap.

360 Smart Cloud

Overview – When using Kubernetes (K8s) for container orchestration, direct Pod IP access is unsuitable for load balancing and high availability, so K8s provides Layer 4 (Service) and Layer 7 (Ingress) solutions; this article focuses on the Layer 7 Ingress implementation.

Functionality – Access the Ingress creation UI via Project → Application List → Traffic Access → Ingress; define the domain, path-matching rule, and backend Service; optionally add annotations for advanced needs; and configure HTTPS/TLS with a kubernetes.io/tls Secret containing the certificate and private key.
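A minimal sketch of the TLS setup described above, using the standard kubernetes.io/tls Secret type and the Ingress tls section (the names foo-bar-tls, ingress-tls, and service1 are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: foo-bar-tls        # illustrative name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls        # illustrative name
spec:
  tls:
  - hosts:
    - "foo.bar.com"
    secretName: foo-bar-tls   # references the Secret above
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service1
            port:
              number: 80
```

The controller reads the referenced Secret, writes the certificate and key to disk, and serves the listed hosts over HTTPS.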

Technical Selection – Compares three load-balancing options: NodePort (simple but limited to a restricted port range), LoadBalancer (flexible but consumes an external IP per Service), and Ingress (Layer 7 routing, TLS termination, rate limiting, etc.). The chosen solution is Ingress NGINX.
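For contrast with the Ingress approach, a minimal NodePort Service looks like this (names and ports are illustrative); its main limitation is the restricted nodePort range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service1-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: service1           # illustrative label
  ports:
  - port: 80                # cluster-internal port
    targetPort: 8080        # container port
    nodePort: 30080         # must fall in the NodePort range (default 30000-32767)
```

Every node then exposes port 30080 for this one Service, which is why NodePort does not scale well to many services or to port 80/443 traffic.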

Ingress Resource Example

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80

This YAML shows how host, path, and backend service are specified.

Ingress‑NGINX Architecture – Consists of the Ingress resource, IngressController (adapter), and the NGINX gateway. The controller watches K8s resources, generates NGINX configuration files, writes TLS secrets, and reloads NGINX when needed.

Controller Model – The controller pod runs three processes: IC process (generates config), NGINX master (manages workers), and NGINX workers (handle client traffic). Interaction with Prometheus, K8s API, kubelet, and file I/O is detailed.

Ingress Creation Flow – User creates Ingress → IC detects change → generates new config → triggers NGINX reload (graceful reload using HUP signal, spawning new workers while old workers finish existing requests).

Reload Scenarios – Adding/removing Ingress, TLS sections, paths, or deleting resources triggers reload; endpoint‑only changes can be handled via Lua without reload.

NGINX Reload Mechanics – Master parses new config, forks new workers, sends QUIT to old workers, and ensures zero‑downtime traffic handling.
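The fork-new-workers-then-drain-old-workers choreography can be sketched with a toy script (this is not nginx itself; SIGTERM stands in for the SIGQUIT nginx actually sends, because non-interactive shells ignore QUIT in background jobs):

```shell
#!/usr/bin/env bash
# Toy sketch of graceful worker replacement: the old "worker" drains
# before exiting while the new one keeps serving.
worker() {
  trap 'echo "worker $1: draining in-flight requests, then exiting"; exit 0' TERM
  while true; do sleep 0.1; done
}

worker old & old_pid=$!
sleep 0.2
worker new & new_pid=$!   # on reload, the master forks new workers first
kill -TERM "$old_pid"     # then asks old workers to shut down gracefully
wait "$old_pid"
echo "old worker drained; new worker (pid $new_pid) still serving"
kill -9 "$new_pid" 2>/dev/null   # force-stop the toy worker to end the demo
```

The key property this models is zero downtime: new workers accept connections before old workers are told to stop, and old workers exit only after finishing in-flight requests.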

High Availability – Deploy IC as a DaemonSet on master nodes, fronted by LVS for load balancing across multiple controller replicas.
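A sketch of the DaemonSet deployment described above, pinning the controller to master nodes (names, labels, and the image tag are illustrative; a real manifest also carries RBAC, probes, and controller arguments, and newer clusters label these nodes control-plane rather than master):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true               # expose 80/443 directly on the node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # version illustrative
        ports:
        - containerPort: 80
        - containerPort: 443
```

LVS then balances client traffic across the node IPs of these controller replicas.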

Customization – Advanced needs can be met by modifying the nginx.tmpl ConfigMap/template, though this requires careful compatibility management.
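One way to apply a customized template, per the ingress-nginx custom-template documentation, is to mount a ConfigMap over the controller's template path (/etc/nginx/template); the ConfigMap name here is illustrative, and the template body is elided:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-template        # illustrative name
  namespace: ingress-nginx
data:
  nginx.tmpl: |
    # ... customized copy of the upstream nginx.tmpl goes here ...
---
# Fragment to add to the controller pod spec:
#   volumeMounts:
#   - name: nginx-template-volume
#     mountPath: /etc/nginx/template
#     readOnly: true
#   volumes:
#   - name: nginx-template-volume
#     configMap:
#       name: nginx-template
```

Because the template must track the controller's internal data model, every controller upgrade requires re-validating the customized copy against the new upstream nginx.tmpl.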

Future Plans – Separate Ingress‑NGINX master from IC, deploy multiple IC replicas, and monitor the evolution of the Gateway API as a potential successor to Ingress.

Tags: cloud native, Kubernetes, load balancing, NGINX, TLS, Ingress
Written by 360 Smart Cloud

Official service account of 360 Smart Cloud, dedicated to building a high-quality, secure, highly available, convenient, and stable one-stop cloud service platform.