
Kubernetes Traffic Management and the Emerging Role of Gateway API

This article reviews Kubernetes traffic management, contrasting north‑south and east‑west flows, explains the limitations of Ingress, introduces the newly GA Gateway API, and demonstrates how its role‑based resources and richer expressiveness can become the future standard for cloud‑native networking.

Cloud Native Technology Community

KubeCon NA 2023 highlighted many developments, most notably the graduation of Gateway API to v1.0 GA, which promises to become the next‑generation traffic management solution for Kubernetes.

Kubernetes traffic management is split into two domains: north‑south traffic (external to internal) and east‑west traffic (internal service‑to‑service communication).

North‑south traffic traditionally relies on Service types such as NodePort or LoadBalancer. While simple, these approaches consume a node port or an external IP per service and lack advanced features such as host‑based routing, authentication, and request rewriting. To address this, Ingress was introduced in Kubernetes v1.1 as a deliberately lightweight API that supports only host, path, service, port, and protocol. The core Ingress structs from that era are shown below:

type Ingress struct {
    unversioned.TypeMeta `json:",inline"`
    // Standard object's metadata.
    // More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata
    v1.ObjectMeta `json:"metadata,omitempty"`

    // Spec is the desired state of the Ingress.
    // More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status
    Spec IngressSpec `json:"spec,omitempty"`

    // Status is the current state of the Ingress.
    // More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status
    Status IngressStatus `json:"status,omitempty"`
}

type IngressSpec struct {
    // TODO: Add the ability to specify load‑balancer IP just like what Service has already done?
    // A list of rules used to configure the Ingress.
    // http://<host>:<port>/<path>?<searchpart> -> IngressBackend
    // Where parts of the url conform to RFC 1738.
    Rules []IngressRule `json:"rules"`
}

type IngressRule struct {
    // Host is the fully qualified domain name of a network host, or its IP
    // address as a set of four decimal digit groups separated by ".".
    // Conforms to RFC 1738.
    Host string `json:"host,omitempty"`

    // Paths describe a list of load‑balancer rules under the specified host.
    Paths []IngressPath `json:"paths"`
}

type IngressPath struct {
    // Path is a regex matched against the url of an incoming request.
    Path string `json:"path,omitempty"`

    // Define the referenced service endpoint which the traffic will be forwarded to.
    Backend IngressBackend `json:"backend"`
}

type IngressBackend struct {
    // Specifies the referenced service.
    ServiceRef v1.LocalObjectReference `json:"serviceRef"`

    // Specifies the port of the referenced service.
    ServicePort util.IntOrString `json:"servicePort,omitempty"`

    // Specifies the protocol of the referenced service.
    Protocol v1.Protocol `json:"protocol,omitempty"`
}

Ingress provides only a minimally usable baseline and lacks many common capabilities such as request‑header matching and path rewriting. Moreover, an Ingress resource does nothing on its own: it requires a controller to become functional. When an Ingress is created, the API server performs authentication/authorization and admission control (including mutating and validating webhooks) before persisting the resource in etcd; an Ingress controller then watches these resources and translates the rules into actual proxy configuration.
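For reference, the entire portable surface of Ingress is host/path-to-service routing, as in this minimal sketch (the hostname and service name are placeholders, shown with the modern networking.k8s.io/v1 schema):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
spec:
  rules:
  - host: echo.example.com      # host matching: one of the few things Ingress can express
    http:
      paths:
      - path: /echo             # path matching: prefix or exact only
        pathType: Prefix
        backend:
          service:
            name: echo          # placeholder Service name
            port:
              number: 80
```

Anything beyond this — header matching, rewrites, traffic splitting — falls outside the portable API and must come from controller-specific extensions.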

There are over 30 Ingress controller implementations listed in the Kubernetes documentation, each extending the base API with custom annotations or CRDs to provide additional features. These extensions are mutually incompatible, making migration between controllers costly.
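For example, path rewriting with ingress-nginx relies on a controller-specific annotation that no other controller understands (the service name here is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo
  annotations:
    # ingress-nginx-specific: rewrites the URL to the second capture group,
    # stripping the /app prefix before proxying. Traefik, HAProxy, and others
    # express the same idea with entirely different annotations or CRDs.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app           # placeholder Service name
            port:
              number: 80
```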

East‑west traffic is handled via Services, which route requests to Pods. However, Service traffic typically traverses kube-proxy rules and, in many clusters, an overlay network, which can add latency — prompting some users to bypass Services and connect directly to Pod IPs. Example commands to view Services and Endpoints are shown below:

➜  ~ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   9d
➜  ~ kubectl get endpoints
NAME         ENDPOINTS              AGE
kubernetes   131.83.127.119:6443    9d
➜  ~ kubectl get endpointslices
NAME         ADDRESSTYPE   PORTS   ENDPOINTS         AGE
kubernetes   IPv4          6443    131.83.127.119    9d

Service meshes such as Istio, Linkerd, or Kuma introduce their own CRDs for east‑west traffic management. A sample Kuma TrafficRoute configuration is illustrated below:

apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: api-split
spec:
  sources:
    - match:
        kuma.io/service: frontend_default_svc_80
  destinations:
    - match:
        kuma.io/service: backend_default_svc_80
  conf:
    http:
    - match:
        path:
          prefix: "/api"
      split:
      - weight: 90
        destination:
          kuma.io/service: backend_default_svc_80
          version: '1.0'
      - weight: 10
        destination:
          kuma.io/service: backend_default_svc_80
          version: '2.0'
      destination: # default rule is applied when endpoint does not match any rules in http section
        kuma.io/service: backend_default_svc_80
        version: '1.0'

In summary, north‑south traffic suffers from limited expressiveness in Ingress and high migration costs, while east‑west traffic lacks a unified standard, relying on disparate CRDs.

A 2018 survey showed that only 8% of Ingress users got by with the core API alone, without vendor-specific annotations — clear evidence of demand for a more portable and expressive API. This motivated Kubernetes SIG Network to design the Gateway API, which was announced at KubeCon 2019 and has now reached GA.

Gateway API is role‑based, introducing three core resources:

GatewayClass: owned by the infrastructure provider; it names the managing controller and can reference provider-specific parameters such as IP address pools.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: internet
spec:
  controllerName: "example.net/gateway-controller"
  parametersRef:
    group: example.net/v1alpha1
    kind: Config
    name: internet-gateway-config
---
apiVersion: example.net/v1alpha1
kind: Config
metadata:
  name: internet-gateway-config
spec:
  ip-address-pool: internet-vips

Gateway: managed by cluster operators, specifies listeners, ports, and protocols.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong-http
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP

HTTPRoute / TCPRoute / *Route: managed by application developers to define routing rules, matches, and backend services.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
spec:
  parentRefs:
  - name: kong-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 1027

Gateway API offers stronger expressiveness without relying on annotations. For example, a URL rewrite can be expressed directly in an HTTPRoute:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-rewrite
spec:
  hostnames:
  - rewrite.example
  rules:
  - filters:
    - type: URLRewrite
      urlRewrite:
        hostname: elsewhere.example
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /fennel
    backendRefs:
    - name: example-svc
      weight: 1
      port: 80
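Other capabilities that Ingress could never express portably — request-header matching and weighted traffic splitting — are likewise first-class in HTTPRoute. A sketch under assumed names (the Service names and the canary header are illustrative, and the route attaches to the Gateway defined earlier):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-split
spec:
  parentRefs:
  - name: kong-http            # the Gateway from the example above
  rules:
  # Requests carrying the canary header go straight to v2
  - matches:
    - headers:
      - name: x-canary         # illustrative header name
        value: "true"
    backendRefs:
    - name: echo-v2            # placeholder Service
      port: 80
  # Everything else is split 90/10 between v1 and v2
  - backendRefs:
    - name: echo-v1            # placeholder Service
      port: 80
      weight: 90
    - name: echo-v2
      port: 80
      weight: 10
```

Note how this covers the same ground as the Kuma TrafficRoute shown earlier, but through a vendor-neutral API that any conformant Gateway implementation can honor.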

The design also allows extensibility via spec.parametersRef, enabling integration with arbitrary custom resources for complex scenarios.

Overall, Gateway API provides a role‑based, expressive, and extensible framework that improves portability across providers and is poised to become the future standard for Kubernetes traffic management.

Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
