What’s New in Cilium 1.11? Service Mesh, BGP, XDP and More
Cilium 1.11 introduces a beta Service Mesh, Kubernetes Ingress support, OpenTelemetry integration, topology‑aware load balancing, BGP Pod CIDR announcements, managed IPv4/IPv6 neighbor discovery, XDP multi‑device acceleration, graceful termination of service backends, an expanded load‑balancer ID space, Cilium endpoint slices, and a number of smaller feature enhancements and deprecations.
Service Mesh (Beta)
Cilium now offers a beta Service Mesh built on eBPF, providing L7 traffic management, load balancing, TLS termination, canary releases and tracing. The feature is developed on a separate branch and is expected to merge into the mainline before the 1.12 release.
Cilium 1.11 Overview
The 1.11 release adds many Kubernetes‑related capabilities, including a standalone load balancer, enhanced observability, and new networking features.
What is Cilium?
Cilium is an open‑source project that uses eBPF to provide transparent networking, security, and API connectivity for container workloads on Kubernetes. It implements multi‑cluster routing, kube‑proxy replacement, transparent encryption, and integrates tightly with Envoy.
OpenTelemetry Support
Hubble now exports tracing and metric data in OpenTelemetry format, allowing integration with back‑ends such as Jaeger. An OpenTelemetry adapter can be deployed alongside Cilium (v1.11 or newer) and is typically installed via the OpenTelemetry Operator.
Topology‑Aware Load Balancing
Cilium leverages Kubernetes topology‑aware hints to route traffic to the nearest endpoint (node, rack, zone, region) and prefers same‑region endpoints, reducing cross‑zone traffic and associated costs.
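As a sketch of how this is enabled on the Kubernetes side, topology‑aware hints are opted into per Service via an annotation; the service name and selector below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
  annotations:
    # Ask Kubernetes to populate topology hints on the EndpointSlices;
    # Cilium then prefers endpoints in the client's own zone.
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: my-app             # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```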
Kubernetes APIServer Policy Matching
A new policy entity enables simple creation of policies that control traffic to and from the Kubernetes API server, using automatic entity selectors for the reserved kube-apiserver label.
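A minimal sketch of such a policy: the reserved kube-apiserver entity can be referenced directly in a CiliumNetworkPolicy egress rule (the policy name and endpoint selector below are hypothetical):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-to-kube-apiserver   # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: my-operator            # hypothetical label
  egress:
    # Allow egress only to the Kubernetes API server, wherever it runs.
    - toEntities:
        - kube-apiserver
```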
BGP Pod CIDR Announcement
Cilium can announce Pod CIDR routes via BGP to external routers, integrating with existing data‑center networking. The feature is enabled with the cilium install --config="bgp-announce-pod-cidr=true" flag, and BGP peers are configured through a ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 192.168.1.11
        peer-asn: 64512
        my-asn: 64512
```
Managed IPv4/IPv6 Neighbor Discovery
When eBPF replaces kube‑proxy, Cilium now relies on the Linux kernel to discover L2 neighbors, supporting both IPv4 and IPv6. The kernel‑managed neighbor entries are marked as “managed” and kept in a REACHABLE state, eliminating the previous PERMANENT entries.
XDP Support for Multi‑Device Load Balancer
The load balancer now supports XDP_REDIRECT on multiple network devices, enabling high‑performance packet processing across bonded or multi‑NIC setups.
XDP Transparent Support for Bond Devices
XDP_TX semantics are applied to bond interfaces, allowing traffic to be distributed across bonded slaves with failover and LACP support.
Route‑Based Device Detection
Device detection now examines all routing table entries in the host namespace, automatically selecting appropriate interfaces without requiring the devices option.
Graceful Termination of Service Backend Traffic
Cilium watches EndpointSlice updates; when a pod is terminating, Cilium removes it from new load‑balancing decisions while allowing existing connections to finish within a configurable grace period.
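The grace period itself is taken from the pod specification; a minimal sketch showing where it is configured (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod                 # hypothetical name
spec:
  # Cilium lets existing connections drain for up to this long after the
  # pod enters the Terminating state, while excluding it from new traffic.
  terminationGracePeriodSeconds: 60
  containers:
    - name: app
      image: nginx:1.21             # hypothetical image
```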
Egress Gateway Optimizations
The egress gateway now supports direct routing, internal‑traffic distinction, and shared egress IPs across policies, fixing earlier issues with reply classification and improving test coverage.
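For illustration, an egress gateway policy of the kind this release improves looks roughly like the following, assuming the v2alpha1 CiliumEgressNATPolicy CRD of this release; the name, labels, and addresses are hypothetical:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample               # hypothetical name
spec:
  egress:
    - podSelector:
        matchLabels:
          app: my-client            # hypothetical label
  # Traffic from matching pods to these CIDRs leaves the cluster
  # masqueraded to the shared egress IP below.
  destinationCIDRs:
    - 192.168.33.0/24               # hypothetical CIDR
  egressSourceIP: 192.168.33.100    # hypothetical egress IP
```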
Kubernetes Cgroup Enhancements
Cilium’s eBPF programs now attach to socket hooks in cgroup v2, and recent kernel patches allow safe coexistence of cgroup v1 and v2, improving compatibility with modern runtimes.
Scalable Load Balancer ID Space
Service backend ID allocation and the corresponding datapath maps have been expanded from 16‑bit to 32‑bit, allowing clusters with more than 64k service backends to scale reliably.
Cilium Endpoint Slices
A new CRD, CiliumEndpointSlice, aggregates endpoint information per namespace, reducing watch traffic on the kube‑apiserver and improving scalability, especially in large clusters.
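As a sketch, the feature can be switched on through the agent configuration, assuming the enable-cilium-endpoint-slice option in the cilium-config ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Aggregate CiliumEndpoints into CiliumEndpointSlices to cut
  # kube-apiserver watch traffic in large clusters.
  enable-cilium-endpoint-slice: "true"
```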
Kube‑Proxy Replacement with Istio Support
Cilium's kube‑proxy replacement can now operate alongside Istio sidecars: socket‑level load balancing can be restricted to the host namespace, so that service traffic leaving a pod still carries the original service address and remains visible to the sidecar proxy.
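A minimal sketch of the relevant setting, assuming the bpf-lb-sock-hostns-only option in the cilium-config ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Restrict socket-level load balancing to the host namespace so that
  # Istio sidecars in pods still see the original service addresses.
  bpf-lb-sock-hostns-only: "true"
```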
Feature Enhancements and Deprecations
Host Firewall graduated to stable.
Consul KVStore backend deprecated in favor of Etcd/Kubernetes.
IPVLAN as a veth alternative deprecated.
Policy Tracing deprecated; replaced by network‑policy editors and policy verdicts.
For full details, refer to the official Cilium documentation.
Qingyun Technology Community