
MetalLB: Deploying a Load Balancer for Self‑Built Kubernetes Clusters

This article introduces MetalLB, explains its deployment requirements, describes its Layer 2 and BGP operation modes, provides step-by-step installation and configuration instructions for Kubernetes clusters, and demonstrates verification of the load-balancing functionality.

DevOps Operations Practice

Self-built (bare-metal) Kubernetes clusters have no built-in implementation of Services of type LoadBalancer, and the common alternatives for external access have limitations: Ingress handles only HTTP/HTTPS traffic, and NodePort exposes services on high-numbered ports (30000–32767 by default). MetalLB is an open-source project that fills this gap by providing a network load-balancer implementation for such clusters.

Deployment requirements:

- a Kubernetes cluster at version 1.13.0 or later, running in an environment that provides no native load-balancer integration
- a pool of IPv4 addresses that MetalLB may hand out
- for BGP mode, one or more BGP-capable routers
- for Layer 2 mode, port 7946 (TCP and UDP) open between nodes
- a CNI plugin that works with MetalLB (e.g., Antrea, Calico, Canal, Cilium, Flannel, Kube-OVN, Kube-router, Weave Net)

How it works: MetalLB consists of a Controller (a Deployment) and Speakers (a DaemonSet, one Pod per node). The Controller watches for Services of type LoadBalancer, allocates an IP address from the configured pool, and manages that address's lifecycle. The Speakers make the allocated IP reachable from outside the cluster, either by answering ARP requests in Layer 2 mode or by advertising routes over BGP, while kube-proxy on the receiving node forwards the traffic to the backing Pods.

Layer 2 mode elects one leader node per service IP; that node answers ARP requests for the IP and receives all of its traffic, which kube-proxy then spreads across the Pods. This provides failover rather than true load balancing: if the leader fails, another node takes over, but a single node carries all traffic for the service (a potential bandwidth bottleneck) and failover can take several seconds.

BGP mode establishes BGP sessions between each node and the external routers, so every node advertises a route to the service IP and the routers spread traffic across nodes via ECMP. Because ECMP routing is hash-based, a change in the set of next hops (a node joining or leaving) can rehash and reset active connections; routers that support stable (resilient) ECMP hashing mitigate this.

Installation steps (using Kubernetes manifests, version v0.13.4):

If kube-proxy runs in IPVS mode, enable strict ARP: $ kubectl edit configmap -n kube-system kube-proxy # set strictARP: true
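This can also be done non-interactively by piping the live ConfigMap through a substitution and re-applying it, a sketch of the approach suggested in MetalLB's installation documentation:

```shell
# Preview the change first (dry run via kubectl diff):
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl diff -f - -n kube-system

# Apply the change:
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
```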

Install MetalLB components: $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml

Configure the desired mode.

Layer 2 configuration:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.214.50-192.168.214.80   # IP pool for LB
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adver
  namespace: metallb-system

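An L2Advertisement with an empty spec, as above, announces every IPAddressPool in the namespace. To restrict the announcement to a specific pool, the ipAddressPools field can be set explicitly (using the pool name from the example above):

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adver
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool        # announce only addresses from this pool
```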
BGP configuration (example with local AS 64500, peer AS 64501, router at 10.0.0.1, IP pool 192.168.10.0/24):

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: sample
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.0/24
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadver
  namespace: metallb-system
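As with the Layer 2 case, a BGPAdvertisement with an empty spec advertises every pool; it can likewise be pinned to the pool defined above:

```yaml
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadver
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool     # advertise only this pool over BGP
```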

Verification (Layer 2 example): create a Service of type LoadBalancer and a Deployment, apply them, then check that the Service receives an external IP and is reachable from a browser.

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80

After applying the manifests, the Service obtains an external IP, which can be accessed via a browser to confirm that MetalLB is correctly load‑balancing traffic.
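Assuming the Service and Deployment above were saved to a file (myapp.yaml is an assumed name), the check can also be done from the command line; the jsonpath expression reads the IP that MetalLB assigned:

```shell
kubectl apply -f myapp.yaml          # the Service + Deployment manifests above

# The EXTERNAL-IP column should show an address from the configured pool
kubectl get svc myapp-svc

# Extract the assigned address and fetch the page through it
EXTERNAL_IP=$(kubectl get svc myapp-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}"      # should return the nginx welcome page
```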
