Deploying and Using MetalLB for Load Balancing in Self‑Managed Kubernetes Clusters
This article introduces MetalLB, an open‑source load‑balancing solution for self‑managed Kubernetes clusters. It explains MetalLB's deployment requirements, operating principles, and its Layer 2 and BGP modes, then provides step‑by‑step installation and verification instructions using plain Kubernetes manifests (MetalLB can also be installed via Helm or Kustomize).
MetalLB is an open‑source load‑balancing solution that fills a gap in self‑built Kubernetes clusters, which lack the cloud‑provider integration needed to provision Services of type LoadBalancer. The traditional workarounds have drawbacks: Ingress operates at the application layer and cannot expose arbitrary TCP/UDP services, while NodePort exposes Services on high, often randomly assigned ports.
Deployment requirements
A Kubernetes cluster running version 1.13.0 or later that does not already have a network load‑balancing implementation.
A pool of IPv4 addresses for MetalLB to allocate.
If using BGP mode, one or more BGP‑capable routers.
For Layer 2 mode, nodes must allow inbound traffic on port 7946 (TCP and UDP), which the speakers use for memberlist communication.
The cluster’s CNI plugin must be compatible (for example Antrea, Calico, Canal, Cilium, Flannel, or Kube-OVN).
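The size of the address pool determines how many LoadBalancer Services MetalLB can serve at once. As a rough illustration (not part of MetalLB itself), a pool can be written either as a start‑end range or as CIDR notation, and its capacity can be computed with Python's ipaddress module:

```python
import ipaddress

def pool_size(spec: str) -> int:
    """Count allocatable IPs in a MetalLB-style address spec.

    Accepts either a 'start-end' range or CIDR notation, the two
    forms an IPAddressPool's `addresses` field understands.
    """
    if "-" in spec:
        start, end = (ipaddress.ip_address(s.strip()) for s in spec.split("-"))
        return int(end) - int(start) + 1
    return ipaddress.ip_network(spec).num_addresses

# The two pools used later in this article:
print(pool_size("192.168.214.50-192.168.214.80"))  # 31 addresses
print(pool_size("192.168.10.0/24"))                # 256 addresses
```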
Working principle
MetalLB consists of two components: a Controller (deployed as a Deployment) and a Speaker (deployed as a DaemonSet on every node). The Controller watches Service objects; when a Service's type is set to LoadBalancer, it assigns an IP from the configured pool and manages that IP's lifecycle. The Speaker announces the assigned IP, via ARP/NDP in Layer 2 mode or via BGP, so that external traffic can reach the Service; kube‑proxy then forwards the traffic to the appropriate Pods.
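As a conceptual sketch only (not MetalLB's actual code), the controller's role can be pictured as a small allocator that hands out pool addresses to Services of type LoadBalancer and reclaims them on deletion:

```python
import ipaddress

class Allocator:
    """Toy model of the controller-side IP assignment loop."""

    def __init__(self, cidr: str):
        net = ipaddress.ip_network(cidr)
        self.free = [str(ip) for ip in net.hosts()]
        self.assigned = {}  # service name -> IP

    def on_service_added(self, name: str, svc_type: str):
        # Only Services of type LoadBalancer receive an external IP.
        if svc_type != "LoadBalancer" or name in self.assigned:
            return None
        ip = self.free.pop(0)
        self.assigned[name] = ip
        return ip  # a real speaker would now announce this IP

    def on_service_deleted(self, name: str):
        ip = self.assigned.pop(name, None)
        if ip:
            self.free.insert(0, ip)  # return the address to the pool

alloc = Allocator("192.168.214.48/29")
print(alloc.on_service_added("myapp-svc", "LoadBalancer"))  # first host IP
print(alloc.on_service_added("internal", "ClusterIP"))      # None
```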
Modes
Layer 2 mode: one node is elected as the leader for a Service IP and receives all traffic sent to it. This provides failover‑based high availability rather than true load balancing: the leader node is a single‑node bandwidth bottleneck, and failover is relatively slow because a new leader must be elected when the current one becomes unavailable.
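Conceptually, the elected leader answers ARP "who-has" queries for the Service IP with its own MAC address, so the network delivers all traffic for that IP to the one leader node. The sketch below builds such an ARP reply by hand purely to illustrate the mechanism (it is not MetalLB code, and the MAC/IP values are placeholders):

```python
import socket
import struct

def build_arp_reply(sender_mac: bytes, sender_ip: str,
                    target_mac: bytes, target_ip: str) -> bytes:
    """Build the 28-byte ARP payload a Layer 2 speaker effectively
    answers with: 'sender_ip is at sender_mac'."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,       # hardware type: Ethernet
        0x0800,  # protocol type: IPv4
        6, 4,    # MAC / IPv4 address lengths
        2,       # opcode 2 = reply
        sender_mac, socket.inet_aton(sender_ip),
        target_mac, socket.inet_aton(target_ip),
    )

# Leader node claims the Service IP 192.168.214.50 (placeholder values):
pkt = build_arp_reply(b"\x02\x00\x00\x00\x00\x01", "192.168.214.50",
                      b"\x02\x00\x00\x00\x00\x02", "192.168.214.10")
print(len(pkt))  # 28
```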
BGP mode: each node establishes a BGP session with one or more routers, which then balance traffic across all nodes, typically via ECMP. This offers true load balancing, but when the backend node set changes, hash‑based ECMP can remap existing flows to different nodes and break established connections; configuring the routers to use a more stable (resilient) ECMP hashing algorithm is recommended.
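The flow‑remapping problem can be shown numerically. With naive modulo hashing, losing one backend remaps most existing flows; a stable scheme such as highest‑random‑weight (rendezvous) hashing only moves the flows owned by the lost node. The sketch below is an illustration, not router firmware:

```python
import hashlib

def h(flow: str, node: str = "") -> int:
    """Deterministic 64-bit hash of a flow (optionally salted by node)."""
    return int.from_bytes(hashlib.sha256((flow + node).encode()).digest()[:8], "big")

def modulo_pick(flow, nodes):
    # Naive scheme: any change in len(nodes) reshuffles almost everything.
    return nodes[h(flow) % len(nodes)]

def rendezvous_pick(flow, nodes):
    # Highest-random-weight hashing: stable when the node set changes.
    return max(nodes, key=lambda n: h(flow, n))

flows = [f"10.0.0.{i}:5{i:04d}" for i in range(200)]
nodes = ["node-a", "node-b", "node-c", "node-d"]

for pick in (modulo_pick, rendezvous_pick):
    before = {f: pick(f, nodes) for f in flows}
    after = {f: pick(f, nodes[:-1]) for f in flows}  # node-d fails
    moved = sum(before[f] != after[f] for f in flows)
    print(pick.__name__, "remapped", moved, "of", len(flows), "flows")
```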
Installation steps
1. Enable strict ARP mode for kube-proxy (required when kube-proxy runs in IPVS mode, on Kubernetes v1.14.2 and later):
$ kubectl edit configmap -n kube-system kube-proxy
# set strictARP to true
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
2. Install the MetalLB components (deployed into the metallb-system namespace by default):
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml
3. Configure the desired mode.
Layer 2 configuration
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.214.50-192.168.214.80 # IP pool for LB
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adver
  namespace: metallb-system
BGP configuration
Create a BGPPeer:
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: sample
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1
Create an IP address pool:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.0/24
Create a BGPAdvertisement:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadver
  namespace: metallb-system
Function verification
Deploy a sample Service of type LoadBalancer and a corresponding Deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80
After applying the manifests, the Service obtains an external IP from MetalLB; accessing that IP from a browser confirms successful load balancing.
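In practice you would read the external IP from kubectl get svc myapp-svc and poll it until the endpoint answers. A small generic polling helper (an illustration; the commented‑out URL is a placeholder for whatever IP MetalLB assigns) might look like:

```python
import time
import urllib.request

def wait_for_http(url: str, timeout: float = 60.0, interval: float = 2.0) -> int:
    """Poll `url` until it answers, returning the HTTP status code.

    Raises TimeoutError if the endpoint never responds in time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                return resp.status
        except OSError:  # connection refused, DNS failure, timeout, ...
            time.sleep(interval)
    raise TimeoutError(f"{url} did not respond within {timeout}s")

# e.g. wait_for_http("http://192.168.214.50/")  # the IP MetalLB assigned
```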
Thank you for reading; feel free to like, share, or follow for more technical content.
DevOps Operations Practice
We share professional insights on cloud-native, DevOps & operations, Kubernetes, observability & monitoring, and Linux systems.