
Deploying Cilium on a KIND Cluster with Helm and Exploring Hubble Observability

This tutorial walks through creating a multi‑node KIND Kubernetes cluster with the default CNI disabled, installing Cilium 1.8.2 via Helm with Hubble enabled, deploying a test application, and verifying the effect of CiliumNetworkPolicy objects with eBPF‑based observability.

Cloud Native Technology Community

Google chose Cilium as the next‑generation data plane for GKE because of the security and observability it brings to containers. Cilium is an open‑source project that secures network connectivity between services running on Linux container platforms such as Docker and Kubernetes.

At its core, Cilium relies on eBPF, a Linux kernel technology that lets verified programs be loaded into the running kernel, providing visibility and security enforcement dynamically, without kernel patches or modules.

Prepare Cluster

Use KIND to create a local multi‑node Kubernetes cluster and disable the default CNI via a configuration file.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true

Create the cluster with the config and a specific node image.

(MoeLove) ➜  ~ kind create cluster --config=kindconfig  --image=kindest/node:v1.19.0@sha256:6a6e4d588db3c2873652f382465eeadc2644562a64659a1da4
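Until a CNI plugin is running, the kind nodes stay NotReady. A quick sanity check (output will look roughly like this, with node names derived from the cluster name):

```sh
kubectl get nodes
# NAME                 STATUS     ROLES    AGE   VERSION
# kind-control-plane   NotReady   master   1m    v1.19.0
# kind-worker          NotReady   <none>   1m    v1.19.0
# ...
```

Once Cilium is deployed in the next step, the nodes should flip to Ready.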

Deploy Cilium

Add the official Cilium Helm repository and install Cilium 1.8.2 with Hubble components enabled.

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.8.2 \
  --namespace kube-system \
  --set global.nodeinit.enabled=true \
  --set global.kubeProxyReplacement=partial \
  --set global.hostServices.enabled=false \
  --set global.externalIPs.enabled=true \
  --set global.nodePort.enabled=true \
  --set global.hostPort.enabled=true \
  --set global.pullPolicy=IfNotPresent \
  --set config.ipam=kubernetes \
  --set global.hubble.enabled=true \
  --set global.hubble.relay.enabled=true \
  --set global.hubble.ui.enabled=true \
  --set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

Key values explained: global.hubble.enabled=true turns on Hubble observability, global.hubble.metrics.enabled selects which metric groups are exposed, and global.kubeProxyReplacement=partial lets Cilium take over only the explicitly enabled kube‑proxy features (here NodePort, externalIPs, and hostPort) while kube‑proxy continues to handle the rest.
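For repeatable installs, the same settings can live in a values file instead of a long list of flags. A sketch mirroring the --set flags above (key names follow the 1.8 chart's global.* layout):

```yaml
# values.yaml, equivalent to the --set flags above (Cilium 1.8 chart layout)
global:
  nodeinit:
    enabled: true
  kubeProxyReplacement: partial
  hostServices:
    enabled: false
  externalIPs:
    enabled: true
  nodePort:
    enabled: true
  hostPort:
    enabled: true
  pullPolicy: IfNotPresent
  hubble:
    enabled: true
    relay:
      enabled: true
    ui:
      enabled: true
    metrics:
      enabled:
      - dns
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
config:
  ipam: kubernetes
```

Install with `helm install cilium cilium/cilium --version 1.8.2 --namespace kube-system -f values.yaml`.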

Hubble Observability

Hubble is a fully distributed network‑and‑security observability platform built on eBPF. Use hubble observe to watch live flows and kubectl port-forward to access the UI.

kubectl -n kube-system port-forward svc/hubble-ui 12000:80

After forwarding, open http://127.0.0.1:12000 in a browser to view pods, policies, and traffic details.
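Flows can also be tailed from a terminal with the Hubble CLI. A sketch, assuming the CLI is installed locally and the hubble-relay service is forwarded (the service name and port here follow the 1.8 chart defaults; adjust if yours differ):

```sh
# Forward the relay, then stream live flows from it
kubectl -n kube-system port-forward svc/hubble-relay 4245:80 &
hubble observe --server localhost:4245 --follow
```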

Test Application

Deploy a small connectivity‑check demo consisting of a Service (echo-a), several Deployments, and two CiliumNetworkPolicy objects: one explicitly allowing egress to echo-a, and one allowing only DNS egress, which implicitly denies everything else.

apiVersion: v1
kind: Service
metadata:
  name: echo-a
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: echo-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-a
spec:
  selector:
    matchLabels:
      name: echo-a
  replicas: 1
  template:
    metadata:
      labels:
        name: echo-a
    spec:
      containers:
      - name: echo-container
        image: docker.io/cilium/json-mock:1.0
        imagePullPolicy: IfNotPresent
        readinessProbe:
          exec:
            command: ["curl", "-sS", "--fail", "-o", "/dev/null", "localhost"]
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "pod-to-a-allowed-cnp"
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a-allowed-cnp
  egress:
  - toEndpoints:
    - matchLabels:
        name: echo-a
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "pod-to-a-l3-denied-cnp"
spec:
  endpointSelector:
    matchLabels:
      name: pod-to-a-l3-denied-cnp
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
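The policies above select client pods (and the curl checks below exec into them) that are not part of the manifest shown. A minimal sketch of one such client, modeled on Cilium's connectivity‑check examples (the image and command are illustrative; any pod that carries the matching label and has curl available works):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-to-a-allowed-cnp
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-to-a-allowed-cnp
  template:
    metadata:
      labels:
        name: pod-to-a-allowed-cnp   # matched by the allow policy's endpointSelector
    spec:
      containers:
      - name: client
        image: docker.io/byrnedo/alpine-curl:0.1.8   # illustrative curl-capable image
        command: ["/bin/ash", "-c", "sleep 1000000000"]
```

The pod-to-a and pod-to-a-l3-denied-cnp clients differ only in their names and labels.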

Apply the manifest and verify pod status.

kubectl apply -f cilium-demo.yaml
kubectl get pods

Use curl from each client pod to demonstrate that the allowed pod can reach echo-a while the denied pod cannot, confirming the CiliumNetworkPolicy effect.

(MoeLove) ➜  ~ kubectl exec pod-to-a-5567c85856-xsg5b -- curl -sI --connect-timeout 5 echo-a
HTTP/1.1 200 OK
...
(MoeLove) ➜  ~ kubectl exec pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm -- curl -sI --connect-timeout 5 echo-a
command terminated with exit code 28
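Exit code 28 is curl's operation‑timeout error: the packets were silently dropped by policy rather than rejected. The drop can also be confirmed in Hubble, which records a policy verdict for every flow; for example (a sketch, assuming the Hubble CLI can reach hubble-relay, and using the pod name from this run):

```sh
# Show only flows that Cilium dropped, filtered to the denied client pod
hubble observe --verdict DROPPED --pod default/pod-to-a-l3-denied-cnp-7f64d7b7c4-fsxrm
```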

Summary

The article shows how to set up a KIND cluster, install Cilium with Helm, explore Hubble observability, deploy a test workload, and validate network policies, providing a practical introduction to eBPF‑based networking and security in a cloud‑native environment.

Observability · Kubernetes · eBPF · Network Security · Helm · Cilium · Hubble
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
