Running Lightweight Kubernetes (K3s) on a Local Machine with Grafana and Prometheus
This article explains how to install and run K3s, a lightweight, certified Kubernetes distribution, on a workstation using k3d as a wrapper, and demonstrates deploying the Operator Lifecycle Manager (OLM), Prometheus, and Grafana with full command-line examples and configuration files.
K3s, developed by Rancher Labs, is designed for resource-constrained environments such as edge devices and IoT. It stands out for its small binary size, low memory and CPU usage, easy installation across Linux, macOS, and Windows, high-availability support, and full compatibility with the Kubernetes API, including built-in security mechanisms.
K3s Main Features and Characteristics
Lightweight and resource‑efficient: smaller memory footprint, binary size, and CPU overhead compared to standard Kubernetes.
Easy to install and manage: single binary installation with containerd as the default container runtime.
High availability and resilience: supports multi-server (HA) configurations with embedded etcd, automatic etcd snapshots, and an integrated service load balancer.
Security and compatibility: full Kubernetes API compatibility, built‑in TLS, RBAC, Seccomp, and AppArmor support.
Typical Use Cases for K3s
Edge computing: deploy and manage containerized workloads close to data sources.
IoT deployments: provide Kubernetes capabilities on devices with limited resources.
Development and testing environments: run a local Kubernetes cluster on a laptop or workstation.
Small‑scale production: simplify installation and reduce resource consumption for modest workloads.
Overall, K3s offers a lightweight, easy‑to‑use, and resource‑efficient Kubernetes distribution that excels in edge, IoT, development, testing, and small‑scale deployment scenarios.
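To illustrate the single-binary installation mentioned above, K3s can be bootstrapped on a Linux host with its upstream install script. This is a sketch, not part of the k3d workflow used in the rest of this article; as always, review scripts before piping them to a shell:

```shell
# Install K3s on a Linux host using the official install script
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl; verify the node registered with the API server
sudo k3s kubectl get nodes
```

On a macOS or Windows workstation, K3s cannot run natively, which is where k3d comes in: it runs K3s nodes as containers inside Docker.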
Install k3d – the Wrapper for K3s
(base) skondla@Sams-MBP:Downloads $ brew search k3d
==> Formulae
k3d ✔ f3d
# k3d is already installed on my macbook
(base) skondla@Sams-MBP:Downloads $ brew update && brew install k3d
Updated 3 taps (weaveworks/tap, homebrew/core and homebrew/cask).
==> New Formulae
bbot erlang@25 trzsz-ssh
==> New Casks
whisky
==> Outdated Formulae
aws-iam-authenticator eksctl libuv
You have 3 outdated formulae installed.
You can upgrade them with brew upgrade
or list them with brew outdated.
Warning: k3d 5.5.1 is already installed and up-to-date.
To reinstall 5.5.1, run:
brew reinstall k3d
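Before creating a cluster, a quick check confirms which k3d CLI version is installed and which K3s version it will run by default:

```shell
# Print the k3d CLI version and the default K3s version it targets
k3d version
```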
(base) skondla@Sams-MBP:Downloads $ which k3d
/usr/local/bin/k3d
(base) skondla@Sams-MBP:~ $ k3d cluster create devhacluster --servers 3 --agents 1
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-devhacluster'
INFO[0000] Created image volume k3d-devhacluster-images
INFO[0000] Starting new tools node...
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-devhacluster-server-0'
INFO[0000] Starting Node 'k3d-devhacluster-tools'
INFO[0001] Creating node 'k3d-devhacluster-server-1'
INFO[0002] Creating node 'k3d-devhacluster-server-2'
INFO[0002] Creating node 'k3d-devhacluster-agent-0'
INFO[0002] Creating LoadBalancer 'k3d-devhacluster-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] Starting new tools node...
INFO[0002] Starting Node 'k3d-devhacluster-tools'
INFO[0003] Starting cluster 'devhacluster'
INFO[0003] Starting the initializing server...
INFO[0004] Starting Node 'k3d-devhacluster-server-0'
INFO[0005] Starting servers...
INFO[0005] Starting Node 'k3d-devhacluster-server-1'
INFO[0027] Starting Node 'k3d-devhacluster-server-2'
INFO[0040] Starting agents...
INFO[0040] Starting Node 'k3d-devhacluster-agent-0'
INFO[0042] Starting helpers...
INFO[0042] Starting Node 'k3d-devhacluster-serverlb'
INFO[0049] Injecting records for hostAliases (incl. host.k3d.internal) and for 6 network members into CoreDNS configmap...
INFO[0051] Cluster 'devhacluster' created successfully!
INFO[0051] You can now use it like this:
kubectl cluster-info
(base) skondla@Sams-MBP:~ $ k get nodes -o wide
NAME                        STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION        CONTAINER-RUNTIME
k3d-devhacluster-agent-0    Ready    <none>                      76s    v1.26.4+k3s1   172.23.0.6    <none>        K3s dev    5.15.49-linuxkit-pr   containerd://1.6.19-k3s1
k3d-devhacluster-server-0   Ready    control-plane,etcd,master   109s   v1.26.4+k3s1   172.23.0.3    <none>        K3s dev    5.15.49-linuxkit-pr   containerd://1.6.19-k3s1
k3d-devhacluster-server-1   Ready    control-plane,etcd,master   92s    v1.26.4+k3s1   172.23.0.4    <none>        K3s dev    5.15.49-linuxkit-pr   containerd://1.6.19-k3s1
k3d-devhacluster-server-2   Ready    control-plane,etcd,master   79s    v1.26.4+k3s1   172.23.0.5    <none>        K3s dev    5.15.49-linuxkit-pr   containerd://1.6.19-k3s1
(base) skondla@Sams-MBP:~ $ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
... (additional resource creation output) ...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
Check namespaces:
(base) skondla@Sams-MBP:~ $ k get ns
NAME STATUS AGE
default Active 21m
flaskapp1-namespace Active 12m
kube-node-lease Active 21m
kube-public Active 21m
kube-system Active 21m
olm Active 36s
operators Active 36s
rabbitmq-system Active 16m
Deploy Prometheus:
(base) skondla@Sams-MBP:~ $ kubectl create -f https://operatorhub.io/install/prometheus.yaml
subscription.operators.coreos.com/my-prometheus created
... (status output) ...
Deploy Grafana (YAML manifest):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:9.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer

Start port-forward to access the Grafana UI:
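Before port-forwarding, the manifest has to be applied. Assuming it was saved as grafana.yaml (the filename is illustrative), a sketch of applying it and waiting for the rollout:

```shell
# Create the PVC, Deployment, and Service defined in the manifest
kubectl apply -f grafana.yaml

# Block until the Grafana pod passes its readiness probe
kubectl rollout status deployment/grafana --timeout=120s
```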
(base) skondla@Sams-MBP:grafana $ kubectl port-forward service/grafana 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
The article concludes with screenshots of the Grafana dashboard showing metrics collected from the K3s cluster.
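To point Grafana at the operator-managed Prometheus without clicking through the UI, a datasource can also be provisioned declaratively. The sketch below assumes the Prometheus Operator's default prometheus-operated Service on port 9090, and that the ConfigMap is mounted into the Grafana pod at /etc/grafana/provisioning/datasources; adjust names to match your cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources   # mount at /etc/grafana/provisioning/datasources
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-operated:9090   # default Service created by the Prometheus Operator
        isDefault: true
```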
DevOps Cloud Academy
Exploring industry DevOps practices and technical expertise.