Master Kubernetes Security: From RBAC to Network Policies

This guide explains why Kubernetes security is critical, presents a layered defense architecture, and provides practical steps—including RBAC least‑privilege enforcement, network‑policy zero‑trust design, Pod Security Standards, monitoring rules, and automation scripts—to harden production clusters while avoiding common pitfalls.

Raymond Ops

Why Kubernetes Security Matters

Industry incident analyses consistently attribute the large majority of Kubernetes security incidents to misconfigured permissions and missing network boundaries, making robust security essential for data protection and service stability.

Core Protection Architecture

┌─────────────────────────────────────────┐
│               API Server                │
├─────────────────┬───────────────────────┤
│      RBAC       │    Network Policy     │
│  Permissions    │  Network Isolation    │
├─────────────────┼───────────────────────┤
│  Pod Security   │    Service Mesh       │
│  Container Sec  │  Traffic Encryption   │
└─────────────────┴───────────────────────┘

First Defense: Fine‑Grained RBAC

1. Enforce Least‑Privilege

Bad practice: Many operators grant cluster-admin to applications for convenience.

# ❌ Dangerous binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dangerous-binding
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin  # Over-privileged: full control of the entire cluster

Correct practice: Define only the permissions required by the workload.

# ✅ Minimal namespaced Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
  # Note: resourceNames accepts exact names only; a value like
  # "my-app-*" is treated as a literal name, not a pattern
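
A Role grants nothing until it is bound to an identity. A matching RoleBinding (the ServiceAccount name and namespace here are illustrative) ties the Role to the workload:

# Bind the Role to the application's ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader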

2. Dynamic Permission Auditing Script

#!/bin/bash
# Scan for over-privileged bindings

echo "🔍 Scanning for excessive permissions..."

# Check cluster-admin bindings
kubectl get clusterrolebindings -o json | jq -r \
  '.items[] | select(.roleRef.name=="cluster-admin")
   | .metadata.name + " -> " + (.subjects[]?.name // "N/A")'

# Check wildcard permissions
kubectl get roles,clusterroles -A -o json | jq -r \
  '.items[] | select(.rules[]?.resources[]? == "*")
   | .metadata.name + " (namespace: " + (.metadata.namespace // "cluster-wide") + ")"'
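
The same wildcard check can be done offline against saved API output, which is easier to unit-test. A minimal Python sketch (function and sample names are mine, not part of any library) that takes the parsed JSON from `kubectl get clusterroles -o json`:

```python
def find_wildcard_roles(roles_json: dict) -> list[str]:
    """Return names of roles whose rules grant '*' on resources or verbs."""
    flagged = []
    for item in roles_json.get("items", []):
        for rule in item.get("rules") or []:
            if "*" in (rule.get("resources") or []) or "*" in (rule.get("verbs") or []):
                flagged.append(item["metadata"]["name"])
                break  # one wildcard rule is enough to flag the role
    return flagged

# Example with a stubbed API response:
sample = {"items": [
    {"metadata": {"name": "safe-role"},
     "rules": [{"resources": ["pods"], "verbs": ["get"]}]},
    {"metadata": {"name": "risky-role"},
     "rules": [{"resources": ["*"], "verbs": ["*"]}]},
]}
print(find_wildcard_roles(sample))  # -> ['risky-role']
```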

Second Defense: Network Policy Isolation

1. Zero‑Trust Network Model

Core idea: Deny all traffic by default and explicitly allow only required communication.

# Default deny policy (NetworkPolicy is namespaced: apply one per namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress

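
One caveat worth stating up front: with Egress denied by default, pods also lose DNS. A common companion policy re-allows it; the selectors below assume a standard kube-dns/CoreDNS deployment in kube-system:

# Allow DNS lookups even under default-deny egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53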
2. Precise Service‑to‑Service Controls

# Database access policy: only API server may talk to MySQL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-access-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-server
      namespaceSelector:
        matchLabels:
          name: production
    ports:
    - protocol: TCP
      port: 3306
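
Note the semantics above: a podSelector and namespaceSelector inside the same `from` element are ANDed (API-server pods in the production namespace). Splitting them into two elements ORs them instead, which is a far broader grant:

# ⚠️ Two separate 'from' entries are ORed: this admits any pod labeled
# app=api-server in any namespace, plus every pod in 'production'
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-server
    - namespaceSelector:
        matchLabels:
          name: production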

3. Network Policy Validation Tool

import subprocess

def test_network_connectivity():
    """Test whether a network policy is enforced."""
    test_cases = [
        {"from": "frontend-pod", "to": "database-pod", "port": 3306, "expected": "DENY"},
        {"from": "api-pod", "to": "database-pod", "port": 3306, "expected": "ALLOW"},
    ]
    for case in test_cases:
        try:
            result = subprocess.run(
                ["kubectl", "exec", case["from"], "--",
                 "nc", "-zv", case["to"], str(case["port"])],
                capture_output=True, timeout=10)
            reachable = result.returncode == 0
        except subprocess.TimeoutExpired:
            reachable = False  # a hang usually means packets are being dropped
        status = "PASS" if reachable == (case["expected"] == "ALLOW") else "FAIL"
        print(f"🧪 {case['from']} -> {case['to']}:{case['port']} | {status}")

if __name__ == "__main__":
    test_network_connectivity()

Third Defense: Pod Security Standards

1. PSS Configuration

# Namespace-level security policy
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

2. Security Context Best Practices

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    runAsGroup: 10001
    fsGroup: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

Security Monitoring & Alerts

1. Real‑Time Event Monitoring (Falco rule)

# Privileged container detection (uses Falco's k8s_audit source;
# macros kevt/pod/kcreate come from the stock k8s audit ruleset)
- rule: Create Privileged Container
  desc: Detect creation of a pod with a privileged container
  condition: >
    kevt and pod and kcreate
    and ka.req.pod.containers.privileged intersects (true)
  output: "Privileged pod created (user=%ka.user.name pod=%ka.target.name namespace=%ka.target.namespace)"
  priority: WARNING
  source: k8s_audit

2. Security Scoring Dashboard Script

#!/bin/bash
# Kubernetes security score report

echo "📈 Cluster security score report"
echo "================================"

# RBAC score (max 30 points): fewer cluster-admin bindings is better
cluster_admin_count=$(kubectl get clusterrolebindings -o json | jq '[.items[] | select(.roleRef.name=="cluster-admin")] | length')
if [ "$cluster_admin_count" -lt 3 ]; then
  rbac_score=30
elif [ "$cluster_admin_count" -lt 5 ]; then
  rbac_score=20
else
  rbac_score=10
fi

# NetworkPolicy score (max 30 points): count namespaces with at least one policy,
# not the raw policy count
ns_with_netpol=$(kubectl get networkpolicies -A --no-headers 2>/dev/null | awk '{print $1}' | sort -u | wc -l)
total_ns=$(kubectl get ns --no-headers | wc -l)
netpol_coverage=$((ns_with_netpol * 100 / total_ns))
if [ "$netpol_coverage" -gt 80 ]; then
  netpol_score=30
elif [ "$netpol_coverage" -gt 50 ]; then
  netpol_score=20
else
  netpol_score=10
fi

total_score=$((rbac_score + netpol_score))

echo "📋 RBAC security: $rbac_score/30"
echo "🌐 NetworkPolicy: $netpol_score/30"
echo "🏆 Total: $total_score/60"
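
The scoring logic itself is easier to reason about (and unit-test) factored out of shell. A Python sketch of one consistent scheme; the thresholds are illustrative, not a standard:

```python
def rbac_score(cluster_admin_bindings: int) -> int:
    """Score RBAC hygiene out of 30: fewer cluster-admin bindings is better."""
    if cluster_admin_bindings < 3:
        return 30
    if cluster_admin_bindings < 5:
        return 20
    return 10

def netpol_score(coverage_percent: int) -> int:
    """Score NetworkPolicy coverage out of 30: percent of namespaces with a policy."""
    if coverage_percent > 80:
        return 30
    if coverage_percent > 50:
        return 20
    return 10

# Example: 2 cluster-admin bindings, 85% namespace coverage
print(rbac_score(2) + netpol_score(85))  # -> 60
```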

Automated Hardening

1. Helm Chart Security Template

# values.yaml security template
security:
  podSecurityStandard: "restricted"
  networkPolicies:
    enabled: true
    defaultDeny: true
    allowedIngress:
    - from: "frontend"
      ports: [8080]
    allowedEgress:
    - to: "database"
      ports: [3306]

securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  dropAllCapabilities: true
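
These values do nothing by themselves; a chart template must consume them. A hypothetical `templates/default-deny.yaml` (file name and value paths match the sketch above, not any published chart) might render the deny policy only when enabled:

# templates/default-deny.yaml (illustrative)
{{- if and .Values.security.networkPolicies.enabled .Values.security.networkPolicies.defaultDeny }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .Release.Name }}-default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
{{- end }}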

2. CI/CD Security Gate

# .github/workflows/security-check.yml
name: Kubernetes Security Scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest  # assumes conftest, trivy, and kubectl are available
    steps:
    - uses: actions/checkout@v4
    - name: Security checks
      run: |
        # OPA Conftest policy check
        conftest test --policy security-policies/ k8s-manifests/
        # Trivy config scan
        trivy config k8s-manifests/
        # Validate network policies against the live API server
        kubectl apply --dry-run=server -f network-policies/
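
A bespoke gate in the same spirit can also be scripted directly. This hedged Python sketch (the helper name and rule set are mine) rejects container specs missing key hardening flags, handling both bare Pods and workload templates:

```python
def insecure_containers(manifest: dict) -> list[str]:
    """Return names of containers missing key hardening settings."""
    spec = manifest.get("spec", {})
    # Deployments/StatefulSets nest the pod spec under .spec.template.spec
    pod_spec = spec.get("template", {}).get("spec", spec)
    bad = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext") or {}
        # allowPrivilegeEscalation defaults to true if unset
        if sc.get("allowPrivilegeEscalation", True) or not sc.get("readOnlyRootFilesystem", False):
            bad.append(c["name"])
    return bad

# Example: one hardened container, one left at defaults
deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "hardened", "securityContext": {
        "allowPrivilegeEscalation": False, "readOnlyRootFilesystem": True}},
    {"name": "loose", "securityContext": {}},
]}}}}
print(insecure_containers(deployment))  # -> ['loose']
```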

Production Recommendations

1. Layered Defense Strategy

Boundary layer: Ingress + WAF

Network layer: NetworkPolicy + Service Mesh

Application layer: RBAC + Pod Security Standards

Data layer: Encryption + Auditing

2. Incremental Hardening Timeline

Week 1: Implement basic RBAC and clean up excessive permissions

Weeks 2-3: Deploy network policies and tighten them gradually

Week 4: Enable Pod Security Standards

Ongoing: Monitoring, alerts, and incident response

3. Common Pitfalls to Avoid

❌ Enabling all policies at once can cause service disruption

❌ Ignoring DNS policies may block CoreDNS communication

❌ Overly complex network policies are hard to maintain

Conclusion

Kubernetes security hardening is a systematic engineering effort that must cover permission control, network isolation, and container security; it is a continuous improvement process rather than a one‑time task.

Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
