Unlock the Full Deployment‑to‑Service Workflow in Kubernetes
This comprehensive guide walks operators through the entire Kubernetes workflow from creating a Deployment to exposing a Service, explaining core resources, control loops, scheduling, networking, rolling updates, troubleshooting steps, best‑practice configurations, performance tuning, and security hardening.
Background and Problem
In day‑to‑day operations teams often see Service‑access failures, Pods that never start, or inter‑Pod communication errors. These symptoms usually stem from a misunderstanding of the core Kubernetes workflow rather than from application bugs. The article explains the complete data path from a Deployment to a Service, the mechanisms of each component, typical failure points, and a systematic troubleshooting methodology.
Kubernetes Core Resource Objects
Workload Resources vs. Supporting Resources
Kubernetes groups its API objects into two broad categories:
Workload resources – objects that run application containers (e.g., Deployment, StatefulSet, DaemonSet, Job, CronJob).
Supporting resources – objects that provide networking, configuration, storage, and identity (e.g., Service, ConfigMap, Secret, PersistentVolumeClaim, ServiceAccount).
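To see how a cluster exposes these object types, kubectl can list the registered API resources. A quick, non-exhaustive way to browse both categories:
# Workload resource types live mostly in the "apps" API group
kubectl api-resources --api-group=apps
# Job and CronJob live in the "batch" group
kubectl api-resources --api-group=batch
# Supporting resources such as Service, ConfigMap, Secret, and PersistentVolumeClaim are in the core ("") group
kubectl api-resources --api-group=""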
Deployment Essentials
A Deployment manages a ReplicaSet, which in turn creates Pods. Its main responsibilities are declarative updates, version management, and rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27-alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
          requests:
            memory: "128Mi"
            cpu: "100m"
        livenessProbe:
          httpGet:
            path: "/"
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: "/"
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
ReplicaSet Control Loop
The ReplicaSet controller follows a classic control loop:
Observer – watches Pod events.
Interpreter – compares observed state with the desired replica count ( spec.replicas).
Analyzer – decides whether to create or delete Pods.
Effector – issues API calls to create or delete Pods.
This loop provides self‑healing: when a Pod crashes, the controller creates a replacement.
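The self-healing behaviour is easy to observe: delete one Pod and the controller immediately replaces it. A minimal check (the jsonpath expression simply picks the first matching Pod):
# Delete one Pod managed by the ReplicaSet
kubectl delete pod $(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}')
# Watch a replacement Pod being created to restore spec.replicas
kubectl get pods -l app=nginx -w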
Pod Scheduling Process
The scheduling process has two phases:
Control‑plane decision – kube-scheduler selects a node based on NodeAffinity, PodAffinity/AntiAffinity, taints/tolerations, resource requests, and priority.
Node‑side container start – the kubelet on the chosen node pulls the image via the CRI, creates the container’s namespaces, applies resource limits, and starts the main process.
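These scheduling constraints are expressed in the Pod spec. A minimal sketch combining node affinity and a toleration (the node label disktype=ssd and the taint dedicated=web are illustrative values, not taken from the manifests in this article):
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]   # only schedule onto nodes labelled disktype=ssd
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"      # tolerate nodes tainted dedicated=web:NoSchedule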
# View scheduler config (the ConfigMap name varies by distribution and may not exist at all on kubeadm clusters)
kubectl get configmap kube-scheduler-config -n kube-system -o yaml
# Show scheduling events for a Pod
kubectl describe pod POD_NAME | grep -A 10 "Events:"
# Inspect node resource availability
kubectl describe node NODE_NAME | grep -A 5 "Allocated resources"
Service Network Abstraction
Four Service types are supported:
ClusterIP – internal virtual IP (default).
NodePort – static port on each node (30000‑32767).
LoadBalancer – provisions a cloud‑provider load balancer.
ExternalName – maps the Service to an external DNS name.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
Each node runs kube-proxy, which implements Service forwarding in one of three modes:
iptables – suitable for small clusters.
IPVS – higher performance for large clusters.
nftables – next‑generation replacement for the iptables backend, available in newer Kubernetes releases.
# Show kube‑proxy mode (on kubeadm clusters the ConfigMap is named "kube-proxy")
kubectl get configmap kube-proxy-config -n kube-system -o yaml | grep mode
# Inspect iptables rules created by kube‑proxy
sudo iptables -t nat -L KUBE-SERVICES -n -v | head -20
# Inspect IPVS rules
sudo ipvsadm -L -n | grep KUBE
Complete Data Flow from Deployment to Service
End‑to‑End Workflow
The path consists of seven steps:
User creates a Deployment via kubectl or the API.
The Deployment controller creates a ReplicaSet.
The ReplicaSet controller creates Pods according to its spec.
The scheduler assigns each Pod to a node.
The node’s kubelet pulls images and starts containers.
User creates a Service whose selector matches the Pods.
kube-proxy on each node programs forwarding rules so traffic sent to the Service reaches the Pods.
Demonstration
Create Deployment
# Apply the Deployment manifest
kubectl apply -f deployment.yaml
# Verify status
kubectl get deployment nginx-deployment
kubectl describe deployment nginx-deployment
# List ReplicaSet and Pods
kubectl get replicaset
kubectl get rs -l app=nginx
kubectl get pod -l app=nginx
kubectl get pods -o wide -l app=nginx
Create Service
# Apply the Service manifest
kubectl apply -f service.yaml
# Verify Service and Endpoints
kubectl get svc nginx-service
kubectl describe svc nginx-service
kubectl get endpoints nginx-service
kubectl describe endpoints nginx-service
Key Configuration Details
Selector Mechanics – the Deployment/ReplicaSet selector must match the Pod template's labels, and the Service selector must match the Pod labels; a mismatch at any point breaks the workflow.
# Deployment selector expects "app: nginx"
selector:
  matchLabels:
    app: nginx
# Pod template with mismatched label
metadata:
  labels:
    app: nginx-web
# Service selector that does not match any Pod
selector:
  app: nginx-proxy
Service Port Parameters
port – the Service's exposed port.
targetPort – the backend container port (defaults to port if omitted).
nodePort – the external port for NodePort Services.
protocol – TCP or UDP.
ports:
- name: http
  protocol: TCP
  port: 80          # Service port
  targetPort: 8080  # Container port
  nodePort: 30080   # Only used with type NodePort (auto-assigned if omitted)
Rolling Update and Rollback Mechanisms
Rolling Update Principles
The default RollingUpdate strategy replaces old Pods with new ones while keeping the application available.
Parameters
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
Update Process Example (replicas=3, maxUnavailable=1, maxSurge=1)
Create a new Pod (total Pods = 4).
Wait for the new Pod to become Ready, then delete one old Pod (total = 3).
Repeat until all Pods run the new version.
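To trigger such a rollout and watch it step through these states, a typical sequence (using the nginx:1.28-alpine image also used later in this article) is:
# Change the container image to start a rolling update
kubectl set image deployment/nginx-deployment nginx=nginx:1.28-alpine
# Follow the rollout until it completes
kubectl rollout status deployment/nginx-deployment
# Watch old Pods being replaced by new ones
kubectl get pods -l app=nginx -w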
# View rollout history
kubectl rollout history deployment/nginx-deployment
# Inspect a specific revision
kubectl rollout history deployment/nginx-deployment --revision=2
# Check current rollout status
kubectl rollout status deployment/nginx-deployment
# Show Deployment events
kubectl describe deployment nginx-deployment | grep -A 10 "Events:"
Rollback Operations
# Roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
# Roll back to a specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Verify rollback status
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx
Rolling back reverts the Deployment's Pod template to the selected revision; the Deployment controller then scales the existing ReplicaSet that matches that revision back up rather than creating a new one.
Pause and Resume Updates
# Pause the rollout
kubectl rollout pause deployment/nginx-deployment
# Apply a partial update (e.g., change image)
kubectl set image deployment/nginx-deployment nginx=nginx:1.28-alpine
# Resume the rollout
kubectl rollout resume deployment/nginx-deployment
Network Communication Mechanisms
Container‑to‑Container Communication
Containers in the same Pod share a network namespace provided by the pause (infrastructure) container, so they reach each other over localhost. The CNI plugin then creates a bridge (e.g., cni0) that connects all Pods on the node.
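A minimal sketch illustrating the shared namespace (the Pod name shared-netns-demo and the busybox sidecar are illustrative, not part of the earlier manifests):
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.27-alpine
    ports:
    - containerPort: 80
  - name: probe
    image: busybox:1.36
    # 127.0.0.1 reaches the nginx container because both containers share the Pod's network namespace
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 > /dev/null && echo reachable; sleep 10; done"]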
# Show bridge interfaces
ip link show type bridge
# Inspect the cni0 bridge
ip addr show cni0
# View routing table
ip route
Inter‑Node Pod Communication
Overlay networks (Flannel, Calico, Cilium) enable Pods on different nodes to communicate.
Flannel – VXLAN encapsulation over UDP.
Calico – BGP‑based direct routing.
Cilium – eBPF‑based fine‑grained control.
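Which plugin a cluster uses can usually be identified from the kube-system Pods and the CNI configuration on a node (Pod names vary by installation method):
# Look for the CNI plugin's Pods (names differ between installers)
kubectl get pods -n kube-system -o wide | grep -Ei 'flannel|calico|cilium'
# On a node: inspect the CNI configuration files
cat /etc/cni/net.d/*.conf*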
Service Access Path
Pod resolves the Service name to a ClusterIP via CoreDNS.
Pod sends traffic to the ClusterIP. kube-proxy intercepts the request and rewrites the destination to a backend Pod IP using iptables/IPVS rules.
Traffic reaches the selected Pod.
# Open a debugging Pod
kubectl run -it --rm debug --image=busybox:1.36 -- sh
# Test DNS resolution
nslookup kubernetes.default
nslookup nginx-service.default.svc.cluster.local
# Test Service access
wget -qO- http://nginx-service:80
Common Failures and Troubleshooting
Pods Fail to Start
Image Pull Errors
# Check Pod status
kubectl get pod POD_NAME -o wide
# View detailed events
kubectl describe pod POD_NAME | grep -A 20 "Events:"
# Typical error messages
# ErrImagePull – image cannot be pulled
# ImagePullBackOff – repeated failures
# InvalidImageName – malformed image name
Resolution: verify that the image exists, try pulling it manually on the node, or reference registry credentials through imagePullSecrets for private registries.
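For private registries, a sketch of wiring up credentials (registry.example.com, USER, and PASSWORD are placeholders):
# Create a docker-registry Secret holding the credentials
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=USER \
  --docker-password=PASSWORD
The Pod spec (or the Deployment's Pod template) then references it:
spec:
  imagePullSecrets:
  - name: regcred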
Insufficient Resources
# Inspect node resources
kubectl describe node NODE_NAME
# Show allocated resources
kubectl describe node NODE_NAME | grep -A 5 "Allocated resources"
# Examine Pod resource requests
kubectl get pod POD_NAME -o jsonpath='{.spec.containers[*].resources}'
Scheduling Failures
# Look for scheduling errors in events
kubectl describe pod POD_NAME | grep -A 5 "Events:"
# Common messages
# FailedScheduling – no suitable node
# Example: 0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready:...}
# Example: 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector
Service Inaccessibility
Empty Endpoints
# Show Service and Endpoints
kubectl get svc nginx-service
kubectl get endpoints nginx-service
# Verify selector matches Pods
kubectl get pods -l app=nginx
kubectl describe svc nginx-service | grep Selector
Typical cause: mismatched selectors between Deployment and Service.
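A quick way to compare the two sides and, if needed, fix the Service in place (the patch value assumes the Pods carry app: nginx):
# Labels on the Pods created by the Deployment
kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.metadata.labels}'
# Selector used by the Service
kubectl get svc nginx-service -o jsonpath='{.spec.selector}'
# Align the Service selector with the Pod labels
kubectl patch svc nginx-service -p '{"spec":{"selector":{"app":"nginx"}}}'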
kube‑proxy Issues
# View kube‑proxy logs
kubectl logs -n kube-system -l k8s-app=kube-proxy
# Inspect kube‑proxy ConfigMap
kubectl get configmap kube-proxy-config -n kube-system -o yaml
# Restart kube‑proxy if necessary (use with caution)
kubectl rollout restart daemonset kube-proxy -n kube-system
DNS Problems
# Verify CoreDNS pods are running
kubectl get pod -n kube-system -l k8s-app=kube-dns
# Test DNS from a Pod
kubectl exec -it POD_NAME -- nslookup nginx-service
# Check /etc/resolv.conf inside the Pod
kubectl exec -it POD_NAME -- cat /etc/resolv.conf
Network Connectivity Issues
Cross‑Node Communication Failures
# Ping target Pod IP
ping TARGET_POD_IP
# Trace route to target Pod IP
traceroute TARGET_POD_IP
# Verify CNI plugin status
ip link show | grep cni
cat /etc/cni/net.d/10-flannel.conflist
NetworkPolicy Restrictions
# List NetworkPolicies
kubectl get networkpolicy -o yaml
# Example policy allowing traffic from the "frontend" namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 80
Rolling Update Problems
Update Stalls
# Check Deployment status
kubectl get deployment nginx-deployment
# Check ReplicaSet status
kubectl get rs -l app=nginx
# Find non‑Running Pods
kubectl get pod -l app=nginx | grep -v Running
# Force a rollback if needed
kubectl rollout undo deployment/nginx-deployment
Pods Not Ready
# Inspect Pod readiness conditions
kubectl describe pod POD_NAME | grep -A 10 "Conditions:"
# Test health endpoint from inside the Pod
kubectl exec -it POD_NAME -- curl -k http://localhost:80/healthz
# Look for probe failure reasons
kubectl describe pod POD_NAME | grep -A 5 "Liveness" | grep "Reason:"
Best Practices
Resource Quotas and Limits
spec:
  containers:
  - name: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "500m"
Set requests to average usage so the scheduler can place Pods correctly.
Set limits to peak usage to prevent a single container from exhausting node resources.
Keep memory limits close to requests to reduce node memory overcommitment and the risk of OOM kills under pressure.
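Namespace-wide defaults can also be enforced with a LimitRange; a minimal sketch whose values mirror the container settings above (the object name default-container-limits is illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container omits requests
      cpu: "100m"
      memory: "128Mi"
    default:               # applied when a container omits limits
      cpu: "500m"
      memory: "256Mi"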
Health Check Configuration
spec:
  containers:
  - name: nginx
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
Adjust initialDelaySeconds based on the application's start‑up time to avoid premature restarts.
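For applications with long or variable start-up times, a startupProbe is often a better fit than a large initialDelaySeconds; a sketch reusing the /healthz path from the liveness probe above:
startupProbe:
  httpGet:
    path: /healthz
    port: 80
  failureThreshold: 30   # combined with periodSeconds, allows up to 150 s to start
  periodSeconds: 5
While the startup probe has not yet succeeded, liveness and readiness checks are held back, so a slow-starting container is not restarted prematurely.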
Label Management Strategy
metadata:
  labels:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: nginx-prod
    app.kubernetes.io/version: "1.27"
    app.kubernetes.io/component: webserver
    app.kubernetes.io/part-of: frontend
    app.kubernetes.io/managed-by: kubectl
Standardized labels enable environment isolation, version tracking, and team ownership.
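These labels become useful mainly through selectors; for example:
# All objects that belong to the frontend application
kubectl get all -l app.kubernetes.io/part-of=frontend
# Only the Pods of a specific instance
kubectl get pods -l app.kubernetes.io/instance=nginx-prod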
Deployment Strategies
Canary – gradually shift traffic to the new version.
Blue‑Green – maintain two complete environments and switch instantly.
RollingUpdate – default for stateless services.
# Conservative rolling update (zero unavailable, 10% surge) – a building block for canary-style rollouts
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 10%
Logging and Monitoring
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging   # must match the selector above
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
Performance and Scalability
Scheduler Performance Tuning
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 50   # Score only half of the feasible nodes
podInitialBackoffSeconds: 10
podMaxBackoffSeconds: 20
profiles:
- schedulerName: default-scheduler
kube‑proxy Performance
# Check the current mode (on kubeadm clusters the ConfigMap is named "kube-proxy")
kubectl get configmap kube-proxy-config -n kube-system -o yaml | grep mode
# Switch to IPVS mode by setting "mode: ipvs" in the ConfigMap, then restart the kube-proxy DaemonSet
kubectl edit configmap kube-proxy-config -n kube-system
Horizontal Pod Autoscaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
Security Configuration
RBAC Least‑Privilege
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Pod Security Standards
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
Conclusion
The Deployment‑to‑Service workflow is the backbone of Kubernetes operations. Mastering declarative updates, the ReplicaSet control loop, Pod scheduling, and Service networking provides a solid foundation for troubleshooting everyday issues. Systematic, step‑by‑step verification—starting from resource expectations, moving through scheduling, container start‑up, and finally network connectivity—combined with best‑practice configurations for resources, health checks, labeling, and security dramatically reduces failure rates in production environments.