How to Precisely Recover a Single Kubernetes Resource from an etcd Snapshot in 5 Steps
This guide explains how to extract and restore a specific Kubernetes resource from an etcd snapshot using a lightweight, step‑by‑step process that avoids full‑cluster recovery, minimizes downtime, and works with tools like etcdctl, auger, and kubectl.
Introduction
etcd stores the state of every Kubernetes object. When a critical resource such as a ConfigMap, Secret, or Deployment is accidentally deleted, a full cluster restore is often overkill. This article shows how to surgically recover only the missing resource from an etcd snapshot, reducing downtime and impact.
Prerequisites
etcd v3.4+ (binary available from the official releases)
etcdctl – command‑line client for etcd
auger – tool to decode etcd binary payloads into YAML
kubectl – Kubernetes command‑line tool
A snapshot file, e.g., live-cluster-snapshot.db

Work in a temporary environment and create a clean snapshot first:

etcdctl snapshot save live-cluster-snapshot.db

Step-by-Step Recovery Process
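Before walking through the steps, it helps to confirm the required tools are on PATH. A minimal check (the function name is illustrative, not part of any of these tools):

```shell
#!/bin/sh
# Fail fast if any required tool is missing from PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return $missing
}

# Verify the recovery toolchain before starting.
check_tools etcd etcdctl auger kubectl || echo "install the missing tools first"
```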
Step 1: Prepare the Snapshot
If the snapshot is compressed, decompress it and then restore it to a separate data directory.
gunzip live-cluster-snapshot.db.gz
etcdctl snapshot restore live-cluster-snapshot.db --data-dir=recovery-etcd

Step 2: Start a Local etcd Instance
etcd --data-dir=recovery-etcd --listen-client-urls=http://localhost:2379 --advertise-client-urls=http://localhost:2379

Verify the instance is running:
etcdctl --endpoints=localhost:2379 endpoint status

Step 3: Locate and Extract the Resource
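Registry keys for built-in resources follow a regular layout, so they are easy to construct in a script. A tiny helper (the function name is illustrative):

```shell
#!/bin/sh
# Build an etcd registry key for a namespaced Kubernetes resource.
# Layout for built-in types: /registry/<plural-resource>/<namespace>/<name>
registry_key() {
  resource=$1; namespace=$2; name=$3
  printf '/registry/%s/%s/%s\n' "$resource" "$namespace" "$name"
}

registry_key configmaps production app-config
# → /registry/configmaps/production/app-config
```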
etcd stores ConfigMaps under keys like /registry/configmaps/<namespace>/<name>. List keys for the production namespace:
etcdctl --endpoints=localhost:2379 get --prefix "/registry/configmaps/production" --keys-only

Extract and decode the ConfigMap:
etcdctl --endpoints=localhost:2379 get /registry/configmaps/production/app-config --print-value-only | auger decode > app-config.yaml

Example app-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  api-url: "https://api.example.com"
  log-level: "debug"

Step 4: Apply to the Live Cluster
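Objects decoded from etcd often carry server-set metadata (uid, resourceVersion, creationTimestamp) that can cause apply errors. A rough line-based filter to strip those fields first (a sketch; for production use, prefer a YAML-aware tool such as yq):

```shell
#!/bin/sh
# Strip common server-set metadata lines from a decoded manifest.
# NOTE: this line-based filter is a rough sketch; it assumes the fields
# appear as simple "key: value" lines. Prefer a YAML-aware tool for real use.
strip_server_fields() {
  grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp|generation):'
}

# Example: filter a decoded manifest before applying it.
cat <<'EOF' | strip_server_fields
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
  uid: 1b4e28ba-2fa1-11d2-883f-0016d3cca427
  resourceVersion: "123456"
data:
  log-level: "debug"
EOF
```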
Dry‑run the apply to ensure it succeeds:
kubectl apply -f app-config.yaml --dry-run=server

If the dry-run passes, apply for real:

kubectl apply -f app-config.yaml

Expected output:
configmap/app-config created

Step 5: Clean Up
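Note that pkill etcd (below) terminates every etcd process on the host. If other instances might be running, it is safer to record the recovery instance's PID at startup and kill only that one. A self-contained sketch of the pattern, with sleep standing in for the etcd command:

```shell
#!/bin/sh
# Pattern: start the recovery process in the background, record its PID,
# and kill exactly that process during cleanup. `sleep` stands in for
# `etcd --data-dir=recovery-etcd ...` so the sketch runs anywhere.
sleep 300 &
recovery_pid=$!
echo "$recovery_pid" > recovery-etcd.pid

# ... perform the recovery steps ...

# Clean up only the process we started, then remove the PID file.
kill "$(cat recovery-etcd.pid)"
rm -f recovery-etcd.pid
```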
pkill etcd
rm -rf recovery-etcd app-config.yaml

Quick Reference for etcd Paths
ConfigMaps – /registry/configmaps/<namespace>/<name>
Secrets – /registry/secrets/<namespace>/<name>
Deployments – /registry/deployments/<namespace>/<name>
Pods – /registry/pods/<namespace>/<name>
ServiceAccounts – /registry/serviceaccounts/<namespace>/<name>
Custom resources (CRD instances) – /registry/<group>/<resource>/<namespace>/<name>

Advanced Scenarios
Cross‑Namespace Recovery
cat app-config.yaml | yq eval '.metadata.namespace = "dev"' | kubectl apply -f -

Encrypted Clusters (KMS)
If encryption at rest is enabled, values in etcd are encrypted by the kube-apiserver before they are written, so auger cannot decode them directly. Decrypt them first using the cluster's EncryptionConfiguration keys (or the configured KMS provider); see the etcd security guide for transport-level encryption.
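For reference, encryption at rest is configured on the kube-apiserver with an EncryptionConfiguration like the fragment below (illustrative, not from this article; the key material is a placeholder). Secrets written while such a provider is active are stored encrypted in etcd:

```yaml
# Illustrative kube-apiserver EncryptionConfiguration; the base64 key
# below is a placeholder, not real key material.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
```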
Bulk Recovery
Decode each key separately and join the results with --- so the output is a valid multi-document YAML file:

for key in $(etcdctl --endpoints=localhost:2379 get --prefix "/registry/configmaps/production" --keys-only); do
  etcdctl --endpoints=localhost:2379 get "$key" --print-value-only | auger decode
  echo "---"
done > all-cm.yaml

Troubleshooting Tips
etcd not starting – ensure no other etcd process is running.
Connection refused – verify etcd is listening on localhost:2379.
YAML not applied – check the manifest schema and resource references.
Apply conflicts – decoded manifests include server-set fields such as uid and resourceVersion; strip them before applying, or use kubectl replace, or delete the object and re-apply.
Conclusion
Precise recovery of individual resources from an etcd snapshot lets Kubernetes administrators reduce downtime, avoid collateral damage, and respond to incidents with confidence. By following the five‑step workflow, you can restore a lost ConfigMap (or other object) in minutes without a full‑cluster restore.
References
etcd releases: https://github.com/etcd-io/etcd/releases/tag/v3.4.34
etcd encryption guide: https://etcd.io/docs/v3.4/op-guide/security/
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.