Master Elasticsearch Snapshots and Security: Step‑by‑Step NFS Backup Guide
This guide walks you through configuring Elasticsearch snapshot backups on shared NFS storage, exporting and importing data with elasticdump, securing the cluster with TLS certificates and built-in user authentication, and restricting Kibana access with spaces and roles, providing complete commands and configuration snippets for each step.
1. Official Snapshot Backup and Restore
Configure Elasticsearch nodes to share a common NFS directory for snapshot storage.
Environment Requirements
All Elasticsearch nodes must mount the same shared NFS directory.
<code>yum install nfs-utils -y
# create elasticsearch user and group
groupadd elasticsearch -g 996
useradd elasticsearch -g 996 -u 998 -M -s /sbin/nologin
cat > /etc/exports <<'EOF'
/es-nfs-data 10.0.0.0/24(rw,sync,all_squash,anonuid=998,anongid=996)
EOF
systemctl restart nfs    # on newer distros the unit is named nfs-server
showmount -e 10.0.0.122</code>
Install NFS Client on ES Nodes
<code>cat > nfs-client.sh <<'EOF'
yum install nfs-utils -y
mkdir -p /es-client-data
mount -t nfs 10.0.0.122:/es-nfs-data /es-client-data
EOF
sh nfs-client.sh
# verify mount
df -h | grep es-client-data</code>
Enable Snapshot on ES Nodes
<code># add to each node's elasticsearch.yml
path.repo: /es-client-data/
cluster.name: yuchao_es
node.name: es-node3
path.data: /var/lib/elasticsearch/
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: true
network.host: 127.0.0.1,10.0.0.20
http.port: 9200
discovery.seed_hosts: ["10.0.0.18","10.0.0.19","10.0.0.20"]
cluster.initial_master_nodes: ["10.0.0.18"]
</code>
Verify Restart
Restart the Elasticsearch service and ensure it starts without errors.
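One way to script that check is a small poll loop against the cluster health API; a sketch (the URL matches the example cluster in this guide, adjust for yours):

```shell
# Poll _cluster/health until the cluster reports green.
wait_for_green() {
    es_url=$1
    tries=0
    while [ $tries -lt 30 ]; do
        status=$(curl -s "$es_url/_cluster/health" |
            grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
        [ "$status" = "green" ] && return 0
        tries=$((tries + 1))
        sleep 2
    done
    return 1
}

# Example:
# systemctl restart elasticsearch
# wait_for_green http://10.0.0.18:9200 && echo "cluster is green"
```

On a single-node test cluster, `yellow` may be the steady state; relax the comparison accordingly.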
Register Snapshot Repository
<code>PUT /_snapshot/my_backup
{
"type": "fs",
"settings": {"location": "/es-client-data/my_backup_location", "compress": true}
}
GET /_snapshot/my_backup</code>
Create Snapshots
<code># Full snapshot
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
# Snapshot of specific indices
PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
{
"indices": "t1,t2",
"ignore_unavailable": true,
"include_global_state": false
}</code>
View Snapshot Information
<code>GET /_snapshot
GET /_snapshot/my_backup/
GET /_snapshot/my_backup/snapshot_1
GET /_snapshot/my_backup/snapshot_2</code>
Restore Data
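A restore into an existing open index of the same name fails. If you want to restore t2 in place rather than under a new name, delete (or close) the live index first:

```
DELETE /t2
```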
<code># Restore t2 from snapshot_2, renamed to restored_index_2 so it cannot clash with the live index
POST /_snapshot/my_backup/snapshot_2/_restore
{
"indices": "t2",
"ignore_unavailable": true,
"include_global_state": false,
"rename_pattern": "t(.+)",
"rename_replacement": "restored_index_$1",
"include_aliases": false
}</code>
Snapshot Naming with Dates (Not Recommended)
<code>PUT /_snapshot/my_backup/<snapshot-{now/d}>
</code>2. Third‑Party Backup Tools (elasticdump)
Install Node.js
<code>wget https://nodejs.org/dist/v10.16.3/node-v10.16.3-linux-x64.tar.xz
tar -xf node-v10.16.3-linux-x64.tar.xz
ln -s node-v10.16.3-linux-x64/ node
export PATH=/opt/node/bin:$PATH
npm install elasticdump -g
elasticdump --version</code>
Backup Commands
<code># Export index t1 to JSON
elasticdump \
--input=http://10.0.0.18:9200/t1 \
--output=/es-nfs-data/t1.json \
--type=data
# Export and compress
elasticdump \
--input=http://10.0.0.18:9200/t2 \
--output=- | gzip > /es-nfs-data/t2.json.gz</code>
Restore Commands
<code># Import JSON back into Elasticsearch (gunzip t2.json.gz first if you used the compressed export)
elasticdump \
--input=/es-nfs-data/t2.json \
--output=http://10.0.0.18:9200/t2 \
--type=data</code>
Batch Backup Script
<code>#!/bin/bash
indexs=$(curl -s 10.0.0.18:9200/_cat/indices | awk '{print $3}' | grep -v '^\.')
for i in $indexs; do
elasticdump \
--input=http://10.0.0.18:9200/$i \
--output=/es-nfs-data/$i.json \
--type=data
done</code>Password‑Protected Elasticsearch
<code>elasticdump \
--input=http://user:[email protected]:9200/t2 \
--output=/es-nfs-data/t2.json \
--type=data</code>
3. Elasticsearch Security Configuration
Create Certificates
<code># Generate a CA
/usr/share/elasticsearch/bin/elasticsearch-certutil ca
# Generate node certificates using the CA
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
</code>
Distribute Certificates
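elasticsearch-certutil writes the .p12 files into the Elasticsearch home directory by default, so stage them into the config directory before copying to the other nodes. A sketch (source path and ownership values are assumptions based on a package install; run as root):

```shell
# Stage generated keystores into the Elasticsearch config directory.
stage_certs() {
    src=$1   # e.g. /usr/share/elasticsearch
    dst=$2   # e.g. /etc/elasticsearch/certs
    mkdir -p "$dst"
    cp "$src"/elastic-*.p12 "$dst"/
    chmod 660 "$dst"/*.p12
}

# stage_certs /usr/share/elasticsearch /etc/elasticsearch/certs
# chown -R root:elasticsearch /etc/elasticsearch/certs
```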
<code>scp -r /etc/elasticsearch/certs [email protected]:/etc/elasticsearch/
scp -r /etc/elasticsearch/certs [email protected]:/etc/elasticsearch/</code>Enable Security in elasticsearch.yml
<code>xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
# elastic-certificates.p12 is the node keystore generated by certutil cert above
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12</code>
Set Built‑in User Passwords
<code>/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive</code>Configure Kibana to Use the New User
<code>elasticsearch.username: "kibana_system"
elasticsearch.password: "123123"
</code>
Create Kibana Space and Role for Limited Access
Define a space (e.g., "dev") and a role that only allows read access to index
t2, then assign the role to a new user.
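The role and user below are created through the Elasticsearch security API; the space itself can be created in the Kibana UI, or scripted against Kibana's Spaces API, roughly (URL and credentials are assumptions for this cluster; the kbn-xsrf header is required by Kibana's HTTP APIs):

```shell
# Create the "dev" space via the Kibana Spaces API.
create_space() {
    kibana_url=$1   # e.g. http://10.0.0.18:5601
    auth=$2         # e.g. elastic:123123
    curl -s -u "$auth" -X POST "$kibana_url/api/spaces/space" \
        -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
        -d '{"id":"dev","name":"dev"}'
}

# create_space http://10.0.0.18:5601 elastic:123123
```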
<code># Create role: read-only on index t2, plus read access to the "dev" Kibana space
PUT /_security/role/dev_role
{
"indices": [{"names": ["t2"], "privileges": ["read"]}],
"applications": [{"application": "kibana-.kibana", "privileges": ["space_read"], "resources": ["space:dev"]}]
}
# Create user and assign role
POST /_security/user/dev_user
{
"password": "devpass",
"roles": ["dev_role"]
}
</code>
Result
The user can log into Kibana, see only the designated space, and access only the allowed index.
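The index restriction can also be confirmed from the command line; a quick check (sketch; host and credentials from the examples above) should return 200 for t2 and a 403 security error for any other index:

```shell
# Return the HTTP status dev_user gets when searching an index.
search_status() {
    auth=$1; es_url=$2; index=$3
    curl -s -o /dev/null -w '%{http_code}' \
        -u "$auth" "$es_url/$index/_search?size=0"
}

# search_status dev_user:devpass http://10.0.0.18:9200 t2   # expect 200
# search_status dev_user:devpass http://10.0.0.18:9200 t1   # expect 403
```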
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.