
How to Deploy NFS, ECK, Elasticsearch, Kibana, and Filebeat on Kubernetes with Helm

This guide walks through installing NFS, configuring a dynamic storage class via Helm, deploying the Elastic Cloud on Kubernetes (ECK) operator, setting up Elasticsearch and Kibana clusters, and installing Filebeat for log collection, including system tuning and Kubernetes manifests for a production‑ready environment.


1. Deploy NFS

Install the NFS utilities on all nodes, create a shared directory on the master node, configure /etc/exports to export the directory, and enable and start the rpcbind and NFS services.

<code># Install the NFS utilities on all nodes
yum install -y nfs-utils

# Create shared directory on master
mkdir -pv /data/kubernetes

# Export configuration
cat > /etc/exports <<'EOF'
/data/kubernetes *(rw,no_root_squash)
EOF

# Enable and start NFS (on CentOS 7 the server unit is nfs-server)
systemctl enable --now rpcbind nfs-server
</code>
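Before wiring the export into Kubernetes, it is worth confirming it is actually visible. A quick check, where `<master-ip>` is a placeholder for your master node's address:

```shell
# List the exports published by the NFS server (run from any node)
showmount -e <master-ip>

# Optionally mount it from a worker node to prove end-to-end connectivity
mount -t nfs <master-ip>:/data/kubernetes /mnt && umount /mnt
```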

2. Deploy NFS dynamic storage with Helm

Create a dedicated namespace, pull the nfs-subdir-external-provisioner chart, customize values.yaml (e.g., NFS server address, shared path, storage class name, default class), and install the chart.

<code># Create namespace
kubectl create ns nfs-sc-default

# Add the chart repository and pull the chart
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --untar --untardir /root/nfs

# Install with custom values
helm install nfs-subdir-external-provisioner /root/nfs/nfs-subdir-external-provisioner -f values.yaml -n nfs-sc-default
</code>

Verify the storage class:

<code># List storage classes
kubectl get sc
</code>
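To confirm dynamic provisioning works end to end, a throwaway PVC can be created against the new class. The class name below assumes the chart default, nfs-client; adjust it to whatever your values.yaml sets:

```shell
# Create a test claim against the dynamic storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client   # chart default; match your values.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

# The claim should reach the Bound phase within a few seconds
kubectl get pvc test-claim

# Clean up
kubectl delete pvc test-claim
```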

3. ECK Overview

ECK (Elastic Cloud on Kubernetes) is a Kubernetes operator that manages Elastic Stack components (Elasticsearch, Kibana, APM Server, Beats). It is built on Custom Resource Definitions (CRDs) and controllers, and simplifies deployment, scaling, and lifecycle management of Elasticsearch clusters.

Key features

Rapid deployment and monitoring of multiple clusters

Easy scaling of cluster size and storage

Rolling upgrades

TLS security

Hot‑warm‑cold architecture with zone awareness

Supported versions (example: ECK 2.7)

Kubernetes 1.22‑1.26

OpenShift 4.8‑4.12

GKE, AKS, EKS

Helm 3.2.0+

Elasticsearch 6.8+, 7.1+, 8+

Kibana 7.7+, 8+

Beats 7.0+, 8+

4. Cluster Deployment and Planning

4.1 Component versions

OS: CentOS 7.9, Kernel 5.4.260, Kubernetes v1.23.17, Docker 20.10.9, kube‑vip 0.6.0, ECK 2.7.0, ELK 8.9.1.

4.2 System preparation

Increase file descriptor limits and set vm.max_map_count to 262144 (temporarily with sysctl -w, or permanently via /etc/sysctl.conf).

<code># Persist the open-file limit and apply it to the current shell
echo 'ulimit -n 65535' >> /etc/profile
source /etc/profile

# Update limits.conf
cat >> /etc/security/limits.conf <<EOF
* soft nofile 65535
* hard nofile 65535
EOF

# Set vm.max_map_count (temporary)
sysctl -w vm.max_map_count=262144
# Persist it in /etc/sysctl.conf
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=262144
EOF
sysctl -p
</code>
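A quick sanity check that both settings took effect after the changes above:

```shell
ulimit -n                # expect 65535 in a fresh shell
sysctl vm.max_map_count  # expect vm.max_map_count = 262144
```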

4.3 Deploy ECK

Install CRDs and the operator:

<code># Install CRDs
wget https://download.elastic.co/downloads/eck/2.7.0/crds.yaml
kubectl create -f crds.yaml

# Install operator
wget https://download.elastic.co/downloads/eck/2.7.0/operator.yaml
kubectl apply -f operator.yaml
</code>
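The operator runs as a StatefulSet in the elastic-system namespace; before creating any Elastic resources, confirm it is up:

```shell
# The elastic-operator pod should be Running
kubectl -n elastic-system get pods

# Optionally follow its log to confirm a clean start
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```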

4.3.1 Deploy Elasticsearch

Create elasticsearch.yaml with the desired version and node settings, then apply it.

<code>cat <<EOF > elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.9.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: master
    count: 3
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh','-c','sysctl -w vm.max_map_count=262144']
EOF

kubectl apply -f elasticsearch.yaml
</code>

Check health and retrieve the service endpoint:

<code># Get Elasticsearch status
kubectl get elasticsearch

# Get HTTP service
kubectl get service elasticsearch-es-http
</code>

Obtain the default elastic user password from the elasticsearch-es-elastic-user secret and test the cluster with curl or a port-forward.
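Concretely, the steps look like this. ECK stores the password in a secret named `<cluster-name>-es-elastic-user`, and plain HTTP works here because the manifest above disables the self-signed certificate:

```shell
# Read the generated elastic superuser password
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')

# Forward the HTTP service locally and query cluster health
kubectl port-forward service/elasticsearch-es-http 9200 &
curl -u "elastic:$PASSWORD" "http://localhost:9200/_cluster/health?pretty"
```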

4.3.2 Deploy Kibana

Create kibana.yaml referencing the Elasticsearch cluster and apply it.

<code>cat <<EOF > kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.9.1
  count: 1
  elasticsearchRef:
    name: elasticsearch
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: I18N_LOCALE
          value: "zh-CN"
EOF

kubectl apply -f kibana.yaml
</code>

Expose Kibana via the generated kibana-kb-http service (ClusterIP by default, or patch it to NodePort), or access it through a port-forward.
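For example, checking the resource and reaching the UI via a port-forward (Kibana keeps its self-signed TLS by default, so the scheme is https):

```shell
# Check Kibana health and the generated service
kubectl get kibana
kubectl get service kibana-kb-http

# Forward locally, then browse to https://localhost:5601 and log in as elastic
kubectl port-forward service/kibana-kb-http 5601
```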

4.3.3 Deploy Filebeat

Define a ConfigMap with the Filebeat configuration, then create a DaemonSet, ServiceAccount, Role, RoleBinding, and ClusterRole to collect container logs and ship them to Elasticsearch.

<code># ConfigMap
cat <<'EOF' > filebeat-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
EOF

# DaemonSet (simplified)
cat <<EOF > filebeat-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.9.1  # match the stack version
        args: ["-c","/etc/filebeat.yml","-e"]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-es-elastic-user
              key: elastic
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
EOF

kubectl apply -f filebeat-config.yaml
kubectl apply -f filebeat-daemonset.yaml
</code>

After deployment, monitor the pods and verify that logs are indexed in Elasticsearch.
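A quick verification pass, reading the elastic password from the elasticsearch-es-elastic-user secret generated by ECK:

```shell
# One Filebeat pod should be Running per node
kubectl get pods -l k8s-app=filebeat

# Confirm Filebeat data is arriving in Elasticsearch
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
kubectl port-forward service/elasticsearch-es-http 9200 &
curl -u "elastic:$PASSWORD" "http://localhost:9200/_cat/indices?v" | grep filebeat
```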

Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.