
Deploying the EFK Stack with Local‑Volume StorageClass on OpenShift

This guide explains how to prepare resources, create a local‑volume storage class, install the Elasticsearch and Cluster Logging operators, and configure a persistent EFK stack on OpenShift, including YAML definitions, command‑line steps, and best‑practice notes on node selectors and tolerations.

DevOps Cloud Academy

Overview – Deploying an EFK (Elasticsearch, Fluentd, Kibana) stack in production requires sufficient memory, persistent storage, and Elasticsearch pinned to dedicated nodes. Because NFS/NAS-backed storage is not supported for Elasticsearch, a local-volume storage class is recommended for performance and reliability.

Deploy Local‑Volume StorageClass

Create a project for local storage:

oc new-project local-storage

Install the Local Storage Operator via the OpenShift console (OperatorHub → Local Storage Operator → Install → select the local‑storage namespace → Subscribe).
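If you prefer the CLI over the console, the same subscription can be created declaratively. The sketch below assumes the OperatorGroup/Subscription names shown and a channel matching your cluster version; adjust both to your environment:

```yaml
# Hypothetical CLI equivalent of the console install (apply with oc create -f).
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: local-storage
spec:
  targetNamespaces:
    - local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: local-storage
spec:
  channel: "4.4"          # assumption: match your OpenShift version
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```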

Verify the operator pod is running:

# oc -n local-storage get pods
NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-7cd4799b4b-6bzg4   1/1     Running   0          12h

Add a disk (e.g., /dev/sdb, 50 GB) to each of the three Elasticsearch nodes and create localvolume.yaml:

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker02.ocp44.cluster1.com
          - worker03.ocp44.cluster1.com
          - worker04.ocp44.cluster1.com
  storageClassDevices:
    - storageClassName: "local-sc"
      volumeMode: Filesystem
      fsType: xfs
      devicePaths:
        - /dev/sdb

Create the LocalVolume resource:

oc create -f localvolume.yaml

Check the created pods, PersistentVolumes and StorageClass:

# oc get all -n local-storage
... (output omitted for brevity) ...
# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-2337578c   50Gi       RWO            Delete           Available           local-sc                4m42s
...
# oc get sc
NAME       PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-sc   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  11h
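Because the storage class uses WaitForFirstConsumer, PVs stay Available (and a claim stays Pending) until a pod actually consumes them. A throwaway claim such as the following (name and size are illustrative) is a quick way to confirm the binding mode behaves as expected:

```yaml
# Hypothetical sanity-check claim: it remains Pending until a pod
# mounting it is scheduled, which is correct for WaitForFirstConsumer.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-sc-test
  namespace: local-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-sc
  resources:
    requests:
      storage: 10Gi
```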

Deploy Elasticsearch Operator

In the OpenShift console, navigate to Operators → OperatorHub → Elasticsearch Operator → Install, choose the "All namespaces" installation mode, set the installed namespace to openshift-operators-redhat, enable the recommended cluster monitoring, select an update channel, and subscribe. Verify that the operator status is Succeeded on the Installed Operators page.
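The console steps above can also be expressed as objects and applied with oc create -f. The channel value is an assumption; pick the one matching your cluster:

```yaml
# Hypothetical CLI install of the Elasticsearch Operator in all-namespaces mode.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}                  # empty spec = watch all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: "4.4"          # assumption: match your OpenShift version
  name: elasticsearch-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```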

Deploy Cluster Logging Operator

Similarly, install the Cluster Logging Operator (OperatorHub → Cluster Logging Operator) into the openshift-logging namespace, enable monitoring, choose an update channel, and confirm the operator is running before checking pod status under Workloads.
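A declarative equivalent might look like this (it assumes the openshift-logging namespace already exists and that the channel matches your cluster version):

```yaml
# Hypothetical CLI install of the Cluster Logging Operator,
# scoped to the openshift-logging namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "4.4"          # assumption: match your OpenShift version
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```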

Install EFK (ClusterLogging Custom Resource)

Create a ClusterLogging CR with the following YAML, which defines a three‑node Elasticsearch cluster using the local-sc storage class, resource limits, a single Kibana replica, and a daily Curator cleanup schedule:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: local-sc
        size: 48G
      resources:
        limits:
          cpu: "4"
          memory: "16Gi"
        requests:
          cpu: "4"
          memory: "16Gi"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

Key points: set the node count, storage class name, storage size, and generous memory limits; SingleRedundancy keeps one replica of each index shard; the schedule above runs the Curator cleanup job daily at 03:30, while the retention period (e.g., 30 days) is configured separately and is adjustable.
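Retention itself lives in the Curator ConfigMap in openshift-logging rather than in the cron schedule. A sketch of such a config, with index names and day counts as examples only:

```yaml
# Hypothetical retention settings for the "curator" ConfigMap;
# adjust index patterns and day counts to your needs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: curator
  namespace: openshift-logging
data:
  config.yaml: |
    .defaults:
      delete:
        days: 30          # default: delete indices older than 30 days
    .operations:
      delete:
        days: 14          # example: shorter retention for operations logs
```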

Additional Note 1 – EFK Fixed Nodes

EFK can be pinned to specific nodes via nodeSelector or taints and tolerations. Using the local-volume operator already binds Elasticsearch to the chosen nodes, so extra node selectors are usually unnecessary. Note that tolerations only take effect on nodes that carry a matching taint (applied with, for example, oc adm taint nodes <node> logging=true:NoExecute). Be cautious when adding taints, as they may evict essential infra pods (e.g., DNS, machine-config-daemon) that lack matching tolerations.

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 8Gi
        requests:
          cpu: 100m
          memory: 1Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 1Gi
      replicas: 1
  curation:
    type: "curator"
    curator:
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 100Mi
      schedule: "*/5 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        tolerations:
        - key: "logging"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 6000
        resources:
          limits:
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi

Additional Note 2 – Node Selector for Projects

After reserving nodes for Elasticsearch, label the regular application nodes (for example with an app role) and inject a nodeSelector into the default project template, so that new projects automatically schedule onto non-ES nodes without manual selector configuration in each deployment.

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
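The project-template injection from Note 2 could be wired up as follows. The assumed flow: export the bootstrap template with oc adm create-bootstrap-project-template -o yaml, add the node-selector annotation to the Project object inside it, create the template in openshift-config, and reference it from the cluster Project config. The node-role.kubernetes.io/app label is an assumption; label your application nodes accordingly first.

```yaml
# Hypothetical cluster Project config referencing a customized
# bootstrap template. Inside that template, the Project object would
# carry the annotation:
#   openshift.io/node-selector: "node-role.kubernetes.io/app="
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request   # assumption: template created in openshift-config
```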

Reference Links

OpenShift Logging Tolerations

Moving Cluster Logging Nodes

Configuring Project Creation

Default NetworkPolicy for New Projects

Red Hat Solution 4946861

Tags: Elasticsearch, Kubernetes, Logging, Operators, OpenShift, EFK, Local Volume
Written by

DevOps Cloud Academy

Exploring industry DevOps practices and technical expertise.
