
Step-by-Step Guide to Configure JMX Exporter Monitoring for a Java Service on Kubernetes with Prometheus

This tutorial walks through creating a config.yaml, building a Docker image that bundles the JMX exporter, deploying the service on Kubernetes, defining a ServiceMonitor for Prometheus, and visualizing JVM metrics in Grafana, providing a complete end‑to‑end monitoring solution.

Practical DevOps Architecture

This article explains how to expose JVM metrics of a Java service using the JMX exporter and collect them with Prometheus in a Kubernetes (KubeSphere) environment.

Step 1: Create config.yaml

The configuration file sets the JMX connection details and metric collection rules. A minimal example that collects every MBean:

hostPort: localhost:38080 # JMX endpoint; used in standalone mode (in agent mode, the port comes from the -javaagent argument)
username:
password:
rules:
- pattern: ".*"

Step 2: Build a Docker image

Prepare a Dockerfile that copies the compiled JAR, config.yaml, and jmx_prometheus_agent.jar into the image, sets the working directory, and defines the entrypoint with the necessary JVM options plus a -javaagent flag pointing at the exporter.

FROM   harbor.local.com/public/tincere:jdk8-msyh
VOLUME ["/opt"]
WORKDIR /opt
ADD ./target/*.jar    /opt/myservice.jar
ADD config.yaml      /opt/config.yaml
ADD jmx_prometheus_agent.jar /opt/jmx_prometheus_agent.jar
ENTRYPOINT ["java", \
  "-Xms512M","-Xmx512M","-Xmn384M","-Xss1M", \
  "-XX:MetaspaceSize=256M","-XX:MaxMetaspaceSize=256M", \
  "-XX:+UseParNewGC","-XX:+UseConcMarkSweepGC", \
  "-XX:CMSInitiatingOccupancyFraction=92","-XX:+UseCMSCompactAtFullCollection", \
  "-XX:CMSFullGCsBeforeCompaction=0","-XX:+CMSParallelInitialMarkEnabled", \
  "-XX:+CMSScavengeBeforeRemark", \
  "-XX:+HeapDumpOnOutOfMemoryError","-XX:HeapDumpPath=/opt", \
  "-XX:+PrintGCDetails","-XX:+PrintGCDateStamps","-Xloggc:gc.log", \
  "-Dspring.profiles.active=dev", \
  "-javaagent:/opt/jmx_prometheus_agent.jar=38080:/opt/config.yaml", \
  "-jar","/opt/myservice.jar"]

Step 3: Verify metric exposure

After the service starts, you can exec into the pod and use curl to retrieve the JVM metrics endpoint.
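For example (the pod name and namespace below are placeholders for your own):

```shell
# Exec into the pod and query the exporter on the agent port
kubectl -n my-namespace exec -it jvm-test-5d8f7c9b4-abcde -- \
  curl -s http://localhost:38080/metrics | head -n 20
```

Output lines such as jvm_memory_bytes_used and jvm_threads_current confirm the agent is exposing JVM metrics.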

Step 4: Create Deployment and Service

Define a Kubernetes Deployment for the Java service and expose port 38080 as jvm-port in the corresponding Service.
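A minimal sketch of the two resources (the image name and namespace are assumptions; what matters is that the Service carries the label the ServiceMonitor in Step 5 selects on, and names its port jvm-port):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvm-test
  labels:
    app: jvm-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jvm-test
  template:
    metadata:
      labels:
        app: jvm-test
    spec:
      containers:
      - name: jvm-test
        image: harbor.local.com/public/myservice:latest  # assumed image name
        ports:
        - name: jvm-port
          containerPort: 38080
---
apiVersion: v1
kind: Service
metadata:
  name: jvm-test
  labels:
    app: jvm-test        # matched by the ServiceMonitor's selector
spec:
  selector:
    app: jvm-test
  ports:
  - name: jvm-port       # referenced by the ServiceMonitor endpoint
    port: 38080
    targetPort: 38080
```

Note that a ServiceMonitor selects Services (not Pods) by label, so the app: jvm-test label must be on the Service itself.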

Step 5: Define a ServiceMonitor

Create a ServiceMonitor resource in the same namespace as Prometheus so that Prometheus can scrape the metrics:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: jvm-test
  name: jvm-test
  namespace: kubesphere-monitoring-system
spec:
  endpoints:
  - path: /metrics
    port: jvm-port
    scheme: http
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: jvm-test

Step 6: Adjust Prometheus selector

Modify the Prometheus resource in the kubesphere-monitoring-system namespace to include the label selector for the newly created ServiceMonitor.
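A sketch of the relevant fragment (the exact selector in your cluster may differ; an empty selector {} matches every ServiceMonitor, while a matchLabels selector must include the label set on the ServiceMonitor in Step 5):

```yaml
# Fragment of the Prometheus custom resource (kind: Prometheus)
spec:
  serviceMonitorSelector:
    matchLabels:
      k8s-app: jvm-test
  serviceMonitorNamespaceSelector: {}   # allow ServiceMonitors from any namespace
```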

Step 7: Verify Prometheus discovery

Prometheus will automatically discover the ServiceMonitor and reload its configuration. You can port‑forward to the Prometheus UI to confirm:

kubectl -n kubesphere-monitoring-system port-forward --address=0.0.0.0 prometheus-k8s-0 39090:9090

When the Targets page shows the jvm-test endpoint as UP and JVM metric queries return results, the monitoring setup is working.

Step 8: Visualize in Grafana

Import a suitable JVM dashboard into Grafana, configure the data source to point to Prometheus, and you will see real‑time JVM performance graphs.
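If you manage Grafana declaratively, the Prometheus data source can also be provisioned with a small YAML file instead of through the UI (the file path and in-cluster URL below are assumptions for a typical KubeSphere setup):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus-k8s.kubesphere-monitoring-system.svc:9090
  isDefault: true
```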

Tags: monitoring, Docker, Kubernetes, Prometheus, Grafana, JMX Exporter
Written by Practical DevOps Architecture

Hands‑on DevOps operations using Docker, K8s, Jenkins, and Ansible—empowering ops professionals to grow together through sharing, discussion, knowledge consolidation, and continuous improvement.
