Configuring Prometheus Operator ServiceMonitor on OpenShift after Migrating from Mesos+Marathon
This article describes how to configure Prometheus Operator ServiceMonitor resources on OpenShift after migrating an application from Mesos+Marathon, covering service creation, ServiceMonitor definition, and verification steps, with full YAML examples.
Overview
We recently migrated a customer from a Mesos+Marathon platform to OpenShift. The former had limited vendor support, high operational cost, and scarce documentation, while OpenShift offers enterprise-grade support, an active community, and better compatibility with modern cloud-native tooling.
Requirements
The customer originally ran Prometheus on Mesos+Marathon, with the application exposing a metrics endpoint. After the move to OpenShift, the built-in Prometheus Operator is used instead, which requires a different configuration to scrape the existing application.
Prometheus (Operator) Overview
The Prometheus Operator provides four custom resources: Prometheus (server instance), ServiceMonitor (monitoring configuration), PrometheusRule (alerting rules), and Alertmanager (alert management). These resources enable declarative creation and management of monitoring components.
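How these resources fit together can be illustrated with a minimal Prometheus resource: the server instance selects ServiceMonitor objects by label, so a ServiceMonitor only takes effect if its labels match the server's serviceMonitorSelector. This is a simplified sketch; the names and field values below are illustrative, not taken from the customer environment.

```yaml
# Minimal Prometheus server instance (illustrative values only).
# It scrapes whatever ServiceMonitors carry the label k8s-app: prometheus.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: openshift-monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchLabels:
      k8s-app: prometheus
```

This label-based selection is why the ServiceMonitor created later in this article carries the label k8s-app: prometheus.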
Configuring ServiceMonitor
To add a new monitoring target, first create a Service, then a ServiceMonitor that references it. The Service definition must set spec.ports.name correctly, because the ServiceMonitor selects the scrape port by that name rather than by number.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-08-29T05:14:53Z
  labels:
    app: xxx-product-service
    prometheus: k8s
  name: xxx-product-service
  namespace: xxx-poc
  resourceVersion: "13541805"
  selfLink: /api/v1/namespaces/xxx-poc/services/xxx-product-service
  uid: ef06651a-ca1b-11e9-9a49-005056af6df7
spec:
  clusterIP: 172.30.80.126
  ports:
  - name: xxx-product
    port: 10002
    protocol: TCP
    targetPort: 10002
  selector:
    deploymentconfig: xxx-product-service
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Next, create the ServiceMonitor with the required fields.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus
  name: xxx-product-service
spec:
  endpoints:
  - interval: 10s
    port: xxx-product
    scheme: http
    path: /prometheus
  namespaceSelector:
    matchNames:
    - xxx-poc
  selector:
    matchLabels:
      prometheus: k8s

Viewing ServiceMonitor
Use OpenShift CLI to list CRDs and ServiceMonitors, then retrieve the YAML of the newly created ServiceMonitor.
$ oc get crd
NAME                                    CREATED AT
alertmanagers.monitoring.coreos.com     2019-07-29T07:47:35Z
prometheuses.monitoring.coreos.com      2019-07-29T07:47:35Z
prometheusrules.monitoring.coreos.com   2019-07-29T07:47:35Z
servicemonitors.monitoring.coreos.com   2019-07-29T07:47:35Z

$ oc get servicemonitors.monitoring.coreos.com
NAME                      AGE
... (list omitted) ...
$ oc get servicemonitors xxx-product-service -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  creationTimestamp: 2019-08-29T06:52:52Z
  generation: 1
  labels:
    k8s-app: prometheus
  name: xxx-product-service
  namespace: openshift-monitoring
  resourceVersion: "13566487"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/servicemonitors/xxx-product-service
  uid: 9f2c4bcf-ca29-11e9-9a49-005056af6df7
spec:
  endpoints:
  - interval: 10s
    path: /prometheus
    port: xxx-product
    scheme: http
  namespaceSelector:
    matchNames:
    - xxx-poc
  selector:
    matchLabels:
      prometheus: k8s

Viewing Prometheus New Configuration
Once the ServiceMonitor is created, the Prometheus Operator regenerates the Prometheus scrape configuration to include a corresponding scrape job; consult the Prometheus Operator documentation for details of the generated sections.
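For orientation, the scrape job the Operator generates from the ServiceMonitor above looks roughly like the following. This is a simplified sketch: the exact job name and relabeling rules vary by Operator version.

```yaml
# Simplified sketch of the scrape job generated from the ServiceMonitor.
- job_name: xxx-poc/xxx-product-service/0
  scrape_interval: 10s
  metrics_path: /prometheus
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - xxx-poc
  relabel_configs:
  # Keep only endpoints of services labeled prometheus=k8s
  # (the ServiceMonitor's selector.matchLabels) ...
  - action: keep
    source_labels: [__meta_kubernetes_service_label_prometheus]
    regex: k8s
  # ... and only the port named xxx-product (the endpoint's port field).
  - action: keep
    source_labels: [__meta_kubernetes_endpoint_port_name]
    regex: xxx-product
```

Each field of the ServiceMonitor maps directly into this job: the interval, path, namespaceSelector, and label selector all reappear as scrape settings or relabeling rules.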
Reference Links
https://yunlzheng.gitbook.io/prometheus-book/
DevOps Cloud Academy
Exploring industry DevOps practices and technical expertise.