Monitoring with Prometheus and Grafana
Prometheus is an open-source monitoring and alerting system. When used along with Grafana, you can create a dynamic dashboard for monitoring ingress into your Kubernetes cluster.
Deployment
This guide will focus on deploying Prometheus and Grafana alongside Emissary in Kubernetes using the Prometheus Operator.
Note: Both Prometheus and Grafana can be deployed as standalone applications outside of Kubernetes. This process is well documented on the websites and in the docs of their respective projects.
Emissary
Emissary makes it easy to output Envoy-generated statistics to Prometheus. For the remainder of this guide, it is assumed that you have installed and configured Emissary into your Kubernetes cluster, and that it is possible for you to modify the global configuration of the Emissary deployment.
Starting with Emissary 0.71.0, Prometheus can scrape stats/metrics directly from Envoy's /metrics endpoint, removing the need to configure Emissary to output stats to StatsD.
The /metrics endpoint can be accessed internally via the Emissary admin port (default 8877):

http(s)://ambassador:8877/metrics

or externally by creating a Mapping similar to below:
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: metrics
spec:
  prefix: /metrics
  rewrite: ""
  service: localhost:8877
Note: Since /metrics is an endpoint on Emissary itself, the service field can just reference the admin port on localhost.
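To sanity-check the endpoint before wiring up Prometheus, you can port-forward the admin port and fetch the metrics locally. This is only a quick sketch; it assumes your Emissary deployment is named ambassador, lives in the ambassador namespace, and uses the default admin port 8877:

kubectl port-forward -n ambassador deployment/ambassador 8877 &
curl -s http://localhost:8877/metrics | head

You should see Prometheus-formatted counters and gauges, most of them prefixed with envoy_.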
Using the cluster_tag setting
The metrics that Prometheus scrapes from Emissary are keyed using the name of the Envoy cluster that is handling traffic for a given Mapping. The name of a given cluster is generated by Emissary and, as such, is not necessarily terribly readable.
You can set the cluster_tag attribute within a Mapping to specify a prefix for the generated cluster name, to help manage metrics.
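For example, here is a minimal sketch of a Mapping that uses cluster_tag (the quote service, prefix, and tag value are hypothetical):

---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: quote
spec:
  prefix: /quote/
  service: quote
  cluster_tag: quote-metrics

With this in place, the Envoy cluster handling traffic for this Mapping gets a name that starts with quote-metrics, which is much easier to pick out in Prometheus queries.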
Prometheus Operator with standard YAML
In this section, we will deploy the Prometheus Operator using the standard YAML files. Alternatively, you can install it with Helm if you prefer.
- Deploy the Prometheus Operator

  To deploy the Prometheus Operator, you can clone the repository and follow the instructions in the README, or simply create the resources published in the YAML with kubectl:

  kubectl create -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
Note: The YAML assumes Kubernetes 1.16 and above. If running a lower version, you will need to run the following command to install the CRDs with the right API version:
curl -sL https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml \
  | sed 's|apiVersion: apiextensions.k8s.io/v1|apiVersion: apiextensions.k8s.io/v1beta1|' \
  | sed 's|jsonPath|JSONPath|' \
  | kubectl apply -f -
- Deploy Prometheus by creating a Prometheus CRD

  First, create RBAC resources for your Prometheus instance:

  kubectl apply -f https://app.getambassador.io/yaml/ambassador-docs/$version$/monitoring/prometheus-rbac.yaml

  Then, copy the YAML below, save it in a file called prometheus.yaml, and apply it with kubectl:

---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: ClusterIP
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: prometheus
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  ruleSelector:
    matchLabels:
      app: prometheus-operator
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: ambassador
  resources:
    requests:
      memory: 400Mi

kubectl apply -f prometheus.yaml
- Create a ServiceMonitor

  Finally, we need to tell Prometheus where to scrape metrics from. The Prometheus Operator easily manages this using a ServiceMonitor CRD. To tell Prometheus to scrape metrics from Emissary's /metrics endpoint, copy the following YAML to a file called ambassador-monitor.yaml, and apply it with kubectl.

  If you are running an Emissary version higher than 0.71.0 and want to scrape metrics directly from the /metrics endpoint of Emissary running in the ambassador namespace:

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  labels:
    app: ambassador
spec:
  namespaceSelector:
    matchNames:
    - ambassador
  selector:
    matchLabels:
      service: ambassador-admin
  endpoints:
  - port: ambassador-admin
Prometheus is now configured to gather metrics from Emissary.
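As a rough check that scraping is wired up, you can port-forward the Prometheus service created above and look at its target list. This is a sketch that assumes you kept the service name prometheus and the default port 9090:

kubectl port-forward service/prometheus 9090

Then open http://localhost:9090/targets in a browser; the ambassador-monitor endpoints should appear there and move to the UP state once the first scrape succeeds.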
Prometheus Operator with Helm
In this section, we will deploy the Prometheus Operator using Helm. Alternatively, you can install it with the standard YAML files if you prefer.
The default Helm Chart will install Prometheus and configure it to monitor your Kubernetes cluster.
This section will focus on setting up Prometheus to scrape stats from Emissary. Configuration of the Helm Chart and analysis of stats from other cluster components is outside of the scope of this documentation.
- Install the Prometheus Operator from the Helm chart

  helm install -n prometheus stable/prometheus-operator
- Create a ServiceMonitor

  The Prometheus Operator Helm chart creates a Prometheus instance that is looking for ServiceMonitors with the label release: prometheus.

  If you are running an Emissary version higher than 0.71.0 and want to scrape metrics directly from the /metrics endpoint of Emissary running in the default namespace:

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  namespace: monitoring
  labels:
    release: prometheus
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      service: ambassador-admin
  endpoints:
  - port: ambassador-admin

  If you are scraping metrics from a statsd-sink deployment:

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: statsd-monitor
  namespace: monitoring
  labels:
    release: prometheus
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      service: statsd-sink
  endpoints:
  - port: prometheus-metrics
Prometheus is now configured to gather metrics from Emissary.
Prometheus Operator CRDs
The Prometheus Operator creates a series of Kubernetes Custom Resource Definitions (CRDs) for managing Prometheus in Kubernetes.
| Custom Resource Definition | Description |
| --- | --- |
| AlertManager | An AlertManager handles alerts sent by the Prometheus server. |
| PrometheusRule | Registers alerting and recording rules with Prometheus. |
| Prometheus | Creates a Prometheus instance. |
| ServiceMonitor | Tells Prometheus where to scrape metrics from. |
CoreOS has published a full API reference to these different CRDs.
Grafana
Grafana is an open-source graphing tool for plotting data points. Grafana allows you to create dynamic dashboards for monitoring your ingress traffic stats collected from Prometheus.
We have published a sample dashboard you can use for monitoring your ingress traffic. Since the stats from the /metrics and /stats endpoints are different, you will see a section in the dashboard for each use case.
Note: If you deployed the Prometheus Operator via the Helm chart, a Grafana dashboard is created by default. You can use this dashboard, or set grafana.enabled: false and follow the instructions below.
To deploy Grafana behind Emissary: replace {{AMBASSADOR_IP}} with the IP address or DNS name of your Emissary service, copy the YAML below, and apply it with kubectl:
Note: If you forgot how to get the value of your AMBASSADOR_IP or have not set up DNS, you can get the IP by running the kubectl get services -n ambassador command and selecting the EXTERNAL-IP of your Emissary LoadBalancer service.
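For example, here is a hedged one-liner that prints the load balancer IP, assuming the service is named ambassador in the ambassador namespace and your cloud provider reports an IP rather than a hostname:

kubectl get service ambassador -n ambassador -o "jsonpath={.status.loadBalancer.ingress[0].ip}"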
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: grafana
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
      component: core
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
        component: core
      annotations:
        sidecar.istio.io/inject: 'false'
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: grafana
        image: 'grafana/grafana:6.4.3'
        ports:
        - containerPort: 3000
          protocol: TCP
        env:
        - name: GF_SERVER_ROOT_URL
          value: {{ SCHEME }}://{{ AMBASSADOR_IP }}/grafana
        - name: GRAFANA_PORT
          value: '3000'
        - name: GF_AUTH_BASIC_ENABLED
          value: 'false'
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: 'true'
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_PATHS_DATA
          value: /data/grafana
        resources:
          requests:
            cpu: 10m
        volumeMounts:
        - name: data
          mountPath: /data/grafana
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: grafana
    component: core
Now, create a Mapping to expose the Grafana service behind Emissary:

Note: Don't forget to replace {{GRAFANA_NAMESPACE}} with the namespace you deployed Grafana to. In our example we used the default namespace, so for this example you would change it to grafana.default or just grafana.
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: grafana
spec:
  prefix: /grafana/
  service: grafana.{{GRAFANA_NAMESPACE}}
Now, access Grafana by going to {AMBASSADOR_IP}/grafana/ and logging in with username admin and password admin.
Before you can import the Emissary dashboard, you need to add a data source.
From the Grafana home page, select Create your first data source. Now, select 'Prometheus'. In the URL section, type in http://prometheus.default:9090. We deployed Prometheus to the default namespace in our example, but if you deployed it to a different namespace, make sure to replace default with your namespace. Press Save & Test to confirm that the data source works.
Import the provided dashboard by clicking the plus sign in the left sidebar, clicking Import in the top left, and entering the dashboard ID (13758). From there, select the Prometheus data source we created from the Prometheus drop-down menu, and select Import to finish adding the dashboard.
In the dashboard we just added, you should now be able to view graphs in the Emissary Metrics Endpoint tab.
Viewing stats/metrics
Above, you have created an environment where Emissary is handling ingress traffic, Prometheus is scraping and collecting statistics from Envoy, and Grafana is displaying these statistics in a dashboard.
You can view a sample of these statistics by going to the Grafana dashboard at {AMBASSADOR_IP}/grafana/ and logging in with the credentials above.
The example dashboard you installed above displays 'top line' statistics about the API response codes, number of connections, connection length, and number of registered services.
To view the full set of stats available to Prometheus, you can access the Prometheus UI by running:

kubectl port-forward service/prometheus 9090

and going to http://localhost:9090/ from a web browser.
In the UI, click the dropdown to see all of the stats Prometheus is able to scrape from Emissary.

The Prometheus data model is, at its core, time-series based, which makes it easy to represent rates, averages, peaks, minimums, and histograms. Review the Prometheus documentation for a full reference on how to work with this data model.
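For example, a query you might try in the Prometheus UI or as a Grafana panel is the per-cluster request rate. This is only a sketch and assumes your Envoy version exposes the envoy_cluster_upstream_rq_total counter with an envoy_cluster_name label:

# requests per second for each upstream cluster, averaged over the last minute
sum by (envoy_cluster_name) (rate(envoy_cluster_upstream_rq_total[1m]))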
Additional install options
StatsD Exporter: Output statistics from Emissary
If you are running a pre-0.71.0 version of Emissary, you will need to configure Envoy to output stats to a separate collector before they can be scraped by Prometheus. You will use the Prometheus StatsD Exporter to do this.
- Deploy the StatsD Exporter in the default namespace:

  kubectl apply -f https://app.getambassador.io/yaml/ambassador-docs/$version$/monitoring/statsd-sink.yaml
- Configure Emissary to output statistics to statsd

  In the Emissary deployment, add the STATSD_ENABLED and STATSD_HOST environment variables to tell Emissary where to output stats.

  ...
  env:
  - name: AMBASSADOR_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: STATSD_ENABLED
    value: "true"
  - name: STATSD_HOST
    value: "statsd-sink.default.svc.cluster.local"
  ...
Emissary is now configured to output statistics to the Prometheus StatsD exporter.
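If you want to confirm that stats are arriving, you can check the exporter's own metrics endpoint. This is only a sketch; it assumes the statsd-sink Service exposes the exporter's default metrics port 9102 (the port named prometheus-metrics in the ServiceMonitor below):

kubectl port-forward service/statsd-sink 9102 &
curl -s http://localhost:9102/metrics | head

Once Emissary has sent some traffic statistics, you should see translated Envoy metrics in the output.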
ServiceMonitor
If you are scraping metrics from a statsd-sink deployment, you will configure the ServiceMonitor to scrape from that deployment.
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: statsd-monitor
  labels:
    app: ambassador
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      service: statsd-sink
  endpoints:
  - port: prometheus-metrics
Dashboard
Now that you have metrics scraping from StatsD, you can use this version of the dashboard (ID: 4698), which is configured to work with metrics scraped from StatsD or the metrics endpoint. You can configure it the same way as the previous dashboard. The Prometheus data source is also required, so if you have not added it yet, make sure to configure it before adding the dashboard.