# RabbitMQ Scraping

This is an example of how to set up RabbitMQ so that Prometheus can scrape data from it. It uses a third-party RabbitMQ exporter.

Since the RabbitMQ exporter collects its data from the RabbitMQ management API, which it expects to find on localhost by default, the easiest approach is to run `kbudde/rabbitmq-exporter` as a sidecar container in the same pod as RabbitMQ, so that the two share the same network namespace.
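
Below is a minimal sketch of such a pod, assuming the exporter runs as a sidecar container next to the broker; the image tags, port names, and port numbers are illustrative (see also the rc.yml manifest in this directory):

```yaml
# A minimal pod running the RabbitMQ broker and the exporter side by side.
# The label app: rabbitmq is what the scrape configuration below matches on;
# image tags, port names, and port numbers are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  containers:
  - name: rabbitmq
    image: rabbitmq:3-management
    ports:
    - name: service
      containerPort: 5672
    - name: management
      containerPort: 15672
  # Exporter sidecar: because it shares the pod's network namespace, it can
  # reach the management API on localhost.
  - name: rabbitmq-exporter
    image: kbudde/rabbitmq-exporter
    ports:
    - name: exporter
      containerPort: 9419  # adjust to the port the exporter is configured to listen on
```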

With this pod running, the exporter will be collecting data from RabbitMQ, but Prometheus has not yet discovered the exporter and is therefore not scraping it.

For more details on how to use Kubernetes service discovery, take a look at the documentation and at the available examples.

After you have Kubernetes service discovery up and running, add the following `scrape_configs` entry to `prometheus.yml`.

```yaml
scrape_configs:
- job_name: 'RabbitMQ'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_pod_label_app
    regex: rabbitmq
    action: keep
```
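
Note that with `role: pod`, Prometheus creates one target per declared container port, so RabbitMQ's own service and management ports would be scraped as well. A possible refinement, sketched below, is an extra `keep` rule on the container port name; the port name `exporter` is an assumption and has to match whatever name the pod spec declares for the exporter's port:

```yaml
scrape_configs:
- job_name: 'RabbitMQ'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods labelled app=rabbitmq.
  - source_labels:
    - __meta_kubernetes_pod_label_app
    regex: rabbitmq
    action: keep
  # Additionally keep only the target for the container port named "exporter",
  # so the broker's own ports (e.g. 5672 and 15672) are not scraped. The port
  # name is an assumption and must match the name declared in the pod spec.
  - source_labels:
    - __meta_kubernetes_pod_container_port_name
    regex: exporter
    action: keep
```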

You should then be able to see your RabbitMQ exporter being scraped on the Prometheus status page. Since the scraped address is the pod endpoint, it is important that the node where Prometheus is running has access to the Kubernetes overlay network (flannel, Weave, AWS, or any of the other options that Kubernetes offers).