It's possible (and quite common on Kubernetes) for service discovery to
return thousands of targets and then drop most of them in relabel rules.
The main place this data is used is the web UI, where you don't want
to display thousands of lines.
The new limit is `keep_dropped_targets`, which defaults to 0 (no limit)
for backwards compatibility.
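A minimal sketch of setting the new limit in a scrape config; the job name, SD role, and the value 100 are illustrative:

  scrape_configs:
    - job_name: kubernetes-pods          # illustrative job
      keep_dropped_targets: 100          # keep at most 100 relabel-dropped targets for the UI
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: "true"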
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* k8s: Support discovery of ingresses
* Move additional labels below allocation
This makes it more obvious why the additional elements are allocated.
Also fix the allocation for the node role, where we only set a single label.
* k8s: Remove port from ingress discovery
* k8s: Add comment to ingress discovery example
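A minimal sketch of a scrape config using the new ingress role; the job name and the attached port are illustrative, and (per the change above) discovered ingress targets carry no port of their own:

  scrape_configs:
    - job_name: kubernetes-ingresses
      kubernetes_sd_configs:
        - role: ingress
      relabel_configs:
        # The ingress role exposes the ingress host as the target address with
        # no port attached, so attach one here if scraping the hosts directly.
        - source_labels: [__address__]
          regex: (.+)
          replacement: $1:443
          target_label: __address__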
Kubernetes 1.7+ no longer exposes cAdvisor metrics on the Kubelet
metrics endpoint. Update the example configuration to scrape cAdvisor
in addition to Kubelet. The provided configuration works for 1.7.3+
and commented notes are given for 1.7.2 and earlier versions.
Also remove the comment about the node (Kubelet) CA not matching the
master CA. Since the example no longer connects directly to the nodes, it
doesn't matter what CA they're using.
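The updated example ends up with a cAdvisor job along these lines, scraping through the API server proxy; the credential paths are the standard in-cluster service-account locations:

  scrape_configs:
    - job_name: kubernetes-cadvisor
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        # Scrape via the API server rather than connecting to the node itself.
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          # Works for Kubernetes 1.7.3+; the example's commented notes cover
          # 1.7.2 and earlier.
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor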
References:
- https://github.com/kubernetes/kubernetes/issues/48483
- https://github.com/kubernetes/kubernetes/pull/49079
* scrape Kubelet metrics via the API server node proxy
* add manifests to set up the ServiceAccount, ClusterRole and ClusterRoleBinding needed for RBAC
* remove .cluster.local and add a newline to address review comments
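A condensed sketch of the RBAC manifests referred to above; the resource names and the default namespace are illustrative:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: prometheus
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: prometheus
  rules:
    - apiGroups: [""]
      # nodes/proxy is what allows scraping the Kubelet through the API server.
      resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
      verbs: ["get", "list", "watch"]
    - nonResourceURLs: ["/metrics"]
      verbs: ["get"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: prometheus
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: prometheus
  subjects:
    - kind: ServiceAccount
      name: prometheus
      namespace: default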
The Kubernetes pod SD creates a target for each declared port, as documented:
https://prometheus.io/docs/operating/configuration/#pod
> The pod role discovers all pods and exposes their containers as targets. For
> each declared port of a container, a single target is generated. If a
> container has no specified ports, a port-free target per container is created
> for manually adding a port via relabeling.
This results in the default port being the declared port, or no port if none are
declared.
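For example, a pod spec like the following (names and image are illustrative) yields one target per declared port, i.e. <podIP>:8080 and <podIP>:9102; a container with no ports would instead yield a single port-free target whose address is just the pod IP, ready for a port to be added via relabeling:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-app
  spec:
    containers:
      - name: app
        image: example/app:1.0   # illustrative image
        ports:
          - containerPort: 8080  # -> target <podIP>:8080
          - containerPort: 9102  # -> target <podIP>:9102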
This change corrects a bug introduced by PR
https://github.com/prometheus/prometheus/pull/2427
The regex uses three groups: the hostname, an optional port, and the
preferred port from a Kubernetes annotation.
Previously, the second group was meant to be ignored when no ":port" was
present in the input. However, making the port group optional with the
"?" had the unintended side effect of allowing the hostname regex "(.+)"
to match greedily, which included the ":port" patterns up to the ";"
separating the hostname from the Kubernetes port annotation.
This change updates the regex for the hostname to match any non-":"
characters. This forces the regex to stop if a ":port" is present and
allows the second group to match the optional port.
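As a sketch, the corrected rule (source labels and replacement styled after the example config) behaves like this on a joined input such as "10.32.0.7:443;9102" (values illustrative):

  relabel_configs:
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      # The old hostname group (.+) captured "10.32.0.7:443", giving "10.32.0.7:443:9102";
      # the non-":" group below captures only "10.32.0.7", giving "10.32.0.7:9102".
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__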
This change updates port relabeling for pod and service discovery so the
relabeling regex matches addresses with or without declared ports. It
also uses a consistent style in the replacement pattern for the two
expressions.
Previously, for services and pods that did not have declared ports, the
relabel config regex would fail to match:
__meta_kubernetes_service_annotation_prometheus_io_port
  regex: (.+)(?::\d+);(\d+)
__meta_kubernetes_pod_annotation_prometheus_io_port
  regex: (.+):(?:\d+);(\d+)
Both regexes expected a <host>:<port> pattern.
The new regex matches addresses with or without declared ports by making
the :<port> pattern optional.
__meta_kubernetes_service_annotation_prometheus_io_port
__meta_kubernetes_pod_annotation_prometheus_io_port
  regex: (.+)(?::\d+)?;(\d+)
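In the example configuration the service rule then reads roughly as follows (the pod rule is analogous, using the pod annotation label):

  relabel_configs:
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      # Note: the (.+) hostname group here was subsequently tightened to [^:]+
      # to avoid the greedy-match bug described earlier in this log.
      regex: (.+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__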
The CA and bearer token are configuration of `kubernetes_sd_configs`, not
the `scrape_config`. Also update the misleading top-level comment and
remove the unnecessary global config.
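A sketch of the corrected placement; the field names (`masters`, `ca_file`, `bearer_token_file`) follow the Kubernetes SD of that era and are assumptions here, shown only to illustrate where the credentials belong:

  scrape_configs:
    - job_name: kubernetes
      kubernetes_sd_configs:
        - masters:
            - https://kubernetes.default.svc
          # CA and bearer token live on the SD entry, not on the scrape_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token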
* Support multiple masters with retries against each master as required.
* Scrape masters' metrics.
* Add role meta label for node/service/master to make it easier for relabeling.
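With the new role meta label, relabeling can keep just one role per job, roughly like this (the exact label name, `__meta_kubernetes_role`, is an assumption here):

  relabel_configs:
    - source_labels: [__meta_kubernetes_role]
      action: keep
      regex: node   # keep only node targets; "service" or "master" work the same way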