Allowing a custom endpoint to be set makes it easy to monitor targets on non-AWS providers that expose EC2-compatible APIs.
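A minimal sketch of what such a configuration could look like, assuming the new option is named `endpoint` (the URL and credentials are placeholders):
```yaml
scrape_configs:
  - job_name: "ec2-compatible"
    ec2_sd_configs:
      - region: "us-east-1"
        # Assumed option name for the custom endpoint; the URL below is a
        # placeholder for a non-AWS provider's EC2-compatible API.
        endpoint: "https://ec2.example-provider.local"
        access_key: "ACCESS_KEY"
        secret_key: "SECRET_KEY"
```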
Signed-off-by: Jannick Fahlbusch <git@jf-projects.de>
Especially for Kubernetes SD, this fixes a bug where the rendered
configuration says "api_server: null", which when read back is not
interpreted as an unset API server (so the default is not applied).
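An illustrative sketch (not taken from the PR) of how the rendered config differs:
```yaml
# Before the fix, an unset API server was rendered explicitly:
kubernetes_sd_configs:
  - role: endpoints
    api_server: null   # read back as an explicit value, not as "unset"
---
# After the fix, the unset field is simply omitted and the default applies:
kubernetes_sd_configs:
  - role: endpoints
```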
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* config: set target group source index during unmarshalling
Fixes issue #4214, where the scrape pool was unnecessarily reloaded on a
config reload even though the config hadn't changed. Previously, the discovery
manager modified the static config after loading, which caused the in-memory
config to differ from a freshly reloaded one.
Signed-off-by: Paul Gier <pgier@redhat.com>
* [issue #4214] Test that static targets are not modified by discovery manager
Signed-off-by: Paul Gier <pgier@redhat.com>
Relabelling rules can use this information to attach the name of the controller
that created a pod.
In turn, this can be used to slice metrics by workload at query time, e.g.
"Give me all metrics that have been created by the $name Deployment"
Signed-off-by: Damien Lespiau <damien@weave.works>
* promql: Rewrote tests with testutil for functions_test
Signed-off-by: Elif T. Kuş <elifkus@gmail.com>
* pkg/relabel: Rewrote tests with testutil for relabel_test
Signed-off-by: Elif T. Kuş <elifkus@gmail.com>
* discovery/consul: Rewrote tests with testutil for consul_test
Signed-off-by: Elif T. Kuş <elifkus@gmail.com>
* scrape: Rewrote tests with testutil for manager_test
Signed-off-by: Elif T. Kuş <elifkus@gmail.com>
- Doing the initial listing and syncing to the scrape manager and only then
registering event handlers may lose events that happened during the listing
and syncing (if it lasted a long time). We should register event handlers
at the very beginning and, before processing, just wait until the informers
have synced (the informer sync will list all objects and call the OnUpdate
event handler).
- Use a queue so that we don't block event callbacks, and so that an object is
processed only once if it is added multiple times before being processed.
- Fix a bug in `serviceUpdate` in endpoints.go: we should build endpoints
when `exists && err == nil`. Add `^TestEndpointsDiscoveryWithService`
tests to cover this.
Testing:
- Use the `k8s.io/client-go` testing framework and fake implementations, which are
more robust and reliable for testing.
- `Test\w+DiscoveryBeforeRun` are used to test objects created before the
discoverer runs
- `Test\w+DiscoveryAdd\w+` are used to test adding objects
- `Test\w+DiscoveryDelete\w+` are used to test deleting objects
- `Test\w+DiscoveryUpdate\w+` are used to test updating objects
- `TestEndpointsDiscoveryWithService\w+` are used to test endpoints
events triggered by services
- `cache.DeletedFinalStateUnknown`-related code is removed, because we
don't care about deleted objects in the store; we only need the object's
name to send a special `targetgroup.Group` to the scrape manager
Signed-off-by: Yecheng Fu <cofyc.jackson@gmail.com>
This adds support for basic authentication, which closes #3090
The support for specifying the client timeout was removed, as discussed in https://github.com/prometheus/common/pull/123. Marathon was the only SD mechanism doing this, and the timeout is now configured through `Context`.
DC/OS uses a custom `Authorization` header for authentication. This adds two new configuration properties to reflect this.
Existing configuration files that use the bearer token will no longer work. More work is required to make this backwards compatible.
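A hedged sketch of a resulting Marathon SD configuration; `auth_token` and `auth_token_file` are assumed names for the two new properties, and the server URL and credentials are placeholders:
```yaml
scrape_configs:
  - job_name: "marathon"
    marathon_sd_configs:
      - servers:
          - "https://marathon.example.com:8443"
        # Basic authentication via the shared HTTP client configuration.
        basic_auth:
          username: "prometheus"
          password: "secret"
        # Alternatively, a DC/OS-style token sent in the custom
        # Authorization header (assumed option names):
        # auth_token: "DCOS_ACS_TOKEN"
        # auth_token_file: "/etc/prometheus/dcos_token"
```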
* consul: improve consul service discovery
Related to #3711
- Add the ability to filter by tag and node-meta in an efficient way (`/catalog/services`
allows filtering by node-meta, and returns a `map[string]string` of `service`->`tags`).
Tags and node-meta are also used in `/catalog/service` requests (see the config
sketch after this list).
- Do not require a call to the catalog if services are specified by name. This is important
because on large clusters `/catalog/services` changes all the time.
- Add `allow_stale` configuration option to do stale reads. Non-stale
reads can be costly, even more so when you are doing them against a remote
datacenter with 10k+ targets over WAN (which is common for federation).
- Add `refresh_interval` to minimize the strain on the catalog and on the
service endpoint. This is needed because of this kind of behavior from
Consul: https://github.com/hashicorp/consul/issues/3712 and because the catalog
of a large cluster would basically change *all* the time. There is no need to
discover targets every second if we scrape them every minute.
- Added plenty of unit tests.
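A minimal sketch combining the new options described above (server and values are placeholders):
```yaml
scrape_configs:
  - job_name: "consul-example"
    consul_sd_configs:
      - server: "consul.example.internal:8500"
        # Discover only services carrying this tag (filtered server-side).
        tag: "marathon-user-observability"
        # Discover only services running on nodes with this node metadata.
        node_meta:
          rack: "rack-1"
        # Allow stale reads so queries don't have to hit the Consul leader.
        allow_stale: true
        # No more than one update per refresh_interval.
        refresh_interval: 30s
```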
Benchmarks
----------
```yaml
scrape_configs:
- job_name: prometheus
scrape_interval: 60s
static_configs:
- targets: ["127.0.0.1:9090"]
- job_name: "observability-by-tag"
scrape_interval: "60s"
metrics_path: "/metrics"
consul_sd_configs:
- server: consul.service.par.consul.prod.crto.in:8500
tag: marathon-user-observability # Used in After
refresh_interval: 30s # Used in After+delay
relabel_configs:
- source_labels: [__meta_consul_tags]
regex: ^(.*,)?marathon-user-observability(,.*)?$
action: keep
- job_name: "observability-by-name"
scrape_interval: "60s"
metrics_path: "/metrics"
consul_sd_configs:
- server: consul.service.par.consul.prod.crto.in:8500
services:
- observability-cerebro
- observability-portal-web
- job_name: "fake-fake-fake"
scrape_interval: "15s"
metrics_path: "/metrics"
consul_sd_configs:
- server: consul.service.par.consul.prod.crto.in:8500
services:
- fake-fake-fake
```
Note: tested with ~1200 services, ~5000 nodes.
| Resource | Empty | Before | After | After + delay |
| -------- |:-----:|:------:|:-----:|:-------------:|
|/service-discovery size|5K|85MiB|27k|27k|
|`go_memstats_heap_objects`|100k|1M|120k|110k|
|`go_memstats_heap_alloc_bytes`|24MB|150MB|28MB|27MB|
|`rate(go_memstats_alloc_bytes_total[5m])`|0.2MB/s|28MB/s|2MB/s|0.3MB/s|
|`rate(process_cpu_seconds_total[5m])`|0.1%|15%|2%|0.01%|
|`process_open_fds`|16|*1236*|22|22|
|`rate(prometheus_sd_consul_rpc_duration_seconds_count{call="services"}[5m])`|~0|1|1|*0.03*|
|`rate(prometheus_sd_consul_rpc_duration_seconds_count{call="service"}[5m])`|0.1|*80*|0.5|0.5|
|`prometheus_target_sync_length_seconds{quantile="0.9",scrape_job="observability-by-tag"}`|N/A|200ms|0.2ms|0.2ms|
|Network bandwidth|~10kbps|~2.8Mbps|~1.6Mbps|~10kbps|
Filtering by tag using relabel_configs uses **100kiB and 23kiB/s per service per job** and quite a lot of CPU. It also sends an additional *1Mbps* of traffic to consul.
Being a little bit smarter about this reduces the overhead quite a lot.
Limiting the number of `/catalog/services` queries per second almost removes the overhead of service discovery.
* consul: tweak `refresh_interval` behavior
`refresh_interval` now does what is advertised in the documentation:
there won't be more than one update per `refresh_interval`. It now
defaults to 30s (which was also the current waitTime in the consul query).
This also makes sure we don't wait another 30s if we already waited 29s
in the blocking call, by subtracting the number of elapsed seconds.
Hopefully this will do what people expect it to do and will be safer
for existing consul infrastructures.
Based on https://groups.google.com/d/topic/prometheus-users/02kezHbuea4/discussion
This does not attempt to handle a situation where the server does not understand
EDNS0. However, that is an unlikely case, the behaviour of such ancient
systems is hard to predict in advance, and if it does come up it will need
to be handled on a case-by-case basis.
There is currently no way to differentiate Windows instances from Linux
ones. This is needed when you have a mix of node_exporters /
wmi_exporters for OS-level metrics and you want to have them in separate
scrape jobs.
This change allows you to do just that. Example:
```
- job_name: 'node'
azure_sd_configs:
- <azure_sd_config>
relabel_configs:
- source_labels: [__meta_azure_machine_os_type]
regex: Linux
action: keep
```
The way the vendored Azure SDK exposes the OsType is a bit
awkward - as far as I can tell, this information can only be obtained from
the startup disk. Newer versions of the SDK appear to improve this a
bit (by having OS information in the InstanceView), but the current way
still works.
* Fix Kubernetes endpoints SD for empty subsets
When an endpoints object has no associated pods (replica scaled to zero
for instance), the endpoints SD should return a target group with no
targets so that the SD manager propagates this information to the scrape
manager.
Fixes #3659
* Don't send nil target groups from the Kubernetes SD
This is to be consistent with the endpoints SD part.
* refactor: move targetGroup struct and CheckOverflow() to their own package
* refactor: move auth and security related structs to a utility package, fix import error in utility package
* refactor: Azure SD, remove SD struct from config
* refactor: DNS SD, remove SD struct from config into dns package
* refactor: ec2 SD, move SD struct from config into the ec2 package
* refactor: file SD, move SD struct from config to file discovery package
* refactor: gce, move SD struct from config to gce discovery package
* refactor: move HTTPClientConfig and URL into util/config, fix import error in httputil
* refactor: consul, move SD struct from config into consul discovery package
* refactor: marathon, move SD struct from config into marathon discovery package
* refactor: triton, move SD struct from config to triton discovery package, fix test
* refactor: zookeeper, move SD structs from config to zookeeper discovery package
* refactor: openstack, remove SD struct from config, move into openstack discovery package
* refactor: kubernetes, move SD struct from config into kubernetes discovery package
* refactor: notifier, use targetgroup package instead of config
* refactor: tests for file, marathon, triton SD - use targetgroup package instead of config.TargetGroup
* refactor: retrieval, use targetgroup package instead of config.TargetGroup
* refactor: storage, use config util package
* refactor: discovery manager, use targetgroup package instead of config.TargetGroup
* refactor: use HTTPClient and TLS config from configUtil instead of config
* refactor: tests, use targetgroup package instead of config.TargetGroup
* refactor: fix targetgroup.Group pointers that were removed by mistake
* refactor: openstack, kubernetes: drop prefixes
* refactor: remove import aliases forced due to vscode bug
* refactor: move main SD struct out of config into discovery/config
* refactor: rename configUtil to config_util
* refactor: rename yamlUtil to yaml_config
* refactor: kubernetes, remove prefixes
* refactor: move the TargetGroup package to discovery/
* refactor: fix order of imports
remove some select state that is most likely obsolete and hopefully doesn't break anything :)
merged targets will be sorted by Discoverer name so we can have consistent tests for the maps.