Also, remove the unused `providers` field in targetSet.
If the config file changes, we recreate all providers (by calling
`providersFromConfig`) and retrieve all targets anew from the newly
created providers. From that perspective, it does no harm to clean up
the target group map in the targetSet. Not doing so (as was the case
so far) keeps stale targets around. This mattered whenever an existing
key in the target group map was not overwritten in the initial fetch
of all targets from the providers. Two examples where that mattered
(a sketch of the fix follows them):
```
scrape_configs:
- job_name: "foo"
  static_configs:
  - targets: ["foo:9090"]
  - targets: ["bar:9090"]
```
updated to:
```
scrape_configs:
- job_name: "foo"
  static_configs:
  - targets: ["foo:9090"]
```
`bar:9090` would still be monitored. (The static provider just
enumerates the target groups. If the number of target groups
decreases, the old ones stay around.)
```
scrape_configs:
- job_name: "foo"
  dns_sd_configs:
  - names:
    - "srv.name.one.example.org"
```
updated to:
```
scrape_configs:
- job_name: "foo"
  dns_sd_configs:
  - names:
    - "srv.name.two.example.org"
```
Now both SRV records are still monitored. The SRV name is part of the
key in the target group map, so the new one is simply added and the
old one stays around.
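To illustrate the fix, here is a minimal Go sketch with made-up,
simplified stand-ins for the real targetSet and target types (the
actual code differs in structure and detail):
```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for the real types.
type Target struct{ addr string }

type targetSet struct {
	// tgroups maps a provider-specific key (e.g. the SRV name or the
	// index of a static group) to the targets of that group.
	tgroups map[string][]Target
}

// update is called after a config reload with the groups freshly
// retrieved from the recreated providers. Resetting the map first
// guarantees that keys absent from the new config do not linger.
func (ts *targetSet) update(fresh map[string][]Target) {
	ts.tgroups = map[string][]Target{} // drop all stale groups
	for key, targets := range fresh {
		ts.tgroups[key] = targets
	}
}

func main() {
	ts := &targetSet{tgroups: map[string][]Target{
		"static/0": {{"foo:9090"}},
		"static/1": {{"bar:9090"}},
	}}
	// Reload with only one static group left. Without the reset in
	// update, "static/1" (bar:9090) would stay around.
	ts.update(map[string][]Target{"static/0": {{"foo:9090"}}})
	fmt.Println(ts.tgroups) // map[static/0:[{foo:9090}]]
}
```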
Obviously, this should be covered by tests, and should have been
before, not only for this case. This is the quick fix. I have created
https://github.com/prometheus/prometheus/issues/1906 to track test
creation.
Fixes https://github.com/prometheus/prometheus/issues/1610.
In https://github.com/prometheus/prometheus/pull/1782, we moved to a
custom flag set to avoid getting test flags into the main prometheus
binary. However, that removed the logging flags, too. This commit
updates the vendoring to a version of the log package that allows
adding the log flags to our flag set explicitly.
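A minimal sketch of what this enables, assuming the vendored package
is github.com/prometheus/common/log and that the updated version
exposes an AddFlags helper for registering its flags on a flag set:
```go
package main

import (
	"flag"
	"os"

	"github.com/prometheus/common/log"
)

func main() {
	// A custom flag set keeps test flags registered on the global
	// flag.CommandLine out of the binary.
	fs := flag.NewFlagSet("prometheus", flag.ContinueOnError)

	// Explicitly add the logging flags (e.g. -log.level) to our set;
	// with the global flag set they were registered implicitly.
	log.AddFlags(fs)

	if err := fs.Parse(os.Args[1:]); err != nil {
		os.Exit(2)
	}
	log.Infoln("flags parsed, logging configured")
}
```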
If the chunks of a series in the checkpoint are all older than the
latest chunk on disk, the head chunk is persisted and therefore has to
be declared closed.
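A heavily simplified sketch of that condition, with hypothetical type
and field names (the real logic lives in the local storage's
checkpoint recovery):
```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real series and chunk types.
type chunkDesc struct {
	lastTime int64 // timestamp of the chunk's last sample
}

type memorySeries struct {
	chunkDescs      []*chunkDesc
	headChunkClosed bool
}

// reconcileCheckpoint compares the checkpointed chunks against the newest
// chunk found on disk. If all checkpointed chunks are older, the head chunk
// must already be persisted, so it has to be declared closed; otherwise it
// would be appended to and persisted a second time.
func (s *memorySeries) reconcileCheckpoint(newestOnDisk int64) {
	for _, cd := range s.chunkDescs {
		if cd.lastTime > newestOnDisk {
			return // head chunk is newer than anything on disk
		}
	}
	s.headChunkClosed = true
}

func main() {
	s := &memorySeries{chunkDescs: []*chunkDesc{{lastTime: 100}, {lastTime: 200}}}
	s.reconcileCheckpoint(300)     // newest on-disk chunk ends at t=300
	fmt.Println(s.headChunkClosed) // true
}
```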
It would be great to have a test for this, but that would require more
plumbing, subject of #447.
This adds a `role` field to the Kubernetes SD config, which indicates
which type of Kubernetes SD should be run.
For example, it is no longer possible to discover pods and nodes with
the same SD configuration.
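Discovering both now takes two SD configurations, along these lines
(job names are made up for illustration):
```
scrape_configs:
- job_name: "kubernetes-nodes"
  kubernetes_sd_configs:
  - role: node
- job_name: "kubernetes-pods"
  kubernetes_sd_configs:
  - role: pod
```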
This removes several outdated or unnecessary ignore patterns,
especially those that match random words such as 'local' or 'core'.
Such patterns repeatedly caused weird behavior that is hard to debug,
e.g. invisible vendored files.
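A hypothetical illustration of the problem (pattern and path are made
up):
```
# An unanchored pattern matches the word anywhere in the tree:
core    # also ignores e.g. vendor/github.com/foo/core/bar.go

# Anchoring it to the repository root avoids that:
/core
```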