retrieval: Clean up target group map on config reload

Also, remove unused `providers` field in targetSet.

If the config file changes, we recreate all providers (by calling
`providersFromConfig`) and retrieve all targets anew from the newly
created providers. From that perspective, it cannot hurt to clean up
the target group map in the targetSet. Not doing so (as was the case
so far) keeps stale targets around. This mattered if an existing
key in the target group map was not overwritten in the initial fetch
of all targets from the providers. Examples where that mattered:

```
scrape_configs:
- job_name: "foo"
  static_configs:
  - targets: ["foo:9090"]
  - targets: ["bar:9090"]
```
updated to:
```
scrape_configs:
- job_name: "foo"
  static_configs:
  - targets: ["foo:9090"]
```

`bar:9090` would still be monitored. (The static provider just
enumerates the target groups. If the number of target groups
decreases, the old ones stay around.)
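
To make the mechanism concrete, here is a minimal, self-contained sketch of
the stale-key effect. `TargetGroup`, `staticGroups`, and the `static/<i>`
source format are simplified stand-ins for the real retrieval types, not the
actual implementation:

```
package main

import "fmt"

// TargetGroup is a stand-in for Prometheus' config.TargetGroup; only the
// fields needed for the illustration are included.
type TargetGroup struct {
    Source  string
    Targets []string
}

// staticGroups mimics what the static provider does: it enumerates the
// static_configs entries and keys each resulting group by its index.
// The "static/<i>" key format is an assumption for this sketch.
func staticGroups(entries [][]string) []*TargetGroup {
    groups := make([]*TargetGroup, 0, len(entries))
    for i, targets := range entries {
        groups = append(groups, &TargetGroup{
            Source:  fmt.Sprintf("static/%d", i),
            Targets: targets,
        })
    }
    return groups
}

func main() {
    // tgroups plays the role of targetSet.tgroups: a map keyed by the
    // group's unique source string.
    tgroups := map[string][]string{}

    // Initial config: two static_configs entries.
    for _, tg := range staticGroups([][]string{{"foo:9090"}, {"bar:9090"}}) {
        tgroups[tg.Source] = tg.Targets
    }

    // Reload with only one entry: key "static/1" is never written again,
    // so without clearing the map first, bar:9090 keeps being scraped.
    for _, tg := range staticGroups([][]string{{"foo:9090"}}) {
        tgroups[tg.Source] = tg.Targets
    }

    // Both keys are still present, e.g.:
    // map[static/0:[foo:9090] static/1:[bar:9090]]
    fmt.Println(tgroups)
}
```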

```
scrape_configs:
- job_name: "foo"
  dns_sd_configs:
  - names:
    - "srv.name.one.example.org"
```
updated to:
```
scrape_configs:
- job_name: "foo"
  dns_sd_configs:
  - names:
    - "srv.name.two.example.org"
```

Now both SRV records are still monitored. The SRV name is part of the
key in the target group map, thus the new one is just added and the
old one stays around.
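
The DNS case is the same effect with a different key. A tiny sketch (the
`dns/` prefix and `dnsGroupSource` are assumptions for illustration; the
point is only that the SRV name is part of the group's source):

```
package main

import "fmt"

// dnsGroupSource sketches how a DNS SD target group could be keyed; the
// exact format is assumed, but the queried SRV name is part of the key.
func dnsGroupSource(name string) string {
    return "dns/" + name
}

func main() {
    tgroups := map[string]bool{}

    tgroups[dnsGroupSource("srv.name.one.example.org")] = true // initial config
    tgroups[dnsGroupSource("srv.name.two.example.org")] = true // after the reload

    // Both keys are present: nothing ever writes to the old one again,
    // so the old SRV record keeps being scraped unless the map is cleared.
    fmt.Println(len(tgroups)) // 2
}
```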

Obviously, this should have tests, and it should have had them before,
not only for this case. This is the quick fix. I have created
https://github.com/prometheus/prometheus/issues/1906 to track test
creation.
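
For reference, a regression test in the spirit of #1906 could look roughly
like this. It is written against the toy `staticGroups` sketch above, with a
hypothetical `applyStatic` helper that mirrors the fix (clear the map, then
re-populate), not against the real retrieval package:

```
package main

import "testing"

// applyStatic is a hypothetical reload helper for the toy map above: it
// clears the map and re-populates it from the new config, mirroring what
// runProviders now does with ts.tgroups.
func applyStatic(tgroups map[string][]string, entries [][]string) {
    for k := range tgroups {
        delete(tgroups, k)
    }
    for _, tg := range staticGroups(entries) {
        tgroups[tg.Source] = tg.Targets
    }
}

func TestReloadDropsStaleGroups(t *testing.T) {
    tgroups := map[string][]string{}

    applyStatic(tgroups, [][]string{{"foo:9090"}, {"bar:9090"}})
    applyStatic(tgroups, [][]string{{"foo:9090"}})

    if _, ok := tgroups["static/1"]; ok {
        t.Fatalf("stale target group %q survived the reload", "static/1")
    }
}
```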

Fixes https://github.com/prometheus/prometheus/issues/1610 .
Commit e2b3626e0c by beorn7, 2016-08-22 19:25:33 +02:00 (parent be4019065c)

```
@@ -180,8 +180,7 @@ type targetSet struct {
 	mtx sync.RWMutex
 
 	// Sets of targets by a source string that is unique across target providers.
-	tgroups   map[string][]*Target
-	providers map[string]TargetProvider
+	tgroups map[string][]*Target
 
 	scrapePool *scrapePool
 	config     *config.ScrapeConfig
@@ -193,7 +192,6 @@ type targetSet struct {
 
 func newTargetSet(cfg *config.ScrapeConfig, app storage.SampleAppender) *targetSet {
 	ts := &targetSet{
-		tgroups:    map[string][]*Target{},
 		scrapePool: newScrapePool(cfg, app),
 		syncCh:     make(chan struct{}, 1),
 		config:     cfg,
@@ -272,6 +270,11 @@ func (ts *targetSet) runProviders(ctx context.Context, providers map[string]TargetProvider) {
 	}
 	ctx, ts.cancelProviders = context.WithCancel(ctx)
 
+	// (Re-)create a fresh tgroups map to not keep stale targets around. We
+	// will retrieve all targets below anyway, so cleaning up everything is
+	// safe and doesn't inflict any additional cost.
+	ts.tgroups = map[string][]*Target{}
+
 	for name, prov := range providers {
 		wg.Add(1)
 
```