This can happen in the situation where the system scales up the number of shards massively (to deal with some backlog), then scales it down again because the number of samples sent during that period is less than the number received.
The changes[1] to Marathon service discovery to support multiple
ports mean that Prometheus now attempts to scrape all ports belonging to
a Marathon service.
You can use port definition or port mapping labels to select which
ports to scrape, but that requires service owners to update their
Marathon configuration.
To allow for a smoother migration path, add a
`__meta_marathon_port_index` label whose value is the port's sequential
index. For example, PORT0 has the value `0`, PORT1 has the value `1`,
and so on.
This allows scraping the first available port (the previous behaviour)
in addition to ports with a `metrics` label.
For example, here's the relabel configuration we might use with
this patch:
- action: keep
  source_labels: ['__meta_marathon_port_definition_label_metrics', '__meta_marathon_port_mapping_label_metrics', '__meta_marathon_port_index']
  # Keep if the port definition or port mapping has a 'metrics' label with
  # any non-empty value, or if no 'metrics' port label exists but this is
  # the service's first available port.
  regex: ([^;]+;;[^;]+|;[^;]+;[^;]+|;;0)
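For illustration (the label values here are made up), these are the
joined source label values the regex would see for a three-port app
where only PORT1 carries a `metrics` port definition label:

    ;;0              # PORT0: no 'metrics' label, but index 0  -> kept
    prometheus;;1    # PORT1: 'metrics' definition label set   -> kept
    ;;2              # PORT2: no 'metrics' label, index not 0  -> dropped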
This assumes that the Marathon API returns the ports in sorted order
(matching PORT0, PORT1, etc.), which it appears to do.
[1]: https://github.com/prometheus/prometheus/pull/2506
When decoding data from mmaped blocks, we would like to retrieve
a string backed by the mmaped region. As the underlying byte slice
never changes, this is safe.
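A rough sketch of the conversion in Go (the package and function name
below are assumptions for illustration, not necessarily the actual
code): the slice header is reinterpreted as a string header, avoiding a
copy, which is only valid because the mmaped bytes are never modified.

    package chunkenc

    import "unsafe"

    // yoloString returns a string sharing its backing array with b.
    // This avoids a copy and is safe here only because b points into a
    // read-only mmaped region whose contents never change.
    func yoloString(b []byte) string {
        return *(*string)(unsafe.Pointer(&b))
    }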
We were still fsyncing while holding the write lock when we cut a new
segment. Given we cannot do anything but log errors, we might just as
well complete segments asynchronously.
There's no realistic use case where one would fsync after every WAL
entry, so a flush interval of 0 now means never fsyncing, which is a
much more likely use case.
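The shape of the change, as a sketch (the type and field names below
are placeholders rather than the real WAL code): the fsync and close of
the finished segment move into a goroutine so the write lock is
released immediately.

    package wal

    import (
        "log"
        "os"
        "sync"
    )

    // segmentWAL is a stripped-down stand-in for the real WAL type.
    type segmentWAL struct {
        mtx sync.Mutex
        cur *os.File
    }

    // cut swaps in the next segment file and completes the old one
    // asynchronously; on failure we can only log, so nothing is lost
    // by not blocking writers on the fsync.
    func (w *segmentWAL) cut(next *os.File) {
        w.mtx.Lock()
        old := w.cur
        w.cur = next
        w.mtx.Unlock()

        if old == nil {
            return
        }
        go func() {
            if err := old.Sync(); err != nil {
                log.Printf("sync old WAL segment: %v", err)
            }
            if err := old.Close(); err != nil {
                log.Printf("close old WAL segment: %v", err)
            }
        }()
    }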
This adds a bucketed buffer pool to the scrapers so we don't have to
allocate a new buffer on each scrape or keep one permanently attached
to each scrape loop.
The latter can consume significant amounts of unused memory, e.g. 4GB
when scraping 2MB /metrics from 2000 targets.
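A condensed sketch of what such a bucketed pool can look like (the
names below are illustrative, not the actual scrape code): buffers are
grouped into size classes backed by sync.Pool, so a scrape can borrow a
buffer close to its previous response size and return it afterwards.

    package pool

    import "sync"

    // BucketedPool keeps one sync.Pool per size class.
    type BucketedPool struct {
        sizes []int
        pools []*sync.Pool
    }

    // New builds size classes growing geometrically from minSize to
    // maxSize; minSize must be positive and factor greater than 1.
    func New(minSize, maxSize int, factor float64) *BucketedPool {
        p := &BucketedPool{}
        for s := minSize; s <= maxSize; s = int(float64(s) * factor) {
            p.sizes = append(p.sizes, s)
            p.pools = append(p.pools, &sync.Pool{})
        }
        return p
    }

    // Get returns an empty buffer with capacity of at least size, taken
    // from the smallest bucket that fits or freshly allocated otherwise.
    func (p *BucketedPool) Get(size int) []byte {
        for i, s := range p.sizes {
            if size <= s {
                if b, ok := p.pools[i].Get().([]byte); ok {
                    return b[:0]
                }
                return make([]byte, 0, s)
            }
        }
        return make([]byte, 0, size)
    }

    // Put hands a buffer back to the bucket matching its capacity;
    // buffers that fit no bucket are left to the garbage collector.
    func (p *BucketedPool) Put(b []byte) {
        for i, s := range p.sizes {
            if cap(b) == s {
                p.pools[i].Put(b[:0])
                return
            }
        }
    }

A scrape loop would then Get a buffer sized to its last response before
scraping and Put it back once the samples have been appended.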