Commit d2ab49c396
Also, set a much higher default value. Chunk persist requests can be quite spiky: if you collect a large number of very similar time series, they tend to finish a chunk at about the same time. There is no reason to back up scraping just because of that. The rationale for the new default value is "1/8 of the chunks in memory".
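As a rough illustration of the "1/8 of the chunks in memory" rationale, the sketch below derives a persist-queue capacity from an assumed in-memory chunk limit. The function name and the example chunk count are hypothetical; this is not the actual Prometheus code.

```go
package main

import "fmt"

// defaultPersistQueueCapacity is an illustrative (hypothetical) helper that
// sizes the chunk persist queue as one eighth of the maximum number of chunks
// kept in memory, so a spike of simultaneously finished chunks does not
// immediately stall scraping.
func defaultPersistQueueCapacity(maxChunksInMemory int) int {
	capacity := maxChunksInMemory / 8
	if capacity < 1 {
		capacity = 1 // never return a zero-length queue
	}
	return capacity
}

func main() {
	// Assuming 1,048,576 chunks may be held in memory (an example figure, not
	// necessarily the real default), the derived queue capacity is 131,072.
	maxChunks := 1048576
	fmt.Printf("persist queue capacity: %d\n", defaultPersistQueueCapacity(maxChunks))
}
```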
local
metric
remote