Mirror of https://github.com/prometheus/prometheus.git
Commit af91fb8e31:
This is done by bucketing chunks by fingerprint. If persisting to disk falls behind, more and more chunks pile up in the queue. As soon as there are "double hits", we now persist both chunks of a series in one go, doubling the disk throughput (assuming it is limited by disk seeks). Should even more pile up so that we end up with "triple hits", we persist those first, and so on.

Even with millions of time series, this still helps, assuming not all of them grow at the same speed. Series that receive many samples and/or are not very compressible accumulate chunks faster, and they will soon get double- or triple-writes.

To improve the chance of double writes, -storage.local.persistence-queue-capacity could be set to a higher value. However, that would slow down shutdown a lot, as the whole queue has to be worked through, so we leave it to the user whether to set it that high. A more fundamental solution would be to checkpoint not only head chunks but also chunks still in the persist queue. That would be quite complicated for a rather limited use case (running many time series with high ingestion rates on slow spinning disks).
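A rough sketch of the bucketing idea follows. This is not the actual implementation in this package; the Fingerprint, Chunk, persistQueue, enqueue, and drainOne names are invented for illustration, and the real persist queue additionally has to deal with locking, checkpointing, and the chunk encodings defined here.

```go
package main

import "fmt"

// Fingerprint identifies a time series and Chunk stands in for an encoded
// sample chunk. Both are simplified placeholders, not the real types.
type Fingerprint uint64
type Chunk struct{ payload []byte }

// persistQueue buckets queued chunks by series fingerprint so that several
// chunks of the same series can be persisted with a single disk seek.
type persistQueue struct {
	buckets map[Fingerprint][]Chunk
}

func newPersistQueue() *persistQueue {
	return &persistQueue{buckets: map[Fingerprint][]Chunk{}}
}

// enqueue adds a chunk to the bucket of its series.
func (q *persistQueue) enqueue(fp Fingerprint, c Chunk) {
	q.buckets[fp] = append(q.buckets[fp], c)
}

// drainOne persists the fullest bucket first ("triple hits" before "double
// hits" before single chunks), writing all queued chunks of that series in
// one go. It reports whether anything was left to persist.
func (q *persistQueue) drainOne(persist func(Fingerprint, []Chunk) error) (bool, error) {
	var (
		bestFP  Fingerprint
		bestLen int
	)
	for fp, chunks := range q.buckets {
		if len(chunks) > bestLen {
			bestFP, bestLen = fp, len(chunks)
		}
	}
	if bestLen == 0 {
		return false, nil // Queue is empty.
	}
	chunks := q.buckets[bestFP]
	delete(q.buckets, bestFP)
	return true, persist(bestFP, chunks)
}

func main() {
	q := newPersistQueue()
	// A busy series (fingerprint 1) has fallen three chunks behind, while a
	// quiet series (fingerprint 2) has only a single chunk queued.
	for i := 0; i < 3; i++ {
		q.enqueue(1, Chunk{payload: []byte{byte(i)}})
	}
	q.enqueue(2, Chunk{payload: []byte{42}})

	persist := func(fp Fingerprint, chunks []Chunk) error {
		fmt.Printf("persisting %d chunk(s) of series %d in one write\n", len(chunks), fp)
		return nil
	}
	for {
		ok, err := q.drainOne(persist)
		if err != nil || !ok {
			break
		}
	}
}
```

Draining this queue writes the three chunks of the busy series in a single pass before touching the series with only one chunk queued, which is the "double/triple hit" effect described above.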
Directory listing:

- codable
- flock
- index
- chunk.go
- crashrecovery.go
- delta.go
- instrumentation.go
- interface.go
- locker.go
- locker_test.go
- persistence.go
- persistence_test.go
- preload.go
- series.go
- storage.go
- storage_test.go
- test_helpers.go