Commit 896f951e68:

* Force buckets in a histogram to be monotonic for quantile estimation

  The assumption that bucket counts increase monotonically with increasing
  upperBound may be violated during:

  * Recording rule evaluation of histogram_quantile, especially when rate()
    has been applied to the underlying bucket timeseries.
  * Evaluation of histogram_quantile computed over federated bucket
    timeseries, especially when rate() has been applied.

  This is because scraped data is not made available to recording rule
  evaluation or federation atomically, so some buckets are computed with
  data from the N most recent scrapes, while other buckets are missing the
  most recent observations.

  Monotonicity is usually guaranteed: if a bucket with upper bound u1 has
  count c1, then any bucket with a higher upper bound u > u1 must have
  counted all c1 observations and perhaps more, so that c >= c1. Randomly
  interspersed partial sampling breaks that guarantee, and rate()
  exacerbates the problem. Specifically, suppose the bucket with le=1000
  has a count of 10 from 4 samples, but the bucket with le=2000 has a
  count of 7 from only 3 samples: monotonicity is broken. rate() makes
  this worse because under normal operation cumulative bucket counts
  diverge enough that small differences from missing samples are not a
  problem, and rate() removes that divergence.

  bucketQuantile depends on that monotonicity to do a binary search for
  the bucket with the qth percentile count, so breaking the guarantee
  causes bucketQuantile() to return undefined (nonsense) results.

  As a somewhat hacky solution until the Prometheus project is ready to
  accept the changes required to make scrapes atomic, we calculate the
  "envelope" of the histogram buckets, essentially removing any decreases
  in the count between successive buckets.

* Fix up comment docs for ensureMonotonic

* ensureMonotonic: Use switch statement

  Use a switch statement rather than if/else for better readability.
  Process the most frequent cases first.
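The envelope pass the message describes is easy to sketch. Below is a minimal illustration in Go; the `bucket` type and its field names are assumptions for this sketch, not necessarily the identifiers used in Prometheus itself:

```go
package quantile

// bucket pairs a histogram bucket's upper bound with its cumulative
// count. (Illustrative names, not necessarily Prometheus's own.)
type bucket struct {
	upperBound float64
	count      float64
}

// ensureMonotonic walks buckets in order of increasing upperBound and
// clamps any count that falls below the running maximum back up to it,
// i.e. it computes the "envelope" described in the commit message.
func ensureMonotonic(buckets []bucket) {
	max := buckets[0].count
	for i := 1; i < len(buckets); i++ {
		switch {
		case buckets[i].count > max:
			max = buckets[i].count // counts grew, as they normally do
		case buckets[i].count < max:
			buckets[i].count = max // nonmonotonic dip: clamp to the envelope
		}
	}
}
```

bucketQuantile can run a pass like this before its binary search, restoring the invariant the search depends on. The test cases below exercise histogram_quantile and end with the nonmonotonic regression case this change targets.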
# Two histograms with 4 buckets each (x_sum and x_count not included,
# only buckets). Lowest bucket for one histogram < 0, for the other >
# 0. They have the same name, just separated by label. Not useful in
# practice, but can happen (if clients change bucketing), and the
# server has to cope with it.

# Test histogram.
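# (Series notation: "0+5x10" starts at 0 and adds 5 at each 5m step, ten
# times, so the value at 50m is 50.)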
load 5m
    testhistogram_bucket{le="0.1", start="positive"}   0+5x10
    testhistogram_bucket{le=".2", start="positive"}    0+7x10
    testhistogram_bucket{le="1e0", start="positive"}   0+11x10
    testhistogram_bucket{le="+Inf", start="positive"}  0+12x10
    testhistogram_bucket{le="-.2", start="negative"}   0+1x10
    testhistogram_bucket{le="-0.1", start="negative"}  0+2x10
    testhistogram_bucket{le="0.3", start="negative"}   0+2x10
    testhistogram_bucket{le="+Inf", start="negative"}  0+3x10

# Now a more realistic histogram per job and instance to test aggregation.
load 5m
    request_duration_seconds_bucket{job="job1", instance="ins1", le="0.1"}   0+1x10
    request_duration_seconds_bucket{job="job1", instance="ins1", le="0.2"}   0+3x10
    request_duration_seconds_bucket{job="job1", instance="ins1", le="+Inf"}  0+4x10
    request_duration_seconds_bucket{job="job1", instance="ins2", le="0.1"}   0+2x10
    request_duration_seconds_bucket{job="job1", instance="ins2", le="0.2"}   0+5x10
    request_duration_seconds_bucket{job="job1", instance="ins2", le="+Inf"}  0+6x10
    request_duration_seconds_bucket{job="job2", instance="ins1", le="0.1"}   0+3x10
    request_duration_seconds_bucket{job="job2", instance="ins1", le="0.2"}   0+4x10
    request_duration_seconds_bucket{job="job2", instance="ins1", le="+Inf"}  0+6x10
    request_duration_seconds_bucket{job="job2", instance="ins2", le="0.1"}   0+4x10
    request_duration_seconds_bucket{job="job2", instance="ins2", le="0.2"}   0+7x10
    request_duration_seconds_bucket{job="job2", instance="ins2", le="+Inf"}  0+9x10

# Quantile too low.
eval instant at 50m histogram_quantile(-0.1, testhistogram_bucket)
    {start="positive"} -Inf
    {start="negative"} -Inf

# Quantile too high.
eval instant at 50m histogram_quantile(1.01, testhistogram_bucket)
    {start="positive"} +Inf
    {start="negative"} +Inf
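# (A quantile argument outside [0, 1] is invalid; histogram_quantile
# signals this by returning -Inf below the range and +Inf above it.)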

# Quantile value in lowest bucket, which is positive.
eval instant at 50m histogram_quantile(0, testhistogram_bucket{start="positive"})
    {start="positive"} 0

# Quantile value in lowest bucket, which is negative.
eval instant at 50m histogram_quantile(0, testhistogram_bucket{start="negative"})
    {start="negative"} -0.2
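# (For the lowest bucket, bucketQuantile assumes a lower bound of 0 when
# the bucket's upper bound is positive, hence the 0 above; when the upper
# bound is not positive, the upper bound itself is returned, hence -0.2.)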

# Quantile value in highest bucket.
eval instant at 50m histogram_quantile(1, testhistogram_bucket)
    {start="positive"} 1
    {start="negative"} 0.3
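# (q=1 lands in the +Inf bucket, for which the upper bound of the highest
# finite bucket is returned: 1 and 0.3 respectively.)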

# Finally some useful quantiles.
eval instant at 50m histogram_quantile(0.2, testhistogram_bucket)
    {start="positive"} 0.048
    {start="negative"} -0.2
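# Hand check of the positive result: at 50m the cumulative counts are
# 50 (le 0.1), 70 (le .2), 110 (le 1e0), 120 (+Inf). The 0.2-quantile
# rank is 0.2 * 120 = 24, which falls in the first bucket, so linear
# interpolation from an assumed lower bound of 0 gives
# 0 + 0.1 * 24/50 = 0.048.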

eval instant at 50m histogram_quantile(0.5, testhistogram_bucket)
    {start="positive"} 0.15
    {start="negative"} -0.15

eval instant at 50m histogram_quantile(0.8, testhistogram_bucket)
    {start="positive"} 0.72
    {start="negative"} 0.3

# More realistic with rates.
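# (The results match the raw counters above: rate() scales every bucket
# by the same factor, and histogram_quantile depends only on the ratios
# between bucket counts.)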
eval instant at 50m histogram_quantile(0.2, rate(testhistogram_bucket[5m]))
    {start="positive"} 0.048
    {start="negative"} -0.2

eval instant at 50m histogram_quantile(0.5, rate(testhistogram_bucket[5m]))
    {start="positive"} 0.15
    {start="negative"} -0.15

eval instant at 50m histogram_quantile(0.8, rate(testhistogram_bucket[5m]))
    {start="positive"} 0.72
    {start="negative"} 0.3

# Aggregated histogram: Everything in one.
eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le))
    {} 0.075

eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le))
    {} 0.1277777777777778
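# Hand check of the 0.5-quantile: the per-5m increments summed across all
# four series are 10 (le 0.1), 19 (le 0.2), 25 (+Inf); rate() divides each
# by the same window length, which cancels out. Rank 0.5 * 25 = 12.5 falls
# in the le 0.2 bucket, so 0.1 + 0.1 * (12.5 - 10) / (19 - 10) = 0.12777...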

# Aggregated histogram: Everything in one. Now with avg, which does not
# change anything: dividing every bucket by the same series count leaves
# the ratios between buckets intact.
eval instant at 50m histogram_quantile(0.3, avg(rate(request_duration_seconds_bucket[5m])) by (le))
    {} 0.075

eval instant at 50m histogram_quantile(0.5, avg(rate(request_duration_seconds_bucket[5m])) by (le))
    {} 0.12777777777777778

# Aggregated histogram: By instance.
eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, instance))
    {instance="ins1"} 0.075
    {instance="ins2"} 0.075

eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, instance))
    {instance="ins1"} 0.1333333333
    {instance="ins2"} 0.125

# Aggregated histogram: By job.
eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, job))
    {job="job1"} 0.1
    {job="job2"} 0.0642857142857143

eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, job))
    {job="job1"} 0.14
    {job="job2"} 0.1125

# Aggregated histogram: By job and instance.
eval instant at 50m histogram_quantile(0.3, sum(rate(request_duration_seconds_bucket[5m])) by (le, job, instance))
    {instance="ins1", job="job1"} 0.11
    {instance="ins2", job="job1"} 0.09
    {instance="ins1", job="job2"} 0.06
    {instance="ins2", job="job2"} 0.0675

eval instant at 50m histogram_quantile(0.5, sum(rate(request_duration_seconds_bucket[5m])) by (le, job, instance))
    {instance="ins1", job="job1"} 0.15
    {instance="ins2", job="job1"} 0.1333333333333333
    {instance="ins1", job="job2"} 0.1
    {instance="ins2", job="job2"} 0.1166666666666667

# The unaggregated histogram for comparison. Same result as the previous one.
eval instant at 50m histogram_quantile(0.3, rate(request_duration_seconds_bucket[5m]))
    {instance="ins1", job="job1"} 0.11
    {instance="ins2", job="job1"} 0.09
    {instance="ins1", job="job2"} 0.06
    {instance="ins2", job="job2"} 0.0675

eval instant at 50m histogram_quantile(0.5, rate(request_duration_seconds_bucket[5m]))
    {instance="ins1", job="job1"} 0.15
    {instance="ins2", job="job1"} 0.13333333333333333
    {instance="ins1", job="job2"} 0.1
    {instance="ins2", job="job2"} 0.11666666666666667

# A histogram with nonmonotonic bucket counts. This may happen when recording
# rule evaluation or federation races scrape ingestion, causing some bucket
# counts to be derived from fewer samples. The wrong answer we want to avoid
# is for histogram_quantile(0.99, nonmonotonic_bucket) to return ~1000 instead
# of 1.

load 5m
    nonmonotonic_bucket{le="0.1"}   0+1x10
    nonmonotonic_bucket{le="1"}     0+9x10
    nonmonotonic_bucket{le="10"}    0+8x10
    nonmonotonic_bucket{le="100"}   0+8x10
    nonmonotonic_bucket{le="1000"}  0+9x10
    nonmonotonic_bucket{le="+Inf"}  0+9x10

# Nonmonotonic buckets
eval instant at 50m histogram_quantile(0.99, nonmonotonic_bucket)
    {} 0.989875
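# Hand check: at 50m the raw cumulative counts are 10, 90, 80, 80, 90, 90.
# ensureMonotonic's running-maximum envelope turns these into
# 10, 90, 90, 90, 90, 90. Rank 0.99 * 90 = 89.1 falls in the le="1"
# bucket, so 0.1 + (1 - 0.1) * (89.1 - 10) / (90 - 10) = 0.989875,
# rather than a nonsense answer near 1000 from the broken binary search.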