mirror of https://github.com/prometheus/prometheus.git
synced 2024-09-19 23:37:31 -07:00

Compare commits: 16 commits (37b5dd8061 ... 98ace42ea4)

98ace42ea4
e480cf21eb
df9916ef66
c7fb6188b4
6e899fbb16
aa6dd70812
96e5a94d29
6fcd225aee
c36589a6dd
546f780006
15cea39136
69619990f8
1949baffe3
e6678e4637
f743f7e6f2
53f8da58b3
@@ -2,6 +2,7 @@

 ## unreleased

+* [CHANGE] `holt_winters` is now called `double_exponential_smoothing` and moves behind the [experimental-promql-functions feature flag](https://prometheus.io/docs/prometheus/latest/feature_flags/#experimental-promql-functions). #14930
 * [BUGFIX] PromQL: Only return "possible non-counter" annotation when `rate` returns points. #14910

 ## 3.0.0-beta.0 / 2024-09-05
@@ -733,7 +733,7 @@ func dumpSamples(ctx context.Context, dbDir, sandboxDirRoot string, mint, maxt i
 		for _, mset := range matcherSets {
 			sets = append(sets, q.Select(ctx, true, nil, mset...))
 		}
-		ss = storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
+		ss = storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge)
 	} else {
 		ss = q.Select(ctx, false, nil, matcherSets[0]...)
 	}
@@ -326,45 +326,70 @@ With native histograms, aggregating everything works as usual without any `by` c

     histogram_quantile(0.9, sum(rate(http_request_duration_seconds[10m])))

-The `histogram_quantile()` function interpolates quantile values by
-assuming a linear distribution within a bucket.
+In the (common) case that a quantile value does not coincide with a bucket
+boundary, the `histogram_quantile()` function interpolates the quantile value
+within the bucket the quantile value falls into. For classic histograms, for
+native histograms with custom bucket boundaries, and for the zero bucket of
+other native histograms, it assumes a uniform distribution of observations
+within the bucket (also called _linear interpolation_). For the
+non-zero-buckets of native histograms with a standard exponential bucketing
+schema, the interpolation is done under the assumption that the samples within
+the bucket are distributed in a way that they would uniformly populate the
+buckets in a hypothetical histogram with higher resolution. (This is also
+called _exponential interpolation_.)

 If `b` has 0 observations, `NaN` is returned. For φ < 0, `-Inf` is
 returned. For φ > 1, `+Inf` is returned. For φ = `NaN`, `NaN` is returned.
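The two interpolation modes that the docs change above distinguishes can be sketched numerically. This is an illustrative sketch, not Prometheus code; the helper names are made up, and the formulas just restate the uniform-in-linear-space vs. uniform-in-log-space assumptions from the text:

```python
import math

def linear_interp(lo, hi, frac):
    # Uniform distribution of observations within (lo, hi]:
    # the quantile moves linearly with the fraction of observations.
    return lo + (hi - lo) * frac

def exponential_interp(lo, hi, frac):
    # Observations assumed to uniformly populate hypothetical
    # higher-resolution sub-buckets, whose boundaries are spaced
    # logarithmically: interpolate in log space instead.
    return lo * (hi / lo) ** frac

# A median falling halfway into bucket (1, 2]:
print(linear_interp(1.0, 2.0, 0.5))       # 1.5 (classic histograms)
print(exponential_interp(1.0, 2.0, 0.5))  # sqrt(2), native histograms
```

This is why the native-histogram test expectations below change from 1.5 to 1.414213562373095 for the same data.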
-The following is only relevant for classic histograms: If `b` contains
-fewer than two buckets, `NaN` is returned. The highest bucket must have an
-upper bound of `+Inf`. (Otherwise, `NaN` is returned.) If a quantile is located
-in the highest bucket, the upper bound of the second highest bucket is
-returned. A lower limit of the lowest bucket is assumed to be 0 if the upper
-bound of that bucket is greater than
-0. In that case, the usual linear interpolation is applied within that
-bucket. Otherwise, the upper bound of the lowest bucket is returned for
-quantiles located in the lowest bucket.
+Special cases for classic histograms:

-You can use `histogram_quantile(0, v instant-vector)` to get the estimated minimum value stored in
-a histogram.
+* If `b` contains fewer than two buckets, `NaN` is returned.
+* The highest bucket must have an upper bound of `+Inf`. (Otherwise, `NaN` is
+  returned.)
+* If a quantile is located in the highest bucket, the upper bound of the second
+  highest bucket is returned.
+* The lower limit of the lowest bucket is assumed to be 0 if the upper bound of
+  that bucket is greater than 0. In that case, the usual linear interpolation
+  is applied within that bucket. Otherwise, the upper bound of the lowest
+  bucket is returned for quantiles located in the lowest bucket.

-You can use `histogram_quantile(1, v instant-vector)` to get the estimated maximum value stored in
-a histogram.
+Special cases for native histograms (relevant for the exact interpolation
+happening within the zero bucket):

-Buckets of classic histograms are cumulative. Therefore, the following should always be the case:
+* A zero bucket with finite width is assumed to contain no negative
+  observations if the histogram has observations in positive buckets, but none
+  in negative buckets.
+* A zero bucket with finite width is assumed to contain no positive
+  observations if the histogram has observations in negative buckets, but none
+  in positive buckets.

-* The counts in the buckets are monotonically increasing (strictly non-decreasing).
-* A lack of observations between the upper limits of two consecutive buckets results in equal counts
-in those two buckets.
+You can use `histogram_quantile(0, v instant-vector)` to get the estimated
+minimum value stored in a histogram.
-However, floating point precision issues (e.g. small discrepancies introduced by computing of buckets
-with `sum(rate(...))`) or invalid data might violate these assumptions. In that case,
-`histogram_quantile` would be unable to return meaningful results. To mitigate the issue,
-`histogram_quantile` assumes that tiny relative differences between consecutive buckets are happening
-because of floating point precision errors and ignores them. (The threshold to ignore a difference
-between two buckets is a trillionth (1e-12) of the sum of both buckets.) Furthermore, if there are
-non-monotonic bucket counts even after this adjustment, they are increased to the value of the
-previous buckets to enforce monotonicity. The latter is evidence for an actual issue with the input
-data and is therefore flagged with an informational annotation reading `input to histogram_quantile
-needed to be fixed for monotonicity`. If you encounter this annotation, you should find and remove
-the source of the invalid data.
+You can use `histogram_quantile(1, v instant-vector)` to get the estimated
+maximum value stored in a histogram.

+Buckets of classic histograms are cumulative. Therefore, the following should
+always be the case:
+
+* The counts in the buckets are monotonically increasing (strictly
+  non-decreasing).
+* A lack of observations between the upper limits of two consecutive buckets
+  results in equal counts in those two buckets.

+However, floating point precision issues (e.g. small discrepancies introduced
+by computing of buckets with `sum(rate(...))`) or invalid data might violate
+these assumptions. In that case, `histogram_quantile` would be unable to return
+meaningful results. To mitigate the issue, `histogram_quantile` assumes that
+tiny relative differences between consecutive buckets are happening because of
+floating point precision errors and ignores them. (The threshold to ignore a
+difference between two buckets is a trillionth (1e-12) of the sum of both
+buckets.) Furthermore, if there are non-monotonic bucket counts even after this
+adjustment, they are increased to the value of the previous buckets to enforce
+monotonicity. The latter is evidence for an actual issue with the input data
+and is therefore flagged with an informational annotation reading `input to
+histogram_quantile needed to be fixed for monotonicity`. If you encounter this
+annotation, you should find and remove the source of the invalid data.
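The monotonicity mitigation described in that paragraph can be sketched as follows. This is a simplified illustration under the stated 1e-12 threshold, not the actual Prometheus implementation:

```python
def fix_monotonicity(buckets):
    """Enforce non-decreasing cumulative bucket counts.

    Dips within 1e-12 of the sum of the two adjacent buckets are
    treated as floating point noise and silently smoothed; larger
    dips are also clamped up to the previous count, but flagged
    (Prometheus emits an informational annotation in that case).
    """
    fixed = list(buckets)
    needed_fixing = False
    for i in range(1, len(fixed)):
        if fixed[i] < fixed[i - 1]:
            diff = fixed[i - 1] - fixed[i]
            if diff > 1e-12 * (fixed[i] + fixed[i - 1]):
                needed_fixing = True  # real data problem, not noise
            fixed[i] = fixed[i - 1]   # enforce monotonicity either way
    return fixed, needed_fixing

# A tiny dip is smoothed silently; a real dip is flagged.
print(fix_monotonicity([1.0, 2.0, 1.9999999999999998, 3.0]))
print(fix_monotonicity([1.0, 2.0, 1.0, 3.0]))
```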
## `histogram_stddev()` and `histogram_stdvar()`

@@ -380,15 +405,22 @@ do not show up in the returned vector.
 Similarly, `histogram_stdvar(v instant-vector)` returns the estimated standard
 variance of observations in a native histogram.

-## `holt_winters()`
+## `double_exponential_smoothing()`

+**This function has to be enabled via the [feature flag](../feature_flags.md#experimental-promql-functions) `--enable-feature=promql-experimental-functions`.**
+
-`holt_winters(v range-vector, sf scalar, tf scalar)` produces a smoothed value
+`double_exponential_smoothing(v range-vector, sf scalar, tf scalar)` produces a smoothed value
 for time series based on the range in `v`. The lower the smoothing factor `sf`,
 the more importance is given to old data. The higher the trend factor `tf`, the
 more trends in the data is considered. Both `sf` and `tf` must be between 0 and
 1.
+For additional details, refer to [NIST Engineering Statistics Handbook](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc433.htm).
+In Prometheus V2 this function was called `holt_winters`. This caused confusion
+since the Holt-Winters method usually refers to triple exponential smoothing.
+Double exponential smoothing as implemented here is also referred to as "Holt
+Linear".

-`holt_winters` should only be used with gauges.
+`double_exponential_smoothing` should only be used with gauges.
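The algorithm behind the renamed function can be sketched in a few lines. This follows the standard Holt linear recurrence from the NIST handbook linked above; it is a hedged approximation, not the exact PromQL implementation (which operates on the samples of each series in the range `v`):

```python
def double_exponential_smoothing(samples, sf, tf):
    """Holt linear (double exponential) smoothing over a window.

    sf: smoothing factor, weight of the newest sample in the level.
    tf: trend factor, weight of the newest slope in the trend.
    """
    assert 0 < sf < 1 and 0 < tf < 1
    s = samples[0]                # initial level
    b = samples[1] - samples[0]   # initial trend estimate
    for x in samples[1:]:
        prev_s = s
        s = sf * x + (1 - sf) * (s + b)       # smoothed level
        b = tf * (s - prev_s) + (1 - tf) * b  # smoothed trend
    return s

# On perfectly linear data the smoothed value tracks the last sample,
# which is why the test expectations below come out at the raw series
# values (8000, 16000, ...).
print(double_exponential_smoothing([0, 10, 20, 30, 40, 50], 0.01, 0.1))
```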
## `hour()`
@@ -117,7 +117,7 @@ func rangeQueryCases() []benchCase {
 		},
 		// Holt-Winters and long ranges.
 		{
-			expr: "holt_winters(a_X[1d], 0.3, 0.3)",
+			expr: "double_exponential_smoothing(a_X[1d], 0.3, 0.3)",
 		},
 		{
 			expr: "changes(a_X[1d])",
@@ -350,7 +350,7 @@ func calcTrendValue(i int, tf, s0, s1, b float64) float64 {
 // data. A lower smoothing factor increases the influence of historical data. The trend factor (0 < tf < 1) affects
 // how trends in historical data will affect the current data. A higher trend factor increases the influence.
 // of trends. Algorithm taken from https://en.wikipedia.org/wiki/Exponential_smoothing titled: "Double exponential smoothing".
-func funcHoltWinters(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
+func funcDoubleExponentialSmoothing(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
 	samples := vals[0].(Matrix)[0]

 	// The smoothing factor argument.
@@ -1657,82 +1657,82 @@ func funcYear(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper)

 // FunctionCalls is a list of all functions supported by PromQL, including their types.
 var FunctionCalls = map[string]FunctionCall{
 	"abs":                          funcAbs,
 	"absent":                       funcAbsent,
 	"absent_over_time":             funcAbsentOverTime,
 	"acos":                         funcAcos,
 	"acosh":                        funcAcosh,
 	"asin":                         funcAsin,
 	"asinh":                        funcAsinh,
 	"atan":                         funcAtan,
 	"atanh":                        funcAtanh,
 	"avg_over_time":                funcAvgOverTime,
 	"ceil":                         funcCeil,
 	"changes":                      funcChanges,
 	"clamp":                        funcClamp,
 	"clamp_max":                    funcClampMax,
 	"clamp_min":                    funcClampMin,
 	"cos":                          funcCos,
 	"cosh":                         funcCosh,
 	"count_over_time":              funcCountOverTime,
 	"days_in_month":                funcDaysInMonth,
 	"day_of_month":                 funcDayOfMonth,
 	"day_of_week":                  funcDayOfWeek,
 	"day_of_year":                  funcDayOfYear,
 	"deg":                          funcDeg,
 	"delta":                        funcDelta,
 	"deriv":                        funcDeriv,
 	"exp":                          funcExp,
 	"floor":                        funcFloor,
 	"histogram_avg":                funcHistogramAvg,
 	"histogram_count":              funcHistogramCount,
 	"histogram_fraction":           funcHistogramFraction,
 	"histogram_quantile":           funcHistogramQuantile,
 	"histogram_sum":                funcHistogramSum,
 	"histogram_stddev":             funcHistogramStdDev,
 	"histogram_stdvar":             funcHistogramStdVar,
-	"holt_winters":                 funcHoltWinters,
+	"double_exponential_smoothing": funcDoubleExponentialSmoothing,
 	"hour":                         funcHour,
 	"idelta":                       funcIdelta,
 	"increase":                     funcIncrease,
 	"irate":                        funcIrate,
 	"label_replace":                funcLabelReplace,
 	"label_join":                   funcLabelJoin,
 	"ln":                           funcLn,
 	"log10":                        funcLog10,
 	"log2":                         funcLog2,
 	"last_over_time":               funcLastOverTime,
 	"mad_over_time":                funcMadOverTime,
 	"max_over_time":                funcMaxOverTime,
 	"min_over_time":                funcMinOverTime,
 	"minute":                       funcMinute,
 	"month":                        funcMonth,
 	"pi":                           funcPi,
 	"predict_linear":               funcPredictLinear,
 	"present_over_time":            funcPresentOverTime,
 	"quantile_over_time":           funcQuantileOverTime,
 	"rad":                          funcRad,
 	"rate":                         funcRate,
 	"resets":                       funcResets,
 	"round":                        funcRound,
 	"scalar":                       funcScalar,
 	"sgn":                          funcSgn,
 	"sin":                          funcSin,
 	"sinh":                         funcSinh,
 	"sort":                         funcSort,
 	"sort_desc":                    funcSortDesc,
 	"sort_by_label":                funcSortByLabel,
 	"sort_by_label_desc":           funcSortByLabelDesc,
 	"sqrt":                         funcSqrt,
 	"stddev_over_time":             funcStddevOverTime,
 	"stdvar_over_time":             funcStdvarOverTime,
 	"sum_over_time":                funcSumOverTime,
 	"tan":                          funcTan,
 	"tanh":                         funcTanh,
 	"time":                         funcTime,
 	"timestamp":                    funcTimestamp,
 	"vector":                       funcVector,
 	"year":                         funcYear,
 }

 // AtModifierUnsafeFunctions are the functions whose result
@@ -202,10 +202,11 @@ var Functions = map[string]*Function{
 		ArgTypes:   []ValueType{ValueTypeScalar, ValueTypeVector},
 		ReturnType: ValueTypeVector,
 	},
-	"holt_winters": {
-		Name:       "holt_winters",
-		ArgTypes:   []ValueType{ValueTypeMatrix, ValueTypeScalar, ValueTypeScalar},
-		ReturnType: ValueTypeVector,
+	"double_exponential_smoothing": {
+		Name:         "double_exponential_smoothing",
+		ArgTypes:     []ValueType{ValueTypeMatrix, ValueTypeScalar, ValueTypeScalar},
+		ReturnType:   ValueTypeVector,
+		Experimental: true,
 	},
 	"hour": {
 		Name: "hour",
6 promql/promqltest/testdata/functions.test (vendored)
@@ -651,7 +651,7 @@ eval_ordered instant at 50m sort_by_label(node_uname_info, "release")
 	node_uname_info{job="node_exporter", instance="4m5", release="1.11.3"} 100
 	node_uname_info{job="node_exporter", instance="4m1000", release="1.111.3"} 100

-# Tests for holt_winters
+# Tests for double_exponential_smoothing
 clear

 # positive trends
@@ -661,7 +661,7 @@ load 10s
 	http_requests{job="api-server", instance="0", group="canary"}	0+30x1000 300+80x1000
 	http_requests{job="api-server", instance="1", group="canary"}	0+40x2000

-eval instant at 8000s holt_winters(http_requests[1m], 0.01, 0.1)
+eval instant at 8000s double_exponential_smoothing(http_requests[1m], 0.01, 0.1)
 	{job="api-server", instance="0", group="production"} 8000
 	{job="api-server", instance="1", group="production"} 16000
 	{job="api-server", instance="0", group="canary"} 24000
@@ -675,7 +675,7 @@ load 10s
 	http_requests{job="api-server", instance="0", group="canary"}	0+30x1000 300-80x1000
 	http_requests{job="api-server", instance="1", group="canary"}	0-40x1000 0+40x1000

-eval instant at 8000s holt_winters(http_requests[1m], 0.01, 0.1)
+eval instant at 8000s double_exponential_smoothing(http_requests[1m], 0.01, 0.1)
 	{job="api-server", instance="0", group="production"} 0
 	{job="api-server", instance="1", group="production"} -16000
 	{job="api-server", instance="0", group="canary"} 24000
232 promql/promqltest/testdata/native_histograms.test (vendored)
@@ -46,9 +46,12 @@ eval instant at 1m histogram_fraction(1, 2, single_histogram)
 eval instant at 1m histogram_fraction(0, 8, single_histogram)
 	{} 1

-# Median is 1.5 due to linear estimation of the midpoint of the middle bucket, whose values are within range 1 < x <= 2.
+# Median is 1.414213562373095 (2**2**-1, or sqrt(2)) due to
+# exponential interpolation, i.e. the "midpoint" within range 1 < x <=
+# 2 is assumed where the bucket boundary would be if we increased the
+# resolution of the histogram by one step.
 eval instant at 1m histogram_quantile(0.5, single_histogram)
-	{} 1.5
+	{} 1.414213562373095

 clear

@@ -68,8 +71,9 @@ eval instant at 5m histogram_avg(multi_histogram)
 eval instant at 5m histogram_fraction(1, 2, multi_histogram)
 	{} 0.5

+# See explanation for exponential interpolation above.
 eval instant at 5m histogram_quantile(0.5, multi_histogram)
-	{} 1.5
+	{} 1.414213562373095


 # Each entry should look the same as the first.

@@ -85,8 +89,9 @@ eval instant at 50m histogram_avg(multi_histogram)
 eval instant at 50m histogram_fraction(1, 2, multi_histogram)
 	{} 0.5

+# See explanation for exponential interpolation above.
 eval instant at 50m histogram_quantile(0.5, multi_histogram)
-	{} 1.5
+	{} 1.414213562373095

 clear

@@ -109,8 +114,9 @@ eval instant at 5m histogram_avg(incr_histogram)
 eval instant at 5m histogram_fraction(1, 2, incr_histogram)
 	{} 0.6

+# See explanation for exponential interpolation above.
 eval instant at 5m histogram_quantile(0.5, incr_histogram)
-	{} 1.5
+	{} 1.414213562373095


 eval instant at 50m incr_histogram

@@ -129,16 +135,18 @@ eval instant at 50m histogram_avg(incr_histogram)
 eval instant at 50m histogram_fraction(1, 2, incr_histogram)
 	{} 0.8571428571428571

+# See explanation for exponential interpolation above.
 eval instant at 50m histogram_quantile(0.5, incr_histogram)
-	{} 1.5
+	{} 1.414213562373095

 # Per-second average rate of increase should be 1/(5*60) for count and buckets, then 2/(5*60) for sum.
 eval instant at 50m rate(incr_histogram[10m])
 	{} {{count:0.0033333333333333335 sum:0.006666666666666667 offset:1 buckets:[0.0033333333333333335]}}

 # Calculate the 50th percentile of observations over the last 10m.
+# See explanation for exponential interpolation above.
 eval instant at 50m histogram_quantile(0.5, rate(incr_histogram[10m]))
-	{} 1.5
+	{} 1.414213562373095

 clear

@@ -211,8 +219,9 @@ eval instant at 1m histogram_avg(negative_histogram)
 eval instant at 1m histogram_fraction(-2, -1, negative_histogram)
 	{} 0.5

+# Exponential interpolation works the same as for positive buckets, just mirrored.
 eval instant at 1m histogram_quantile(0.5, negative_histogram)
-	{} -1.5
+	{} -1.414213562373095

 clear

@@ -233,8 +242,9 @@ eval instant at 5m histogram_avg(two_samples_histogram)
 eval instant at 5m histogram_fraction(-2, -1, two_samples_histogram)
 	{} 0.5

+# See explanation for exponential interpolation above.
 eval instant at 5m histogram_quantile(0.5, two_samples_histogram)
-	{} -1.5
+	{} -1.414213562373095

 clear

@@ -392,20 +402,24 @@ eval_warn instant at 10m histogram_quantile(1.001, histogram_quantile_1)
 eval instant at 10m histogram_quantile(1, histogram_quantile_1)
 	{} 16

+# The following quantiles are within a bucket. Exponential
+# interpolation is applied (rather than linear, as it is done for
+# classic histograms), leading to slightly different quantile values.
 eval instant at 10m histogram_quantile(0.99, histogram_quantile_1)
-	{} 15.759999999999998
+	{} 15.67072476139083

 eval instant at 10m histogram_quantile(0.9, histogram_quantile_1)
-	{} 13.600000000000001
+	{} 12.99603834169977

 eval instant at 10m histogram_quantile(0.6, histogram_quantile_1)
-	{} 4.799999999999997
+	{} 4.594793419988138

 eval instant at 10m histogram_quantile(0.5, histogram_quantile_1)
-	{} 1.6666666666666665
+	{} 1.5874010519681994

+# Linear interpolation within the zero bucket after all.
 eval instant at 10m histogram_quantile(0.1, histogram_quantile_1)
-	{} 0.0006000000000000001
+	{} 0.0006

 eval instant at 10m histogram_quantile(0, histogram_quantile_1)
 	{} 0

@@ -425,17 +439,20 @@ eval_warn instant at 10m histogram_quantile(1.001, histogram_quantile_2)
 eval instant at 10m histogram_quantile(1, histogram_quantile_2)
 	{} 0

+# Again, the quantile values here are slightly different from what
+# they would be with linear interpolation. Note that quantiles
+# ending up in the zero bucket are linearly interpolated after all.
 eval instant at 10m histogram_quantile(0.99, histogram_quantile_2)
-	{} -6.000000000000048e-05
+	{} -0.00006

 eval instant at 10m histogram_quantile(0.9, histogram_quantile_2)
-	{} -0.0005999999999999996
+	{} -0.0006

 eval instant at 10m histogram_quantile(0.5, histogram_quantile_2)
-	{} -1.6666666666666667
+	{} -1.5874010519681996

 eval instant at 10m histogram_quantile(0.1, histogram_quantile_2)
-	{} -13.6
+	{} -12.996038341699768

 eval instant at 10m histogram_quantile(0, histogram_quantile_2)
 	{} -16

@@ -445,7 +462,9 @@ eval_warn instant at 10m histogram_quantile(-1, histogram_quantile_2)

 clear

-# Apply quantile function to histogram with both positive and negative buckets with zero bucket.
+# Apply quantile function to histogram with both positive and negative
+# buckets with zero bucket.
+# First positive buckets with exponential interpolation.
 load 10m
 	histogram_quantile_3 {{schema:0 count:24 sum:100 z_bucket:4 z_bucket_w:0.001 buckets:[2 3 0 1 4] n_buckets:[2 3 0 1 4]}}x1

@@ -456,31 +475,34 @@ eval instant at 10m histogram_quantile(1, histogram_quantile_3)
 	{} 16

 eval instant at 10m histogram_quantile(0.99, histogram_quantile_3)
-	{} 15.519999999999996
+	{} 15.34822590920423

 eval instant at 10m histogram_quantile(0.9, histogram_quantile_3)
-	{} 11.200000000000003
+	{} 10.556063286183155

 eval instant at 10m histogram_quantile(0.7, histogram_quantile_3)
-	{} 1.2666666666666657
+	{} 1.2030250360821164

+# Linear interpolation in the zero bucket, symmetrically centered around
+# the zero point.
 eval instant at 10m histogram_quantile(0.55, histogram_quantile_3)
-	{} 0.0006000000000000005
+	{} 0.0006

 eval instant at 10m histogram_quantile(0.5, histogram_quantile_3)
 	{} 0

 eval instant at 10m histogram_quantile(0.45, histogram_quantile_3)
-	{} -0.0005999999999999996
+	{} -0.0006

+# Finally negative buckets with mirrored exponential interpolation.
 eval instant at 10m histogram_quantile(0.3, histogram_quantile_3)
-	{} -1.266666666666667
+	{} -1.2030250360821169

 eval instant at 10m histogram_quantile(0.1, histogram_quantile_3)
-	{} -11.2
+	{} -10.556063286183155

 eval instant at 10m histogram_quantile(0.01, histogram_quantile_3)
-	{} -15.52
+	{} -15.34822590920423

 eval instant at 10m histogram_quantile(0, histogram_quantile_3)
 	{} -16

@@ -490,6 +512,90 @@ eval_warn instant at 10m histogram_quantile(-1, histogram_quantile_3)

 clear

+# Try different schemas. (The interpolation logic must not depend on the schema.)
+clear
+load 1m
+	var_res_histogram{schema="-1"} {{schema:-1 sum:6 count:5 buckets:[0 5]}}
+	var_res_histogram{schema="0"} {{schema:0 sum:4 count:5 buckets:[0 5]}}
+	var_res_histogram{schema="+1"} {{schema:1 sum:4 count:5 buckets:[0 5]}}
+
+eval instant at 1m histogram_quantile(0.5, var_res_histogram)
+	{schema="-1"} 2.0
+	{schema="0"} 1.4142135623730951
+	{schema="+1"} 1.189207
+
+eval instant at 1m histogram_fraction(0, 2, var_res_histogram{schema="-1"})
+	{schema="-1"} 0.5
+
+eval instant at 1m histogram_fraction(0, 1.4142135623730951, var_res_histogram{schema="0"})
+	{schema="0"} 0.5
+
+eval instant at 1m histogram_fraction(0, 1.189207, var_res_histogram{schema="+1"})
+	{schema="+1"} 0.5
+
+# The same as above, but one bucket "further to the right".
+clear
+load 1m
+	var_res_histogram{schema="-1"} {{schema:-1 sum:6 count:5 buckets:[0 0 5]}}
+	var_res_histogram{schema="0"} {{schema:0 sum:4 count:5 buckets:[0 0 5]}}
+	var_res_histogram{schema="+1"} {{schema:1 sum:4 count:5 buckets:[0 0 5]}}
+
+eval instant at 1m histogram_quantile(0.5, var_res_histogram)
+	{schema="-1"} 8.0
+	{schema="0"} 2.82842712474619
+	{schema="+1"} 1.6817928305074292
+
+eval instant at 1m histogram_fraction(0, 8, var_res_histogram{schema="-1"})
+	{schema="-1"} 0.5
+
+eval instant at 1m histogram_fraction(0, 2.82842712474619, var_res_histogram{schema="0"})
+	{schema="0"} 0.5
+
+eval instant at 1m histogram_fraction(0, 1.6817928305074292, var_res_histogram{schema="+1"})
+	{schema="+1"} 0.5
+
+# And everything again but for negative buckets.
+clear
+load 1m
+	var_res_histogram{schema="-1"} {{schema:-1 sum:6 count:5 n_buckets:[0 5]}}
+	var_res_histogram{schema="0"} {{schema:0 sum:4 count:5 n_buckets:[0 5]}}
+	var_res_histogram{schema="+1"} {{schema:1 sum:4 count:5 n_buckets:[0 5]}}
+
+eval instant at 1m histogram_quantile(0.5, var_res_histogram)
+	{schema="-1"} -2.0
+	{schema="0"} -1.4142135623730951
+	{schema="+1"} -1.189207
+
+eval instant at 1m histogram_fraction(-2, 0, var_res_histogram{schema="-1"})
+	{schema="-1"} 0.5
+
+eval instant at 1m histogram_fraction(-1.4142135623730951, 0, var_res_histogram{schema="0"})
+	{schema="0"} 0.5
+
+eval instant at 1m histogram_fraction(-1.189207, 0, var_res_histogram{schema="+1"})
+	{schema="+1"} 0.5
+
+clear
+load 1m
+	var_res_histogram{schema="-1"} {{schema:-1 sum:6 count:5 n_buckets:[0 0 5]}}
+	var_res_histogram{schema="0"} {{schema:0 sum:4 count:5 n_buckets:[0 0 5]}}
+	var_res_histogram{schema="+1"} {{schema:1 sum:4 count:5 n_buckets:[0 0 5]}}
+
+eval instant at 1m histogram_quantile(0.5, var_res_histogram)
+	{schema="-1"} -8.0
+	{schema="0"} -2.82842712474619
+	{schema="+1"} -1.6817928305074292
+
+eval instant at 1m histogram_fraction(-8, 0, var_res_histogram{schema="-1"})
+	{schema="-1"} 0.5
+
+eval instant at 1m histogram_fraction(-2.82842712474619, 0, var_res_histogram{schema="0"})
+	{schema="0"} 0.5
+
+eval instant at 1m histogram_fraction(-1.6817928305074292, 0, var_res_histogram{schema="+1"})
+	{schema="+1"} 0.5
+
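The expected medians in these schema tests follow from the geometry of standard exponential buckets: with schema `s` the bucket boundaries are consecutive powers of `2**(2**-s)`, and the exponentially interpolated 50% point of a bucket is the geometric mean of its bounds. A quick sketch (illustrative helpers, not Prometheus code; the 0-based bucket indexing here is a simplification of the test notation, where the 5 observations sit in the second listed bucket):

```python
import math

def bucket_bounds(schema, k):
    # Bucket k (0-based) spans (base**(k-1), base**k] with
    # base = 2**(2**-schema), so higher schemas mean finer buckets.
    base = 2.0 ** (2.0 ** -schema)
    return base ** (k - 1), base ** k

def exponential_midpoint(lo, hi):
    # Exponential interpolation at the 50% point: geometric mean.
    return math.sqrt(lo * hi)

# All 5 observations in bucket 1, as in buckets:[0 5] above:
for schema in (-1, 0, 1):
    lo, hi = bucket_bounds(schema, 1)
    print(schema, exponential_midpoint(lo, hi))  # 2.0, sqrt(2), 2**0.25
```

Shifting the observations one bucket "further to the right" (buckets:[0 0 5]) multiplies the result by one base factor, reproducing 8.0, 2.828..., and 1.681... in the second test block.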
 # Apply fraction function to empty histogram.
 load 10m
 	histogram_fraction_1 {{}}x1

@@ -515,11 +621,18 @@ eval instant at 10m histogram_fraction(-0.001, 0, histogram_fraction_2)
 eval instant at 10m histogram_fraction(0, 0.001, histogram_fraction_2)
 	{} 0.16666666666666666

+# Note that this result and the one above add up to 1.
+eval instant at 10m histogram_fraction(0.001, inf, histogram_fraction_2)
+	{} 0.8333333333333334
+
+# We are in the zero bucket, resulting in linear interpolation
+eval instant at 10m histogram_fraction(0, 0.0005, histogram_fraction_2)
+	{} 0.08333333333333333
+
-eval instant at 10m histogram_fraction(0.001, inf, histogram_fraction_2)
-	{} 0.8333333333333334
+# Demonstrate that the inverse operation with histogram_quantile yields
+# the original value with the non-trivial result above.
+eval instant at 10m histogram_quantile(0.08333333333333333, histogram_fraction_2)
+	{} 0.0005

 eval instant at 10m histogram_fraction(-inf, -0.001, histogram_fraction_2)
 	{} 0
@@ -527,17 +640,30 @@ eval instant at 10m histogram_fraction(-inf, -0.001, histogram_fraction_2)
 eval instant at 10m histogram_fraction(1, 2, histogram_fraction_2)
 	{} 0.25

+# More non-trivial results with interpolation involved below, including
+# some round-trips via histogram_quantile to prove that the inverse
+# operation leads to the same results.
+
+eval instant at 10m histogram_fraction(0, 1.5, histogram_fraction_2)
+	{} 0.4795739585136224
+
 eval instant at 10m histogram_fraction(1.5, 2, histogram_fraction_2)
-	{} 0.125
+	{} 0.10375937481971091

 eval instant at 10m histogram_fraction(1, 8, histogram_fraction_2)
 	{} 0.3333333333333333

+eval instant at 10m histogram_fraction(0, 6, histogram_fraction_2)
+	{} 0.6320802083934297
+
+eval instant at 10m histogram_quantile(0.6320802083934297, histogram_fraction_2)
+	{} 6
+
 eval instant at 10m histogram_fraction(1, 6, histogram_fraction_2)
-	{} 0.2916666666666667
+	{} 0.29874687506009634

 eval instant at 10m histogram_fraction(1.5, 6, histogram_fraction_2)
-	{} 0.16666666666666666
+	{} 0.15250624987980724

 eval instant at 10m histogram_fraction(-2, -1, histogram_fraction_2)
 	{} 0
|
@ -600,6 +726,12 @@ eval instant at 10m histogram_fraction(0, 0.001, histogram_fraction_3)

eval instant at 10m histogram_fraction(-0.0005, 0, histogram_fraction_3)
{} 0.08333333333333333

eval instant at 10m histogram_fraction(-inf, -0.0005, histogram_fraction_3)
{} 0.9166666666666666

eval instant at 10m histogram_quantile(0.9166666666666666, histogram_fraction_3)
{} -0.0005

eval instant at 10m histogram_fraction(0.001, inf, histogram_fraction_3)
{} 0

@ -625,16 +757,22 @@ eval instant at 10m histogram_fraction(-2, -1, histogram_fraction_3)
{} 0.25

eval instant at 10m histogram_fraction(-2, -1.5, histogram_fraction_3)
{} 0.125
{} 0.10375937481971091

eval instant at 10m histogram_fraction(-8, -1, histogram_fraction_3)
{} 0.3333333333333333

eval instant at 10m histogram_fraction(-inf, -6, histogram_fraction_3)
{} 0.36791979160657035

eval instant at 10m histogram_quantile(0.36791979160657035, histogram_fraction_3)
{} -6

eval instant at 10m histogram_fraction(-6, -1, histogram_fraction_3)
{} 0.2916666666666667
{} 0.29874687506009634

eval instant at 10m histogram_fraction(-6, -1.5, histogram_fraction_3)
{} 0.16666666666666666
{} 0.15250624987980724

eval instant at 10m histogram_fraction(42, 3.1415, histogram_fraction_3)
{} 0

@ -684,6 +822,18 @@ eval instant at 10m histogram_fraction(0, 0.001, histogram_fraction_4)

eval instant at 10m histogram_fraction(-0.0005, 0.0005, histogram_fraction_4)
{} 0.08333333333333333

eval instant at 10m histogram_fraction(-inf, 0.0005, histogram_fraction_4)
{} 0.5416666666666666

eval instant at 10m histogram_quantile(0.5416666666666666, histogram_fraction_4)
{} 0.0005

eval instant at 10m histogram_fraction(-inf, -0.0005, histogram_fraction_4)
{} 0.4583333333333333

eval instant at 10m histogram_quantile(0.4583333333333333, histogram_fraction_4)
{} -0.0005

eval instant at 10m histogram_fraction(0.001, inf, histogram_fraction_4)
{} 0.4166666666666667

@ -694,31 +844,31 @@ eval instant at 10m histogram_fraction(1, 2, histogram_fraction_4)
{} 0.125

eval instant at 10m histogram_fraction(1.5, 2, histogram_fraction_4)
{} 0.0625
{} 0.051879687409855414

eval instant at 10m histogram_fraction(1, 8, histogram_fraction_4)
{} 0.16666666666666666

eval instant at 10m histogram_fraction(1, 6, histogram_fraction_4)
{} 0.14583333333333334
{} 0.14937343753004825

eval instant at 10m histogram_fraction(1.5, 6, histogram_fraction_4)
{} 0.08333333333333333
{} 0.07625312493990366

eval instant at 10m histogram_fraction(-2, -1, histogram_fraction_4)
{} 0.125

eval instant at 10m histogram_fraction(-2, -1.5, histogram_fraction_4)
{} 0.0625
{} 0.051879687409855456

eval instant at 10m histogram_fraction(-8, -1, histogram_fraction_4)
{} 0.16666666666666666

eval instant at 10m histogram_fraction(-6, -1, histogram_fraction_4)
{} 0.14583333333333334
{} 0.14937343753004817

eval instant at 10m histogram_fraction(-6, -1.5, histogram_fraction_4)
{} 0.08333333333333333
{} 0.07625312493990362

eval instant at 10m histogram_fraction(42, 3.1415, histogram_fraction_4)
{} 0
@ -153,19 +153,31 @@ func bucketQuantile(q float64, buckets buckets) (float64, bool, bool) {

// histogramQuantile calculates the quantile 'q' based on the given histogram.
//
// The quantile value is interpolated assuming a linear distribution within a
// bucket.
// TODO(beorn7): Find an interpolation method that is a better fit for
// exponential buckets (and think about configurable interpolation).
// For custom buckets, the result is interpolated linearly, i.e. it is assumed
// the observations are uniformly distributed within each bucket. (This is a
// quite blunt assumption, but it is consistent with the interpolation method
// used for classic histograms so far.)
//
// For exponential buckets, the interpolation is done under the assumption that
// the samples within each bucket are distributed in a way that they would
// uniformly populate the buckets in a hypothetical histogram with higher
// resolution. For example, if the rank calculation suggests that the requested
// quantile is right in the middle of the population of the (1,2] bucket, we
// assume the quantile would be right at the bucket boundary between the two
// buckets the (1,2] bucket would be divided into if the histogram had double
// the resolution, which is 2**2**-1 = 1.4142... We call this exponential
// interpolation.
//
// However, for a quantile that ends up in the zero bucket, this method isn't
// very helpful (because there is an infinite number of buckets close to zero,
// so we would have to assume zero as the result). Therefore, we return to
// linear interpolation in the zero bucket.
//
// A natural lower bound of 0 is assumed if the histogram has only positive
// buckets. Likewise, a natural upper bound of 0 is assumed if the histogram has
// only negative buckets.
// TODO(beorn7): Come to terms if we want that.
//
// There are a number of special cases (once we have a way to report errors
// happening during evaluations of AST functions, we should report those
// explicitly):
// There are a number of special cases:
//
// If the histogram has 0 observations, NaN is returned.
//

@ -193,9 +205,9 @@ func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 {
rank float64
)

// if there are NaN observations in the histogram (h.Sum is NaN), use the forward iterator
// if the q < 0.5, use the forward iterator
// if the q >= 0.5, use the reverse iterator
// If there are NaN observations in the histogram (h.Sum is NaN), use the forward iterator.
// If q < 0.5, use the forward iterator.
// If q >= 0.5, use the reverse iterator.
if math.IsNaN(h.Sum) || q < 0.5 {
it = h.AllBucketIterator()
rank = q * h.Count

@ -260,8 +272,29 @@ func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 {
rank = count - rank
}

// TODO(codesome): Use a better estimation than linear.
return bucket.Lower + (bucket.Upper-bucket.Lower)*(rank/bucket.Count)
// The fraction of how far we are into the current bucket.
fraction := rank / bucket.Count

// Return linear interpolation for custom buckets and for quantiles that
// end up in the zero bucket.
if h.UsesCustomBuckets() || (bucket.Lower <= 0 && bucket.Upper >= 0) {
return bucket.Lower + (bucket.Upper-bucket.Lower)*fraction
}

// For exponential buckets, we interpolate on a logarithmic scale. On a
// logarithmic scale, the exponential bucket boundaries (for any schema)
// become linear (every bucket has the same width). Therefore, after
// taking the logarithm of both bucket boundaries, we can use the
// calculated fraction in the same way as for linear interpolation (see
// above). Finally, we return to the normal scale by applying the
// exponential function to the result.
logLower := math.Log2(math.Abs(bucket.Lower))
logUpper := math.Log2(math.Abs(bucket.Upper))
if bucket.Lower > 0 { // Positive bucket.
return math.Exp2(logLower + (logUpper-logLower)*fraction)
}
// Otherwise, we are in a negative bucket and have to mirror things.
return -math.Exp2(logUpper + (logLower-logUpper)*(1-fraction))
}

// histogramFraction calculates the fraction of observations between the

@ -271,8 +304,8 @@ func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 {
// histogramQuantile(0.9, h) returns 123.4, then histogramFraction(-Inf, 123.4, h)
// returns 0.9.
//
// The same notes (and TODOs) with regard to interpolation and assumptions about
// the zero bucket boundaries apply as for histogramQuantile.
// The same notes with regard to interpolation and assumptions about the zero
// bucket boundaries apply as for histogramQuantile.
//
// Whether either boundary is inclusive or exclusive doesn’t actually matter as
// long as interpolation has to be performed anyway. In the case of a boundary

@ -310,7 +343,35 @@ func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float6
)
for it.Next() {
b := it.At()
if b.Lower < 0 && b.Upper > 0 {
zeroBucket := false

// interpolateLinearly is used for custom buckets to be
// consistent with the linear interpolation known from classic
// histograms. It is also used for the zero bucket.
interpolateLinearly := func(v float64) float64 {
return rank + b.Count*(v-b.Lower)/(b.Upper-b.Lower)
}

// interpolateExponentially is using the same exponential
// interpolation method as above for histogramQuantile. This
// method is a better fit for exponential bucketing.
interpolateExponentially := func(v float64) float64 {
var (
logLower = math.Log2(math.Abs(b.Lower))
logUpper = math.Log2(math.Abs(b.Upper))
logV = math.Log2(math.Abs(v))
fraction float64
)
if v > 0 {
fraction = (logV - logLower) / (logUpper - logLower)
} else {
fraction = 1 - ((logV - logUpper) / (logLower - logUpper))
}
return rank + b.Count*fraction
}

if b.Lower <= 0 && b.Upper >= 0 {
zeroBucket = true
switch {
case len(h.NegativeBuckets) == 0 && len(h.PositiveBuckets) > 0:
// This is the zero bucket and the histogram has only

@ -325,10 +386,12 @@ func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float6
}
}
if !lowerSet && b.Lower >= lower {
// We have hit the lower value at the lower bucket boundary.
lowerRank = rank
lowerSet = true
}
if !upperSet && b.Lower >= upper {
// We have hit the upper value at the lower bucket boundary.
upperRank = rank
upperSet = true
}

@ -336,11 +399,21 @@ func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float6
break
}
if !lowerSet && b.Lower < lower && b.Upper > lower {
lowerRank = rank + b.Count*(lower-b.Lower)/(b.Upper-b.Lower)
// The lower value is in this bucket.
if h.UsesCustomBuckets() || zeroBucket {
lowerRank = interpolateLinearly(lower)
} else {
lowerRank = interpolateExponentially(lower)
}
lowerSet = true
}
if !upperSet && b.Lower < upper && b.Upper > upper {
upperRank = rank + b.Count*(upper-b.Lower)/(b.Upper-b.Lower)
// The upper value is in this bucket.
if h.UsesCustomBuckets() || zeroBucket {
upperRank = interpolateLinearly(upper)
} else {
upperRank = interpolateExponentially(upper)
}
upperSet = true
}
if lowerSet && upperSet {
105
storage/merge.go
@ -19,7 +19,6 @@ import (
"context"
"fmt"
"math"
"slices"
"sync"

"github.com/prometheus/prometheus/model/histogram"

@ -136,13 +135,17 @@ func filterChunkQueriers(qs []ChunkQuerier) []ChunkQuerier {
// Select returns a set of series that matches the given label matchers.
func (q *mergeGenericQuerier) Select(ctx context.Context, sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) genericSeriesSet {
seriesSets := make([]genericSeriesSet, 0, len(q.queriers))
var limit int
if hints != nil {
limit = hints.Limit
}
if !q.concurrentSelect {
for _, querier := range q.queriers {
// We need to sort for merge to work.
seriesSets = append(seriesSets, querier.Select(ctx, true, hints, matchers...))
}
return &lazyGenericSeriesSet{init: func() (genericSeriesSet, bool) {
s := newGenericMergeSeriesSet(seriesSets, q.mergeFn)
s := newGenericMergeSeriesSet(seriesSets, limit, q.mergeFn)
return s, s.Next()
}}
}

@ -170,7 +173,7 @@ func (q *mergeGenericQuerier) Select(ctx context.Context, sortSeries bool, hints
seriesSets = append(seriesSets, r)
}
return &lazyGenericSeriesSet{init: func() (genericSeriesSet, bool) {
s := newGenericMergeSeriesSet(seriesSets, q.mergeFn)
s := newGenericMergeSeriesSet(seriesSets, limit, q.mergeFn)
return s, s.Next()
}}
}

@ -188,35 +191,44 @@ func (l labelGenericQueriers) SplitByHalf() (labelGenericQueriers, labelGenericQ
// If matchers are specified the returned result set is reduced
// to label values of metrics matching the matchers.
func (q *mergeGenericQuerier) LabelValues(ctx context.Context, name string, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) {
res, ws, err := q.lvals(ctx, q.queriers, name, hints, matchers...)
res, ws, err := q.mergeResults(q.queriers, hints, func(q LabelQuerier) ([]string, annotations.Annotations, error) {
return q.LabelValues(ctx, name, hints, matchers...)
})
if err != nil {
return nil, nil, fmt.Errorf("LabelValues() from merge generic querier for label %s: %w", name, err)
}
return res, ws, nil
}

// lvals performs merge sort for LabelValues from multiple queriers.
func (q *mergeGenericQuerier) lvals(ctx context.Context, lq labelGenericQueriers, n string, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) {
// mergeResults performs merge sort on the results of invoking the resultsFn against multiple queriers.
func (q *mergeGenericQuerier) mergeResults(lq labelGenericQueriers, hints *LabelHints, resultsFn func(q LabelQuerier) ([]string, annotations.Annotations, error)) ([]string, annotations.Annotations, error) {
if lq.Len() == 0 {
return nil, nil, nil
}
if lq.Len() == 1 {
return lq.Get(0).LabelValues(ctx, n, hints, matchers...)
return resultsFn(lq.Get(0))
}
a, b := lq.SplitByHalf()

var ws annotations.Annotations
s1, w, err := q.lvals(ctx, a, n, hints, matchers...)
s1, w, err := q.mergeResults(a, hints, resultsFn)
ws.Merge(w)
if err != nil {
return nil, ws, err
}
s2, ws, err := q.lvals(ctx, b, n, hints, matchers...)
s2, w, err := q.mergeResults(b, hints, resultsFn)
ws.Merge(w)
if err != nil {
return nil, ws, err
}
return mergeStrings(s1, s2), ws, nil

s1 = truncateToLimit(s1, hints)
s2 = truncateToLimit(s2, hints)

merged := mergeStrings(s1, s2)
merged = truncateToLimit(merged, hints)

return merged, ws, nil
}

func mergeStrings(a, b []string) []string {

@ -248,33 +260,13 @@ func mergeStrings(a, b []string) []string {

// LabelNames returns all the unique label names present in all queriers in sorted order.
func (q *mergeGenericQuerier) LabelNames(ctx context.Context, hints *LabelHints, matchers ...*labels.Matcher) ([]string, annotations.Annotations, error) {
var (
labelNamesMap = make(map[string]struct{})
warnings annotations.Annotations
)
for _, querier := range q.queriers {
names, wrn, err := querier.LabelNames(ctx, hints, matchers...)
if wrn != nil {
// TODO(bwplotka): We could potentially wrap warnings.
warnings.Merge(wrn)
}
if err != nil {
return nil, nil, fmt.Errorf("LabelNames() from merge generic querier: %w", err)
}
for _, name := range names {
labelNamesMap[name] = struct{}{}
}
res, ws, err := q.mergeResults(q.queriers, hints, func(q LabelQuerier) ([]string, annotations.Annotations, error) {
return q.LabelNames(ctx, hints, matchers...)
})
if err != nil {
return nil, nil, fmt.Errorf("LabelNames() from merge generic querier: %w", err)
}
if len(labelNamesMap) == 0 {
return nil, warnings, nil
}

labelNames := make([]string, 0, len(labelNamesMap))
for name := range labelNamesMap {
labelNames = append(labelNames, name)
}
slices.Sort(labelNames)
return labelNames, warnings, nil
return res, ws, nil
}

// Close releases the resources of the generic querier.

@ -288,17 +280,25 @@ func (q *mergeGenericQuerier) Close() error {
return errs.Err()
}

func truncateToLimit(s []string, hints *LabelHints) []string {
if hints != nil && hints.Limit > 0 && len(s) > hints.Limit {
s = s[:hints.Limit]
}
return s
}

// VerticalSeriesMergeFunc returns merged series implementation that merges series with same labels together.
// It has to handle time-overlapped series as well.
type VerticalSeriesMergeFunc func(...Series) Series

// NewMergeSeriesSet returns a new SeriesSet that merges many SeriesSets together.
func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) SeriesSet {
// If limit is set, the SeriesSet will be limited up-to the limit. 0 means disabled.
func NewMergeSeriesSet(sets []SeriesSet, limit int, mergeFunc VerticalSeriesMergeFunc) SeriesSet {
genericSets := make([]genericSeriesSet, 0, len(sets))
for _, s := range sets {
genericSets = append(genericSets, &genericSeriesSetAdapter{s})
}
return &seriesSetAdapter{newGenericMergeSeriesSet(genericSets, (&seriesMergerAdapter{VerticalSeriesMergeFunc: mergeFunc}).Merge)}
return &seriesSetAdapter{newGenericMergeSeriesSet(genericSets, limit, (&seriesMergerAdapter{VerticalSeriesMergeFunc: mergeFunc}).Merge)}
}

// VerticalChunkSeriesMergeFunc returns merged chunk series implementation that merges potentially time-overlapping

@ -308,12 +308,12 @@ func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) Seri
type VerticalChunkSeriesMergeFunc func(...ChunkSeries) ChunkSeries

// NewMergeChunkSeriesSet returns a new ChunkSeriesSet that merges many SeriesSet together.
func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet {
func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, limit int, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet {
genericSets := make([]genericSeriesSet, 0, len(sets))
for _, s := range sets {
genericSets = append(genericSets, &genericChunkSeriesSetAdapter{s})
}
return &chunkSeriesSetAdapter{newGenericMergeSeriesSet(genericSets, (&chunkSeriesMergerAdapter{VerticalChunkSeriesMergeFunc: mergeFunc}).Merge)}
return &chunkSeriesSetAdapter{newGenericMergeSeriesSet(genericSets, limit, (&chunkSeriesMergerAdapter{VerticalChunkSeriesMergeFunc: mergeFunc}).Merge)}
}

// genericMergeSeriesSet implements genericSeriesSet.

@ -321,9 +321,11 @@ type genericMergeSeriesSet struct {
currentLabels labels.Labels
mergeFunc genericSeriesMergeFunc

heap genericSeriesSetHeap
sets []genericSeriesSet
currentSets []genericSeriesSet
heap genericSeriesSetHeap
sets []genericSeriesSet
currentSets []genericSeriesSet
seriesLimit int
mergedSeries int // tracks the total number of series merged and returned.
}

// newGenericMergeSeriesSet returns a new genericSeriesSet that merges (and deduplicates)

@ -331,7 +333,8 @@ type genericMergeSeriesSet struct {
// Each series set must return its series in labels order, otherwise
// merged series set will be incorrect.
// Overlapped situations are merged using provided mergeFunc.
func newGenericMergeSeriesSet(sets []genericSeriesSet, mergeFunc genericSeriesMergeFunc) genericSeriesSet {
// If seriesLimit is set, only limited series are returned.
func newGenericMergeSeriesSet(sets []genericSeriesSet, seriesLimit int, mergeFunc genericSeriesMergeFunc) genericSeriesSet {
if len(sets) == 1 {
return sets[0]
}

@ -351,13 +354,19 @@ func newGenericMergeSeriesSet(sets []genericSeriesSet, mergeFunc genericSeriesMe
}
}
return &genericMergeSeriesSet{
mergeFunc: mergeFunc,
sets: sets,
heap: h,
mergeFunc: mergeFunc,
sets: sets,
heap: h,
seriesLimit: seriesLimit,
}
}

func (c *genericMergeSeriesSet) Next() bool {
if c.seriesLimit > 0 && c.mergedSeries >= c.seriesLimit {
// Exit early if seriesLimit is set.
return false
}

// Run in a loop because the "next" series sets may not be valid anymore.
// If, for the current label set, all the next series sets come from
// failed remote storage sources, we want to keep trying with the next label set.

@ -393,12 +402,14 @@ func (c *genericMergeSeriesSet) Next() bool {

func (c *genericMergeSeriesSet) At() Labels {
if len(c.currentSets) == 1 {
c.mergedSeries++
return c.currentSets[0].At()
}
series := make([]Labels, 0, len(c.currentSets))
for _, seriesSet := range c.currentSets {
series = append(series, seriesSet.At())
}
c.mergedSeries++
return c.mergeFunc(series...)
}
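The limit plumbing above treats 0 as "disabled" and truncates results that are already merged and sorted. A standalone sketch of that semantic (the helper name mirrors the diff, but the signature is simplified here for illustration):

```go
package main

import "fmt"

// truncateToLimit cuts a sorted result list down to limit entries;
// a limit of 0 means "no limit", matching the convention in the diff.
func truncateToLimit(s []string, limit int) []string {
	if limit > 0 && len(s) > limit {
		return s[:limit]
	}
	return s
}

func main() {
	merged := []string{"a", "b", "c", "d"}
	fmt.Println(truncateToLimit(merged, 2)) // [a b]
	fmt.Println(truncateToLimit(merged, 0)) // [a b c d]
}
```

Truncating both halves before merging, and the merged result again afterwards, keeps intermediate slices bounded: no element beyond the first `limit` of either sorted half can appear in the first `limit` of the merged result.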
@ -1345,7 +1345,7 @@ func makeMergeSeriesSet(serieses [][]Series) SeriesSet {
for i, s := range serieses {
seriesSets[i] = &genericSeriesSetAdapter{NewMockSeriesSet(s...)}
}
return &seriesSetAdapter{newGenericMergeSeriesSet(seriesSets, (&seriesMergerAdapter{VerticalSeriesMergeFunc: ChainedSeriesMerge}).Merge)}
return &seriesSetAdapter{newGenericMergeSeriesSet(seriesSets, 0, (&seriesMergerAdapter{VerticalSeriesMergeFunc: ChainedSeriesMerge}).Merge)}
}

func benchmarkDrain(b *testing.B, makeSeriesSet func() SeriesSet) {

@ -1390,6 +1390,34 @@ func BenchmarkMergeSeriesSet(b *testing.B) {
}
}

func BenchmarkMergeLabelValuesWithLimit(b *testing.B) {
var queriers []genericQuerier

for i := 0; i < 5; i++ {
var lbls []string
for j := 0; j < 100000; j++ {
lbls = append(lbls, fmt.Sprintf("querier_%d_label_%d", i, j))
}
q := &mockQuerier{resp: lbls}
queriers = append(queriers, newGenericQuerierFrom(q))
}

mergeQuerier := &mergeGenericQuerier{
queriers: queriers, // Assume querying 5 blocks.
mergeFn: func(l ...Labels) Labels {
return l[0]
},
}

b.Run("benchmark", func(b *testing.B) {
ctx := context.Background()
hints := &LabelHints{
Limit: 1000,
}
mergeQuerier.LabelValues(ctx, "name", hints)
})
}

func visitMockQueriers(t *testing.T, qr Querier, f func(t *testing.T, q *mockQuerier)) int {
count := 0
switch x := qr.(type) {

@ -1428,6 +1456,7 @@ func TestMergeQuerierWithSecondaries_ErrorHandling(t *testing.T) {
name string
primaries []Querier
secondaries []Querier
limit int

expectedSelectsSeries []labels.Labels
expectedLabels []string

@ -1553,12 +1582,39 @@ func TestMergeQuerierWithSecondaries_ErrorHandling(t *testing.T) {
expectedLabels: []string{"a", "b"},
expectedWarnings: annotations.New().Add(warnStorage),
},
{
name: "successful queriers with limit",
primaries: []Querier{
&mockQuerier{resp: []string{"a", "d"}, warnings: annotations.New().Add(warnStorage), err: nil},
},
secondaries: []Querier{
&mockQuerier{resp: []string{"b", "c"}, warnings: annotations.New().Add(warnStorage), err: nil},
},
limit: 2,
expectedSelectsSeries: []labels.Labels{
labels.FromStrings("test", "a"),
labels.FromStrings("test", "b"),
},
expectedLabels: []string{"a", "b"},
expectedWarnings: annotations.New().Add(warnStorage),
},
} {
var labelHints *LabelHints
var selectHints *SelectHints
if tcase.limit > 0 {
labelHints = &LabelHints{
Limit: tcase.limit,
}
selectHints = &SelectHints{
Limit: tcase.limit,
}
}

t.Run(tcase.name, func(t *testing.T) {
q := NewMergeQuerier(tcase.primaries, tcase.secondaries, func(s ...Series) Series { return s[0] })

t.Run("Select", func(t *testing.T) {
res := q.Select(context.Background(), false, nil)
res := q.Select(context.Background(), false, selectHints)
var lbls []labels.Labels
for res.Next() {
lbls = append(lbls, res.At().Labels())

@ -1577,7 +1633,7 @@ func TestMergeQuerierWithSecondaries_ErrorHandling(t *testing.T) {
require.Equal(t, len(tcase.primaries)+len(tcase.secondaries), n)
})
t.Run("LabelNames", func(t *testing.T) {
res, w, err := q.LabelNames(ctx, nil)
res, w, err := q.LabelNames(ctx, labelHints)
require.Subset(t, tcase.expectedWarnings, w)
require.ErrorIs(t, err, tcase.expectedErrs[1], "expected error doesn't match")
requireEqualSlice(t, tcase.expectedLabels, res)

@ -1590,7 +1646,7 @@ func TestMergeQuerierWithSecondaries_ErrorHandling(t *testing.T) {
})
})
t.Run("LabelValues", func(t *testing.T) {
res, w, err := q.LabelValues(ctx, "test", nil)
res, w, err := q.LabelValues(ctx, "test", labelHints)
require.Subset(t, tcase.expectedWarnings, w)
require.ErrorIs(t, err, tcase.expectedErrs[2], "expected error doesn't match")
requireEqualSlice(t, tcase.expectedLabels, res)

@ -1604,7 +1660,7 @@ func TestMergeQuerierWithSecondaries_ErrorHandling(t *testing.T) {
})
t.Run("LabelValuesWithMatchers", func(t *testing.T) {
matcher := labels.MustNewMatcher(labels.MatchEqual, "otherLabel", "someValue")
res, w, err := q.LabelValues(ctx, "test2", nil, matcher)
res, w, err := q.LabelValues(ctx, "test2", labelHints, matcher)
require.Subset(t, tcase.expectedWarnings, w)
require.ErrorIs(t, err, tcase.expectedErrs[3], "expected error doesn't match")
requireEqualSlice(t, tcase.expectedLabels, res)
@ -831,7 +831,7 @@ func (c DefaultBlockPopulator) PopulateBlock(ctx context.Context, metrics *Compa
if len(sets) > 1 {
// Merge series using specified chunk series merger.
// The default one is the compacting series merger.
set = storage.NewMergeChunkSeriesSet(sets, mergeFunc)
set = storage.NewMergeChunkSeriesSet(sets, 0, mergeFunc)
}

// Iterate over all sorted chunk series.
@ -2030,7 +2030,7 @@ func TestPopulateWithDelSeriesIterator_NextWithMinTime(t *testing.T) {
// TODO(bwplotka): Merge with storage merged series set benchmark.
func BenchmarkMergedSeriesSet(b *testing.B) {
sel := func(sets []storage.SeriesSet) storage.SeriesSet {
return storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
return storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge)
}

for _, k := range []int{
12
ui-commits
Normal file
@ -0,0 +1,12 @@
dfec29d8e Fix border color for target pools with one target that is failing
65743bf9b ui: drop template readme
a7c1a951d Add general Mantine overrides CSS file
0757fbbec Make sure that alert element table headers are not wrapped
0180cf31a Factor out common icon and card styles
50af7d589 Fix tree line drawing by using a callback ref
ac01dc903 Explain, vector-to-vector: Do not compute results for set operators
9b0dc68d0 PromQL explain view: Support set operators
57898c792 Refactor and fix time formatting functions, add tests
091fc403c Fiddle with targets table styles to try and improve things a bit
a1908df92 Don't wrap action buttons below metric name in metrics explorer
ac5377873 mantine UI: Distinguish between Not Ready and Stopping
@ -896,7 +896,7 @@ func (api *API) series(r *http.Request) (result apiFuncResult) {
s := q.Select(ctx, true, hints, mset...)
sets = append(sets, s)
}
set = storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
set = storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge)
} else {
// At this point at least one match exists.
set = q.Select(ctx, false, hints, matcherSets[0]...)
@ -101,7 +101,7 @@ func (h *Handler) federation(w http.ResponseWriter, req *http.Request) {
sets = append(sets, s)
}

set := storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
set := storage.NewMergeSeriesSet(sets, 0, storage.ChainedSeriesMerge)
it := storage.NewBuffer(int64(h.lookbackDelta / 1e6))
var chkIter chunkenc.Iterator
Loop:
@ -380,10 +380,11 @@ export const getUPlotOptions = (
hooks: {
setSelect: [
(self: uPlot) => {
onSelectRange(
self.posToVal(self.select.left, "x"),
self.posToVal(self.select.left + self.select.width, "x")
);
// Disallow sub-second zoom as this causes inconsistencies in the X axis in uPlot.
const leftVal = self.posToVal(self.select.left, "x");
const rightVal = Math.max(self.posToVal(self.select.left + self.select.width, "x"), leftVal + 1);

onSelectRange(leftVal, rightVal);
},
],
},
@@ -1277,17 +1277,17 @@ const funcDocs: Record<string, React.ReactNode> = {
       </p>
     </>
   ),
-  holt_winters: (
+  double_exponential_smoothing: (
     <>
       <p>
-        <code>holt_winters(v range-vector, sf scalar, tf scalar)</code> produces a smoothed value for time series based on
+        <code>double_exponential_smoothing(v range-vector, sf scalar, tf scalar)</code> produces a smoothed value for time series based on
         the range in <code>v</code>. The lower the smoothing factor <code>sf</code>, the more importance is given to old
         data. The higher the trend factor <code>tf</code>, the more trends in the data are considered. Both <code>sf</code>{' '}
         and <code>tf</code> must be between 0 and 1.
       </p>

       <p>
-        <code>holt_winters</code> should only be used with gauges.
+        <code>double_exponential_smoothing</code> should only be used with gauges.
       </p>
     </>
   ),
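The documented semantics of the renamed function can be sketched in isolation. This is a simplified model of Holt's double exponential smoothing following the `sf`/`tf` description above, not Prometheus's exact implementation; the `doubleExpSmooth` helper is hypothetical.

```typescript
// Double exponential smoothing over a window of samples:
// sf (smoothing factor) weights new data against the running level,
// tf (trend factor) weights the new trend against the running trend.
// Both must be between 0 and 1.
function doubleExpSmooth(samples: number[], sf: number, tf: number): number {
  let level = samples[0];
  let trend = samples[1] - samples[0];
  for (const x of samples.slice(1)) {
    const prevLevel = level;
    level = sf * x + (1 - sf) * (level + trend);
    trend = tf * (level - prevLevel) + (1 - tf) * trend;
  }
  return level;
}
```

On perfectly linear input the level plus trend prediction is exact, so the smoothed value tracks the last sample; this is why the function is meant for gauges, where the trend term captures drift.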
@@ -17,7 +17,7 @@ export const functionArgNames: Record<string, string[]> = {
   // exp: [],
   // floor: [],
   histogram_quantile: ['target quantile', 'histogram'],
-  holt_winters: ['input series', 'smoothing factor', 'trend factor'],
+  double_exponential_smoothing: ['input series', 'smoothing factor', 'trend factor'],
   hour: ['timestamp (default = vector(time()))'],
   // idelta: [],
   // increase: [],
@@ -68,7 +68,7 @@ export const functionDescriptions: Record<string, string> = {
   exp: 'calculate exponential function for input vector values',
   floor: 'round down values of input series to nearest integer',
   histogram_quantile: 'calculate quantiles from histogram buckets',
-  holt_winters: 'calculate smoothed value of input series',
+  double_exponential_smoothing: 'calculate smoothed value of input series',
   hour: 'return the hour of the day for provided timestamps',
   idelta: 'calculate the difference between the last two samples of a range vector (for counters)',
   increase: 'calculate the increase in value over a range of time (for counters)',
@@ -60,8 +60,8 @@ export const functionSignatures: Record<string, Func> = {
   histogram_stddev: { name: 'histogram_stddev', argTypes: [valueType.vector], variadic: 0, returnType: valueType.vector },
   histogram_stdvar: { name: 'histogram_stdvar', argTypes: [valueType.vector], variadic: 0, returnType: valueType.vector },
   histogram_sum: { name: 'histogram_sum', argTypes: [valueType.vector], variadic: 0, returnType: valueType.vector },
-  holt_winters: {
-    name: 'holt_winters',
+  double_exponential_smoothing: {
+    name: 'double_exponential_smoothing',
     argTypes: [valueType.matrix, valueType.scalar, valueType.scalar],
     variadic: 0,
     returnType: valueType.vector,
@@ -583,12 +583,42 @@ describe('analyzeCompletion test', () => {
       pos: 5,
       expectedContext: [{ kind: ContextKind.AtModifiers }],
     },
+    {
+      title: 'autocomplete topk params',
+      expr: 'topk()',
+      pos: 5,
+      expectedContext: [{ kind: ContextKind.Number }],
+    },
+    {
+      title: 'autocomplete topk params 2',
+      expr: 'topk(inf,)',
+      pos: 9,
+      expectedContext: [{ kind: ContextKind.MetricName, metricName: '' }, { kind: ContextKind.Function }, { kind: ContextKind.Aggregation }],
+    },
+    {
+      title: 'autocomplete topk params 3',
+      expr: 'topk(inf,r)',
+      pos: 10,
+      expectedContext: [{ kind: ContextKind.MetricName, metricName: 'r' }, { kind: ContextKind.Function }, { kind: ContextKind.Aggregation }],
+    },
+    {
+      title: 'autocomplete topk params 4',
+      expr: 'topk by(instance) ()',
+      pos: 19,
+      expectedContext: [{ kind: ContextKind.Number }],
+    },
+    {
+      title: 'autocomplete topk params 5',
+      expr: 'topk by(instance) (inf,r)',
+      pos: 24,
+      expectedContext: [{ kind: ContextKind.MetricName, metricName: 'r' }, { kind: ContextKind.Function }, { kind: ContextKind.Aggregation }],
+    },
   ];
   testCases.forEach((value) => {
     it(value.title, () => {
       const state = createEditorState(value.expr);
       const node = syntaxTree(state).resolve(value.pos, -1);
-      const result = analyzeCompletion(state, node);
+      const result = analyzeCompletion(state, node, value.pos);
       expect(value.expectedContext).toEqual(result);
     });
   });
@@ -54,6 +54,12 @@ import {
   QuotedLabelName,
   NumberDurationLiteralInDurationContext,
   NumberDurationLiteral,
+  AggregateOp,
+  Topk,
+  Bottomk,
+  LimitK,
+  LimitRatio,
+  CountValues,
 } from '@prometheus-io/lezer-promql';
 import { Completion, CompletionContext, CompletionResult } from '@codemirror/autocomplete';
 import { EditorState } from '@codemirror/state';
@@ -185,7 +191,7 @@ export function computeStartCompletePosition(state: EditorState, node: SyntaxNode, pos: number): number {
   if (node.type.id === LabelMatchers || node.type.id === GroupingLabels) {
     start = computeStartCompleteLabelPositionInLabelMatcherOrInGroupingLabel(node, pos);
   } else if (
-    node.type.id === FunctionCallBody ||
+    (node.type.id === FunctionCallBody && node.firstChild === null) ||
     (node.type.id === StringLiteral && (node.parent?.type.id === UnquotedLabelMatcher || node.parent?.type.id === QuotedLabelMatcher))
   ) {
     // When the cursor is between brackets or quotes, we need to increment the starting position to avoid considering the open bracket / first string.
@@ -198,6 +204,7 @@ export function computeStartCompletePosition(state: EditorState, node: SyntaxNode, pos: number): number {
     // So we have to analyze the string about the current node to see if the duration unit is already present or not.
     (node.type.id === NumberDurationLiteralInDurationContext && !durationTerms.map((v) => v.label).includes(currentText[currentText.length - 1])) ||
     (node.type.id === NumberDurationLiteral && node.parent?.type.id === 0 && node.parent.parent?.type.id === SubqueryExpr) ||
+    (node.type.id === FunctionCallBody && isAggregatorWithParam(node) && node.firstChild !== null) ||
    (node.type.id === 0 &&
      (node.parent?.type.id === OffsetExpr ||
        node.parent?.type.id === MatrixSelector ||
@@ -208,10 +215,21 @@ export function computeStartCompletePosition(state: EditorState, node: SyntaxNode, pos: number): number {
   return start;
 }

+function isAggregatorWithParam(functionCallBody: SyntaxNode): boolean {
+  const parent = functionCallBody.parent;
+  if (parent !== null && parent.firstChild?.type.id === AggregateOp) {
+    const aggregationOpType = parent.firstChild.firstChild;
+    if (aggregationOpType !== null && [Topk, Bottomk, LimitK, LimitRatio, CountValues].includes(aggregationOpType.type.id)) {
+      return true;
+    }
+  }
+  return false;
+}
+
 // analyzeCompletion is going to determine what should be autocompleted.
 // The value of the autocompletion is then calculated by the function buildCompletion.
 // Note: this method is exported for testing purposes only. Do not use it directly.
-export function analyzeCompletion(state: EditorState, node: SyntaxNode): Context[] {
+export function analyzeCompletion(state: EditorState, node: SyntaxNode, pos: number): Context[] {
   const result: Context[] = [];
   switch (node.type.id) {
     case 0: // 0 is the id of the error node
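A minimal model of the `isAggregatorWithParam` check added here: the `Node` shape and the numeric type ids below are stand-ins for lezer's `SyntaxNode` and the generated term ids from `@prometheus-io/lezer-promql`, so this only sketches the tree walk, not the real API.

```typescript
// Stand-in term ids (the real values are generated by the lezer grammar).
const AggregateOp = 1, Topk = 2, Bottomk = 3, LimitK = 4, LimitRatio = 5, CountValues = 6;

interface Node {
  typeId: number;
  firstChild: Node | null;
  parent: Node | null;
}

// True when a FunctionCallBody belongs to an aggregation whose first
// parameter is a scalar, e.g. topk(5, metric) or count_values("v", metric):
// the body's parent is the AggregateExpr, whose first child is the
// AggregateOp, whose own first child names the concrete aggregator.
function aggregatorNeedsParam(body: Node): boolean {
  const op = body.parent?.firstChild;
  if (op?.typeId === AggregateOp && op.firstChild !== null) {
    return [Topk, Bottomk, LimitK, LimitRatio, CountValues].includes(op.firstChild.typeId);
  }
  return false;
}
```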
@@ -330,7 +348,7 @@ export function analyzeCompletion(state: EditorState, node: SyntaxNode, pos: number): Context[] {
       }
       // now we have to know if we have two Expr in the direct children of the `parent`
       const containExprTwice = containsChild(parent, 'Expr', 'Expr');
-      if (containExprTwice) {
+      if (containExprTwice && parent.type.id !== FunctionCallBody) {
         if (parent.type.id === BinaryExpr && !containsAtLeastOneChild(parent, 0)) {
           // We are likely in the case 1 or 5
           result.push(
@@ -460,7 +478,23 @@ export function analyzeCompletion(state: EditorState, node: SyntaxNode, pos: number): Context[] {
       result.push({ kind: ContextKind.Duration });
       break;
     case FunctionCallBody:
-      // In this case we are in the given situation:
+      // For aggregation functions such as Topk, the first parameter is a number.
+      // The second one is an expression.
+      // When moving to the second parameter, the node is an error node.
+      // Unfortunately, codemirror doesn't give us the error node as the current node but instead the FunctionCallBody.
+      // The tree looks like this: PromQL(AggregateExpr(AggregateOp(Topk),FunctionCallBody(NumberDurationLiteral,⚠)))
+      // So, we need to figure out if the cursor is on the first parameter or the second.
+      if (isAggregatorWithParam(node)) {
+        if (node.firstChild === null || (node.firstChild.from <= pos && node.firstChild.to >= pos)) {
+          // The FunctionCallBody has no child, or the cursor is still on the first child, so we are autocompleting the first parameter.
+          result.push({ kind: ContextKind.Number });
+          break;
+        }
+        // At this point we are necessarily autocompleting the second parameter.
+        result.push({ kind: ContextKind.MetricName, metricName: '' }, { kind: ContextKind.Function }, { kind: ContextKind.Aggregation });
+        break;
+      }
+      // In all other cases, we are in the given situation:
       // sum() or in rate()
       // with the cursor between the brackets. So we can autocomplete the metric, the function and the aggregation.
       result.push({ kind: ContextKind.MetricName, metricName: '' }, { kind: ContextKind.Function }, { kind: ContextKind.Aggregation });
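The cursor test used in this hunk can be stated on its own. This is a sketch: `onFirstParam` is a hypothetical helper, and `from`/`to` mirror the span fields of a lezer node.

```typescript
// Decide which parameter of topk()-style aggregations is being completed:
// if the FunctionCallBody has no child yet, or the cursor sits within the
// first child's span, it is the numeric first parameter; otherwise it is
// the expression parameter.
function onFirstParam(firstChild: { from: number; to: number } | null, pos: number): boolean {
  return firstChild === null || (firstChild.from <= pos && firstChild.to >= pos);
}
```

For example, in `topk(inf,)` with the cursor at position 9, the first child (`inf`) spans positions 5 to 8, so the cursor is past it and the expression contexts are offered, matching the "autocomplete topk params 2" test case above.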
@@ -516,7 +550,11 @@ export class HybridComplete implements CompleteStrategy {
   promQL(context: CompletionContext): Promise<CompletionResult | null> | CompletionResult | null {
     const { state, pos } = context;
     const tree = syntaxTree(state).resolve(pos, -1);
-    const contexts = analyzeCompletion(state, tree);
+    // The lines below can help you print the current lezer tree.
+    // It's useful when you are trying to understand why it doesn't autocomplete.
+    // console.log(syntaxTree(state).topNode.toString());
+    // console.log(`current node: ${tree.type.name}`);
+    const contexts = analyzeCompletion(state, tree, pos);
     let asyncResult: Promise<Completion[]> = Promise.resolve([]);
     let completeSnippet = false;
     let span = true;
@@ -258,7 +258,7 @@ export const functionIdentifierTerms = [
     type: 'function',
   },
   {
-    label: 'holt_winters',
+    label: 'double_exponential_smoothing',
     detail: 'function',
     info: 'Calculate smoothed value of input series',
     type: 'function',
@@ -46,7 +46,7 @@ import {
   HistogramStdDev,
   HistogramStdVar,
   HistogramSum,
-  HoltWinters,
+  DoubleExponentialSmoothing,
   Hour,
   Idelta,
   Increase,
@@ -312,8 +312,8 @@ const promqlFunctions: { [key: number]: PromQLFunction } = {
     variadic: 0,
     returnType: ValueType.vector,
   },
-  [HoltWinters]: {
-    name: 'holt_winters',
+  [DoubleExponentialSmoothing]: {
+    name: 'double_exponential_smoothing',
     argTypes: [ValueType.matrix, ValueType.scalar, ValueType.scalar],
     variadic: 0,
     returnType: ValueType.vector,
@@ -20,7 +20,7 @@ export const promQLHighLight = styleTags({
   NumberDurationLiteral: tags.number,
   NumberDurationLiteralInDurationContext: tags.number,
   Identifier: tags.variableName,
-  'Abs Absent AbsentOverTime Acos Acosh Asin Asinh Atan Atanh AvgOverTime Ceil Changes Clamp ClampMax ClampMin Cos Cosh CountOverTime DaysInMonth DayOfMonth DayOfWeek DayOfYear Deg Delta Deriv Exp Floor HistogramAvg HistogramCount HistogramFraction HistogramQuantile HistogramSum HoltWinters Hour Idelta Increase Irate LabelReplace LabelJoin LastOverTime Ln Log10 Log2 MaxOverTime MinOverTime Minute Month Pi PredictLinear PresentOverTime QuantileOverTime Rad Rate Resets Round Scalar Sgn Sin Sinh Sort SortDesc SortByLabel SortByLabelDesc Sqrt StddevOverTime StdvarOverTime SumOverTime Tan Tanh Time Timestamp Vector Year':
+  'Abs Absent AbsentOverTime Acos Acosh Asin Asinh Atan Atanh AvgOverTime Ceil Changes Clamp ClampMax ClampMin Cos Cosh CountOverTime DaysInMonth DayOfMonth DayOfWeek DayOfYear Deg Delta Deriv Exp Floor HistogramAvg HistogramCount HistogramFraction HistogramQuantile HistogramSum DoubleExponentialSmoothing Hour Idelta Increase Irate LabelReplace LabelJoin LastOverTime Ln Log10 Log2 MaxOverTime MinOverTime Minute Month Pi PredictLinear PresentOverTime QuantileOverTime Rad Rate Resets Round Scalar Sgn Sin Sinh Sort SortDesc SortByLabel SortByLabelDesc Sqrt StddevOverTime StdvarOverTime SumOverTime Tan Tanh Time Timestamp Vector Year':
     tags.function(tags.variableName),
   'Avg Bottomk Count Count_values Group LimitK LimitRatio Max Min Quantile Stddev Stdvar Sum Topk': tags.operatorKeyword,
   'By Without Bool On Ignoring GroupLeft GroupRight Offset Start End': tags.modifier,
@@ -141,7 +141,7 @@ FunctionIdentifier {
   HistogramStdVar |
   HistogramSum |
   HistogramAvg |
-  HoltWinters |
+  DoubleExponentialSmoothing |
   Hour |
   Idelta |
   Increase |
@@ -388,7 +388,7 @@ NumberDurationLiteralInDurationContext {
 HistogramStdDev { condFn<"histogram_stddev"> }
 HistogramStdVar { condFn<"histogram_stdvar"> }
 HistogramSum { condFn<"histogram_sum"> }
-HoltWinters { condFn<"holt_winters"> }
+DoubleExponentialSmoothing { condFn<"double_exponential_smoothing"> }
 Hour { condFn<"hour"> }
 Idelta { condFn<"idelta"> }
 Increase { condFn<"increase"> }