This can give a more precise result by keeping a separate running
compensation value to accumulate small errors.
See https://en.wikipedia.org/wiki/Kahan_summation_algorithm
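As a rough sketch of the idea (not the engine's actual code), compensated summation in Go looks like this:

```go
package main

import "fmt"

// kahanSum keeps a separate running compensation value c that accumulates
// the low-order bits lost when adding small values to a large running sum.
func kahanSum(values []float64) float64 {
	var sum, c float64
	for _, v := range values {
		y := v - c        // apply the previously lost correction
		t := sum + y      // low-order bits of y may be lost here
		c = (t - sum) - y // recover what was lost
		sum = t
	}
	return sum
}

func main() {
	vals := make([]float64, 10)
	for i := range vals {
		vals[i] = 0.1
	}
	naive := 0.0
	for _, v := range vals {
		naive += v
	}
	fmt.Println(naive, kahanSum(vals)) // 0.9999999999999999 1
}
```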
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
When the label name of a matcher contains non-standard characters, like
a dot, or starts with a digit, it should be quoted.
If it's not quoted, then `VectorSelector.String()` isn't valid PromQL.
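A rough illustration of the expected behavior, assuming a directly constructed `parser.VectorSelector` and a made-up label name:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql/parser"
)

func main() {
	vs := &parser.VectorSelector{
		LabelMatchers: []*labels.Matcher{
			labels.MustNewMatcher(labels.MatchEqual, "label.with.dot", "value"),
		},
	}
	// With the fix, the non-standard label name is quoted in the output
	// (e.g. {"label.with.dot"="value"}), so it can be parsed back as PromQL.
	fmt.Println(vs.String())
}
```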
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* process custom values in histogram unit test framework
* check for warnings when evaluating in unit test framework
* add test cases for custom buckets in test framework
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
The function `rangeEvalTimestampFunctionOverVectorSelector` appeared to be checking histogram size, but the value it used was always 0 due to subtle variable shadowing.
However, we don't need to pass sample values to the `timestamp` function, since it only cares about timestamps. This also affects the peak sample count in statistics, since we are no longer copying histogram samples.
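A contrived Go sketch (not the engine code) of the shadowing pattern in question:

```go
package main

import "fmt"

func main() {
	series := [][]float64{{1, 2}, {3}}
	totalHistograms := 0
	for _, s := range series {
		if len(s) > 0 {
			totalHistograms := len(s) // ":=" declares a new variable, shadowing the outer one
			_ = totalHistograms
		}
	}
	// The outer counter was never updated, so any size check based on it sees 0.
	fmt.Println(totalHistograms) // 0
}
```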
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
The check fell into the "this matcher equals the vector selector's name" case
when the vector selector doesn't have a name and the matcher is an explicit
matcher for an empty __name__ label.
To provide some context about why this is important: some downstream projects
use promql.Parse(expr.String()) to clone an expression's AST, and with this
bug that matcher disappears in the cloning.
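A minimal sketch of that cloning pattern (the selector below is made up, and the entry point is shown as `parser.ParseExpr`, which may differ from the `promql.Parse` spelling above):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/promql/parser"
)

func main() {
	expr, err := parser.ParseExpr(`{__name__="", job="demo"}`)
	if err != nil {
		panic(err)
	}
	// Cloning by re-parsing the string form: with the bug, the explicit
	// __name__="" matcher was missing from expr.String(), so the clone
	// silently differed from the original.
	clone, err := parser.ParseExpr(expr.String())
	if err != nil {
		panic(err)
	}
	fmt.Println(clone.String())
}
```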
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* modify unit test framework to automatically generate native histograms with custom buckets from classic histogram series
* add very basic tests for classic histogram converted into native histogram with custom bounds
* fix histogram_quantile for native histograms with custom buckets
* make loading with nhcb explicit
* evaluate native histograms with custom buckets on queries with explicit keyword
* use regex replacer
* use temp histogram struct for automatically loading converted nhcb
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
In a previous PR, the generated parser was created using an old version of goyacc.
Also adds -l to disable line directives, which fixes debug processing and reduces diffs at the expense of making it more difficult to reason about the generated output.
Signed-off-by: Owen Williams <owen.williams@grafana.com>
Includes Inf and NaN as numbers in the histogram.
---------
Signed-off-by: Neeraj Gartia <neerajgartia211002@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
This saves memory in other kinds of aggregation.
We don't need `orderedResult` in `aggregationCountValues`; the ordering
is not guaranteed.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
They aggregate results in different ways.
topk/bottomk don't consider histograms, so they can simplify data collection.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This is a cleaner split of responsibilities.
We now check the sample count after calling `rangeEvalAgg`.
Changed re-use of samples to use `Clone` and `defer`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Pass it as a float64, not as interface{}.
Make k a simple int, since that is the parameter to make().
Pull invalid quantile warning out of the loop.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The new function `rangeEvalAgg` is mostly a copy of `rangeEval`, but
without `initSeries`, which we don't need, and with the callback to
`aggregation()` inlined.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The existing aggregation function is very long and covers very different
cases.
`aggregationCountValues` is just for `count_values`, which differs from
other aggregations in that it outputs as many series per group as there
are values in the input.
Remove the top-level switch on string parameter type; use the same `Op`
check there as elsewhere.
Pull checking parameters out to caller, where it is only executed once.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The strings produced by these tests can run to thousands of characters,
which makes test logs difficult to read.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* promql: include more details in error message when creating test query fails
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Include more details when an unexpected metric is returned
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
The definition of histograms in the test framework may create
histograms in a non-compact form. Since histogram comparison relies on
exact equality of the bucket layout, we have to compact the histograms
created by the test framework language before comparing them to
histograms returned from the PromQL engine.
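A hypothetical helper sketching that comparison (assuming `*histogram.FloatHistogram` values; the helper name is made up):

```go
package promqltest_sketch

import (
	"testing"

	"github.com/prometheus/prometheus/model/histogram"
)

// requireHistogramEqual compacts both histograms so that equivalent but
// differently laid-out span/bucket structures compare as equal.
func requireHistogramEqual(t *testing.T, expected, actual *histogram.FloatHistogram) {
	t.Helper()
	if !expected.Compact(0).Equals(actual.Compact(0)) {
		t.Fatalf("expected %s, got %s", expected, actual)
	}
}
```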
Signed-off-by: beorn7 <beorn@grafana.com>
The size of histogram points is now bigger by 24 bytes due to the
custom values slice.
When histograms are loaded into partial results in vector selectors,
we use the HPoint type, where the size is calculated as
(size of histogram + 8 for timestamp)/16.
a3d1a46eda/promql/value.go (L176)
When histograms are put into the Sample type in range evaluations, the
Sample has more overhead and the size is calculated differently:
(size of histogram / 16) + 1 for the timestamp.
a3d1a46eda/promql/engine.go (L1928)
When the size of the histogram is 16k, then the first calculation gives k
but the second gives k+1 for the sample count.
If the histogram size is 16k+8, then both would give k+1.
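A worked Go illustration of those two formulas (k is arbitrary):

```go
package main

import "fmt"

func main() {
	k := 100
	histSize := 16 * k // histogram size in bytes, a multiple of 16

	hPointSize := (histSize + 8) / 16 // HPoint: (size of histogram + 8 for timestamp) / 16
	sampleSize := histSize/16 + 1     // Sample: (size of histogram / 16) + 1 for timestamp
	fmt.Println(hPointSize, sampleSize) // 100 101: k vs k+1

	histSize = 16*k + 8
	fmt.Println((histSize+8)/16, histSize/16+1) // 101 101: both k+1
}
```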
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Restrict the capacity of the first argument to `append()` to force an allocation.
This is for the slice implementation only.
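A minimal sketch of the trick: a three-index slice expression caps the first argument's capacity at its length, so `append` must allocate.

```go
package sketch

// appendForcedAlloc limits dst's capacity to its length, so append cannot
// write into dst's existing backing array and has to allocate a new one.
func appendForcedAlloc(dst, src []float64) []float64 {
	return append(dst[:len(dst):len(dst)], src...)
}
```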
Signed-off-by: Domantas Jadenkus <djadenkus@gmail.com>
* Extract method to make it easier to test.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Remove superfluous interface definition.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add test cases for existing instant query functionality.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add support for testing range queries
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Expand test coverage for instant queries and clarify error when a float is returned but a histogram is expected (or vice versa)
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Improve error message formatting
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add test case for instant query command with invalid timestamp
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Fix linting warning.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Remove superfluous print statement and expected result
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Fix linting warning.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add note about ordered range eval commands.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Check that matrix results are always sorted by labels.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
Using testify outside of unit tests results in panics rather than a
useful error for the user.
Fixes #13703
Signed-off-by: David Leadbeater <dgl@dgl.cx>
This is a bit tough to explain, but I'll try:
`rate` & friends have a sophisticated extrapolation algorithm.
Usually, we extrapolate the result to the total interval specified in
the range selector. However, if the first sample within the range is
too far away from the beginning of the interval, or if the last sample
within the range is too far away from the end of the interval, we
assume the series has just started half a sampling interval before the
first sample or after the last sample, respectively, and shorten the
extrapolation interval correspondingly. We calculate the sampling
interval by looking at the average time between samples within the
range, and we define "too far away" as "more than 110% of that
sampling interval".
However, if this algorithm leads to an extrapolated starting value
that is negative, we limit the start of the extrapolation interval to
the point where the extrapolated starting value is zero.
At least that was the intention.
What we actually implemented is the following: If extrapolating all
the way to the beginning of the total interval would lead to an
extrapolated negative value, we would only extrapolate to the zero
point as above, even if the algorithm above would have selected a
starting point that is just half a sampling interval before the first
sample and that starting point would not have an extrapolated negative
value. In other words: What was meant as a _limitation_ of the
extrapolation interval yielded a _longer_ extrapolation interval in
this case.
There is an exception to the case just described: If the increase of
the extrapolation interval is more than 110% of the sampling interval,
we suddenly drop back to extrapolating only to half a sampling interval.
This behavior can be nicely seen in the testcounter_zero_cutoff test,
where the rate goes up all the way to 0.7 and then jumps back to 0.6.
This commit changes the behavior to what was (presumably) intended
from the beginning: The extension of the extrapolation interval is
only limited if actually needed to prevent extrapolation to negative
values, but the "limitation" never leads to _more_ extrapolation
anymore.
The difference is subtle, and probably it never bothered anyone.
However, if you calculate the rate of a classic histogram, the old
behavior might create non-monotonic histograms as a result (because of
the jumps you can see nicely in the old version of the
testcounter_zero_cutoff test). With this fix, that doesn't happen
anymore.
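A simplified Go sketch (not the actual engine code) of the corrected ordering: the threshold rule picks the extrapolation distance first, and the zero-point rule can then only shorten it, never lengthen it:

```go
package main

import "fmt"

// extrapolationInterval returns the interval over which the rate is
// extrapolated, given the observed sampled interval, the distances of the
// first/last samples from the range boundaries, the average interval between
// samples, the first sample's value, and the raw increase over the range.
func extrapolationInterval(sampledInterval, durationToStart, durationToEnd, avgInterval, firstValue, increase float64) float64 {
	threshold := avgInterval * 1.1
	// If the first sample is too far from the range start, assume the series
	// began half a sampling interval before it.
	if durationToStart >= threshold {
		durationToStart = avgInterval / 2
	}
	// Counters cannot be negative: if extrapolating by durationToStart would
	// cross zero, stop at the extrapolated zero point instead. Because this
	// runs after the capping above, it can only shorten the interval.
	if increase > 0 && firstValue >= 0 {
		durationToZero := sampledInterval * (firstValue / increase)
		if durationToZero < durationToStart {
			durationToStart = durationToZero
		}
	}
	if durationToEnd >= threshold {
		durationToEnd = avgInterval / 2
	}
	return sampledInterval + durationToStart + durationToEnd
}

func main() {
	// First sample close to the range start, but its value is tiny compared
	// to the increase, so the zero point caps the extension at 0.6s.
	fmt.Println(extrapolationInterval(60, 10, 10, 15, 1, 100)) // 70.6
}
```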
Signed-off-by: beorn7 <beorn@grafana.com>
* add custom buckets to native histogram model
* simple copy for custom bounds
* return errors for unsupported add/sub operations
* add test cases for string and update appendhistogram in scrape to account for new schema
* check fields which are supposed to be unused but may affect results in equals
* allow appending custom buckets histograms regardless of max schema
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Aggregations discard the metric name, so don't try to
include it in the error message.
Add a test that generates this warning.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This adds support for the new grammar of `{"metric_name", "l1"="val"}` to promql and some of the exposition formats.
This grammar will also be valid for non-UTF-8 names.
UTF-8 names will not be considered valid unless model.NameValidationScheme is changed.
This does not update the go expfmt parser in text_parse.go, which will be addressed by https://github.com/prometheus/common/issues/554/.
Part of https://github.com/prometheus/prometheus/issues/13095
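A minimal sketch of parsing the new selector form (the quoted names here are legacy-valid; names outside the legacy character set additionally require changing model.NameValidationScheme as noted above):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/promql/parser"
)

func main() {
	// The quoted metric name appears as the first entry inside the braces;
	// UTF-8 names would additionally need model.NameValidationScheme set to
	// model.UTF8Validation.
	expr, err := parser.ParseExpr(`{"metric_name", "l1"="val"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(expr.String())
}
```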
Signed-off-by: Owen Williams <owen.williams@grafana.com>
Fixes #11708.
If a range vector is fixed in time with the @ modifier, it still gets
moved around for different steps in a range query. Since no additional
points are retrieved from the TSDB, this steadily empties the range,
leading to the weird behavior described in issue #11708.
This only happens for functions listed in `AtModifierUnsafeFunctions`,
and the only one of those that takes a range vector is `predict_linear`,
which is the reason why we see it only for this particular function.
Signed-off-by: beorn7 <beorn@grafana.com>