This is to avoid copying the many fields of a histogram.Histogram all
the time.
This also fixes a bunch of formerly broken tests.
Signed-off-by: beorn7 <beorn@grafana.com>
* Add test case to showcase the problem in #9590
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
* Don't unwrap ParenExpr in newStepInvariantExpr
Fixes #9590
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
This creates a new `model` directory and moves all data-model related
packages over there:
exemplar labels relabel rulefmt textparse timestamp value
All the others are more or less utilities and have been moved to `util`:
gate logging modtimevfs pool runtime
Signed-off-by: beorn7 <beorn@grafana.com>
* promql: copy data when short-circuiting
Because the range query loop re-uses the output buffer each time round,
we must copy results into the buffer rather than using input as output.
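A minimal sketch of the pattern, with illustrative names rather than the engine's actual identifiers:
```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// evalStep produces one step's result. Returning `in` directly (the old
// short-circuit) would alias memory that the range loop reuses on the next
// step; copying into the reusable output buffer keeps earlier results intact.
func evalStep(in, out []sample) []sample {
	out = append(out[:0], in...) // copy, don't alias the input
	return out
}

func main() {
	buf := make([]sample, 0, 8)
	fmt.Println(evalStep([]sample{{t: 1000, v: 1}}, buf))
}
```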
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add benchmark case for many-to-one join
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* query_range: compute join signatures just once
For an expression like `a + on(p,q) b`, extract the `p,q` part from each
series once, instead of re-computing at every step of the range.
Although there was a cache, computing the key by concatenating all
labels was expensive.
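Roughly the idea, sketched with simplified types (the real engine works on labels.Labels and byte signatures):
```go
package main

import (
	"fmt"
	"strings"
)

type labelSet map[string]string

// onSignature extracts only the labels named in the on() clause,
// e.g. p,q for `a + on(p,q) b`.
func onSignature(ls labelSet, on []string) string {
	vals := make([]string, 0, len(on))
	for _, name := range on {
		vals = append(vals, ls[name])
	}
	return strings.Join(vals, "\xff")
}

func main() {
	series := []labelSet{
		{"p": "1", "q": "x", "instance": "a"},
		{"p": "2", "q": "y", "instance": "b"},
	}
	on := []string{"p", "q"}

	// Compute each series' join signature once, before the range loop ...
	sigs := make([]string, len(series))
	for i, ls := range series {
		sigs[i] = onSignature(ls, on)
	}

	// ... and reuse it at every step instead of rebuilding a key from all labels.
	for step := 0; step < 3; step++ {
		for i := range series {
			_ = sigs[i] // used to match LHS and RHS samples
		}
	}
	fmt.Println(sigs)
}
```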
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add benchmark for query_range with topk
Modify sample data so values within a metric differ
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Optimise topk where k==1
In this case we don't need a heap to keep track of values; just a single
slot is fine.
Simplify the initialization of the heap: since all cases start off as a
single-item heap we can just assign the value directly.
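A sketch of the k==1 fast path (illustrative, not the engine's actual code):
```go
package main

import "fmt"

type sample struct{ v float64 }

// topkOne keeps a single "best so far" slot instead of maintaining a heap.
func topkOne(samples []sample) sample {
	best := samples[0]
	for _, s := range samples[1:] {
		if s.v > best.v {
			best = s
		}
	}
	return best
}

func main() {
	fmt.Println(topkOne([]sample{{3}, {7}, {5}}))
}
```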
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Allow at least one slot in results for topk, quantile
k isn't set for quantile, but we need space to start collecting values
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In the previous version, 1.18.0, the "megacheck" linter paid attention
to the '//lint:ignore' comment, but that linter is no longer included.
Newer versions pay attention to '//nolint:<linter>,<linter>,...'
comments, optionally followed by a "second" comment introduced by '//'.
Update the directives to use this style.
This is related to prometheus/blackbox_exporter#738 and
prometheus/blackbox_exporter#745.
Signed-off-by: Marcelo E. Magallon <marcelo.magallon@grafana.com>
This commit adds the `@ <timestamp>` modifier as per this design doc: https://docs.google.com/document/d/1uSbD3T2beM-iX4-Hp7V074bzBRiRNlqUdcWP6JTDQSs/edit.
An example query:
```
rate(process_cpu_seconds_total[1m])
and
topk(7, rate(process_cpu_seconds_total[1h] @ 1234))
```
which ranks series based on the 1h rate evaluated at Unix timestamp 1234 but actually plots the 1m rate.
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* promql: Removed the global and added the ability to use a better interval for subqueries when none is specified
## Changes
* Refactored tests for better hints testing
* Added various TODOs in places to enhance later.
* Replaced the DefaultEvalInterval global with a func(rangeMillis int64) int64 option in the engine opts
Motivation: At Thanos we would love to have better control over the subquery step/interval.
This is important for choosing the proper resolution. I think having a proper step also does not hurt
Prometheus and remote-read users. Especially on a stateless querier we do not know the evaluation interval,
and in fact assuming a global one can even be wrong for Prometheus itself.
I think ideally we could try to have at least 3 samples within the range, the same
way the Prometheus UI and Grafana assume.
In any case, this interface allows deciding on a per-PromQL-user basis.
Open question: Is taking the parent interval a smart move?
Motivation for removing the global: I spent an hour fighting with:
```
=== RUN TestEvaluations
TestEvaluations: promql_test.go:31: unexpected error: error evaluating query "absent_over_time(rate(nonexistant[5m])[5m:])" (line 687): unexpected error: runtime error: integer divide by zero
--- FAIL: TestEvaluations (0.32s)
FAIL
```
In the end I found that this fails on most versions, including this master, if you run the test alone. If run together with many
other tests it passes. This is due to SetDefaultEvaluationInterval(1 * time.Minute)
in a test that is run before TestEvaluations. Thanks to globals (:
Let's fix it by dropping this global.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added issue links for TODOs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Removed irrelevant changes.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Add errors and Warnings to SeriesSet
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Change Querier interface and refactor accordingly
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor promql/engine to propagate warnings at eval stage
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Make sure all the series from all Selects are pre-advanced
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Separate merge series sets
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Clean
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor merge querier failure handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.
* Secondary querier logic is hidden, instead of the weird failed-series-set logic we had.
* Fanout is well commented
* Fanout closing records all errors
* MergeQuerier improved API (clearer)
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (next = false).
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix formatting
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix CI issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Added final tests for error handling.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Calling Select after the first Next is done will panic.
NOTE: in lazySeriesSet we could in theory just panic; however, I think we can
simply return an error, as it will panic in expand anyway.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Utilize errWithWarnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix recently introduced expansion issue
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add tests for secondary querier error handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Implement lazy merge
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add name to test cases
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Reorganize
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Remove redundant warnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix rebase mistake
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Use Go 1.14's new hash/maphash to hash both the RHS and LHS instead of XOR'ing
their hashes, which has been resulting in hash collisions.
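A sketch of the technique using hash/maphash (the function name and separator byte are illustrative; the engine hashes label signatures):
```go
package main

import (
	"fmt"
	"hash/maphash"
)

var seed = maphash.MakeSeed()

// jointHash feeds both sides into one hash. XOR-ing two independent hashes is
// symmetric and cancels out whenever both sides hash to the same value, which
// is what caused the collisions.
func jointHash(lhs, rhs string) uint64 {
	var h maphash.Hash
	h.SetSeed(seed)
	h.WriteString(lhs)
	h.WriteByte(0xff) // separator so the boundary between sides is unambiguous
	h.WriteString(rhs)
	return h.Sum64()
}

func main() {
	fmt.Println(jointHash(`{a="1"}`, `{b="2"}`))
}
```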
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Refactor engine labelset signature generation, just use labels.Labels
instead of hashes.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments; function comments + store result of
lhs.String+rhs.String as key.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Replace all signatureFunc usage with signatureFuncString.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Make optimizations to labels String function and generation of rhs+lhs
as string in resultMetric.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Use separate string functions that don't use strconv just for engine
maps.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Use an invalid byte separator instead of quoting, and attach a buffer to
EvalNodeHelper instead of using a global pool in the labels package.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address more review comments; labels now has a function that builds a
byte slice without turning it into a string.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Use two different non-ASCII hex codes as byte separators between labels
and between sets of labels when building the bytes of a Labels struct.
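The shape of the idea, as a hedged sketch; the actual separator values and Labels internals live in the labels package:
```go
package main

import "fmt"

const (
	setSep   = '\xfe' // marks the start of a label set
	labelSep = '\xff' // separates names and values within the set
)

type label struct{ name, value string }

// bytesKey builds a unique []byte key for a label set without allocating a
// string; buf is reused across calls.
func bytesKey(buf []byte, ls []label) []byte {
	buf = append(buf[:0], setSep)
	for i, l := range ls {
		if i > 0 {
			buf = append(buf, labelSep)
		}
		buf = append(buf, l.name...)
		buf = append(buf, labelSep)
		buf = append(buf, l.value...)
	}
	return buf
}

func main() {
	buf := make([]byte, 0, 64)
	fmt.Printf("%q\n", bytesKey(buf, []label{{"job", "api"}, {"instance", "a"}}))
}
```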
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* We only need the second invalid byte separator at the beginning of a
labels.Bytes.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
This addresses fabxc's TODO.
More importantly, it now properly defers the
querier.Close(). Previously, if a panic happened after creation of the
querier within the populateSeries function, querier.Close() was never called.
The latter was responsible for #6977.
Signed-off-by: beorn7 <beorn@grafana.com>
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that
Prometheus relies on remote read results being sorted. This is because we use the selected data from remote reads in MergeSeriesSet,
which relies on sorting.
While working on https://github.com/prometheus/prometheus/pull/5882 I found that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.
Also, I don't know what TestSelectSorted was testing before, but now it's testing sorting.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Make lookbackDelta an option of QueryEngine
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* julius' suggestion
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* remove trivial getter
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Assume lookback delta is always > 0
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* add debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* don't expose lookback delta
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Specify that lookback delta is also used in federation
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix federation test
While we have added some logic to the promql engine to keep it backwards
compatible and have a 5 minute lookback by default, the web/ package is
likely to really be internal to Prometheus and we should not add the
same kind of heuristics here.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* lookback delta: Fix debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Since we use ActiveQueryTracker to check for concurrency in
d992c36b3a, it does not make sense to keep
the MaxConcurrent value as an option of the PromQL engine.
This pull request removes it from the PromQL engine options, sets the
max concurrent metric to -1 if there is no active query tracker, and uses
the value of the active query tracker otherwise.
It removes dead code and will also inform people who import the promql
package that we made this change, as it breaks the EngineOpts struct.
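Roughly what the new wiring looks like for importers of the package. This is a hedged sketch; exact field names, constructor signatures, and import paths depend on the promql and go-kit versions in use, and the storage path and limits here are illustrative:
```go
package main

import (
	"time"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/promql"
)

func main() {
	logger := log.NewNopLogger()

	opts := promql.EngineOpts{
		Logger:     logger,
		MaxSamples: 50000000,
		Timeout:    2 * time.Minute,
		// MaxConcurrent is gone; concurrency is bounded by the tracker
		// (20 concurrent queries here), or unbounded if it is nil.
		ActiveQueryTracker: promql.NewActiveQueryTracker("data/", 20, logger),
	}
	_ = promql.NewEngine(opts)
}
```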
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Move check for empty VectorSelector to typeChecking
* Move check for twice set metric name to typeChecking
* Make child of MatrixSelector a general Node
* rename checkType to checkAST
* Rename fail to addParseErr
* Remove trailing whitespace
Signed-off-by: Tobias Guggenmos <tguggenm@redhat.com>
For yacc-generated parsers, the convention is to capitalize the names of item types provided by the lexer, which makes it easy to distinguish lexer tokens (capitalized) from nonterminal symbols (not capitalized) in language grammars.
This convention is also followed by the (non generated) go compiler (see https://golang.org/pkg/go/token/#Token).
Part of the parser rewrite described in #6256.
Signed-off-by: Tobias Guggenmos <tguggenm@redhat.com>
* Added query logging for prometheus.
Options added:
1) active.queries.filepath: Filename where queries will be recorded
2) active.queries.filesize: Size of the file where queries will be recorded.
Functionality added:
All active queries are now logged to a file. If Prometheus crashes unexpectedly, these queries are also printed to stdout on the next run.
Queries are written concurrently to an mmapped file and removed once they are done; their positions in the file are reused. They are written in JSON format. However, due to the dynamic nature of the application, the JSON has an extra comma after the last query and is missing the closing ']'. There may also be null bytes at the tail of the file.
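A hedged sketch of how a reader could recover entries from the format described above (trailing comma, missing ']', possible trailing null bytes); the helper name and JSON field names are illustrative:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// recoverQueryLog turns the raw mmapped file contents into a parseable JSON
// array: strip trailing NULs, drop the dangling comma, and close the array.
func recoverQueryLog(raw []byte) ([]json.RawMessage, error) {
	raw = bytes.TrimRight(raw, "\x00")
	raw = bytes.TrimRight(raw, ",\n ")
	raw = append(raw, ']')
	var entries []json.RawMessage
	err := json.Unmarshal(raw, &entries)
	return entries, err
}

func main() {
	file := []byte(`[{"query":"up","timestamp_sec":1}, {"query":"rate(x[5m])","timestamp_sec":2},` + "\x00\x00")
	entries, err := recoverQueryLog(file)
	fmt.Println(len(entries), err)
}
```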
Signed-off-by: Advait Bhatwadekar <advait123@ymail.com>
With the next release of client_golang, Summaries will not have
objectives by default. To not lose the objectives we have right now,
explicitly state the current default objectives.
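For reference, stating the objectives explicitly looks like this (the metric name is illustrative; the objectives map mirrors client_golang's previous defaults):
```go
package main

import "github.com/prometheus/client_golang/prometheus"

// Explicit objectives keep the Summary emitting the same quantiles once the
// new client_golang release stops applying them by default.
var requestDuration = prometheus.NewSummary(prometheus.SummaryOpts{
	Name:       "example_request_duration_seconds", // illustrative name
	Help:       "Request duration in seconds.",
	Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
})

func main() {
	prometheus.MustRegister(requestDuration)
	requestDuration.Observe(0.42)
}
```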
Signed-off-by: beorn7 <beorn@grafana.com>
For my benchmarks on aggregation this reduces allocations by ~5% (~10%
time improvement):
```
benchmark old ns/op new ns/op delta
BenchmarkEvaluations/benchdata/aggregators.test/promxy-4 727692 649626 -10.73%
benchmark old allocs new allocs delta
BenchmarkEvaluations/benchdata/aggregators.test/promxy-4 2566 2434 -5.14%
benchmark old bytes new bytes delta
BenchmarkEvaluations/benchdata/aggregators.test/promxy-4 162760 148854 -8.54%
```
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
The documentation for Context states that this is just as good:
// If Done is not yet closed, Err returns nil.
// If Done is closed, Err returns a non-nil error
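In code, the two checks are interchangeable; this sketch shows the simpler form the quoted documentation permits:
```go
package main

import (
	"context"
	"fmt"
)

func checkCancelled(ctx context.Context) error {
	// Instead of:
	//   select {
	//   case <-ctx.Done():
	//       return ctx.Err()
	//   default:
	//   }
	// the documentation guarantees this is just as good:
	return ctx.Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	fmt.Println(checkCancelled(ctx)) // <nil>
	cancel()
	fmt.Println(checkCancelled(ctx)) // context canceled
}
```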
Signed-off-by: Bryan Boreham <bryan@weave.works>
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of fmt package for errors in favour of pkg/errors
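An illustrative before/after of each point, using a hypothetical config-loading helper (not code from this change):
```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

func loadConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// (i)/(iii): wrap with context instead of fmt.Errorf("...: %s", err).
		return errors.Wrapf(err, "open config %q", path)
	}
	defer f.Close()

	// (ii): no formatting directives, so a plain constructor is enough.
	return errors.New("config loading not implemented")
}

func main() {
	fmt.Println(loadConfig("/nonexistent/prometheus.yml"))
}
```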
Signed-off-by: tariqibrahim <tariq181290@gmail.com>
* Expose lexer item types
We have generally agreed to expose AST types / values that are necessary
to make sense of the AST outside of the promql package. Currently the
`UnaryExpr`, `BinaryExpr`, and `AggregateExpr` AST nodes store the lexer
item type to indicate the operator type, but since the individual item
types aren't exposed, an external user of the package cannot determine
the operator type. So this PR exposes them.
Although not all item types are required to make sense of the AST (some
are really only used in the lexer), I decided to expose them all here to
be somewhat more consistent. Another option would be to not use lexer
item types at all in AST nodes.
The concrete motivation is my work on the PromQL->Flux transpiler, but
this ought to be useful for other cases as well.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Fix item type names in tests
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* *: use latest release of staticcheck
It also fixes a couple of things in the code flagged by the additional
checks.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Use official release of staticcheck
Also run 'go list' before staticcheck to avoid failures when downloading packages.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
Since it is used in a type assertion, having it as an alias to the
error interface is the same as saying 'error', i.e. it succeeds for
all types of error. Change it to a struct, which is a concrete type, so
the type assertion will only succeed if the type is identical.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* use the Welford/Knuth method to compute standard deviation and variance, avoiding float precision issues (a sketch follows below)
* use better method for calculating avg and avg_over_time
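A sketch of the Welford/Knuth update (not the engine's actual function), showing why it is stable: it accumulates deviations from the running mean instead of summing squares of large values:
```go
package main

import (
	"fmt"
	"math"
)

// welford returns the mean, population variance, and standard deviation of
// values in a single numerically stable pass.
func welford(values []float64) (mean, variance, stddev float64) {
	var count, m2 float64
	for _, v := range values {
		count++
		delta := v - mean
		mean += delta / count
		m2 += delta * (v - mean)
	}
	if count > 0 {
		variance = m2 / count
	}
	return mean, variance, math.Sqrt(variance)
}

func main() {
	// Large offsets like 1e9 are where a naive sum-of-squares loses precision.
	fmt.Println(welford([]float64{1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16}))
}
```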
Signed-off-by: Dan Cech <dcech@grafana.com>
* Allow for BufferedSeriesIterator instances to be created without an underlying iterator, to simplify their usage.
Signed-off-by: Alin Sinpalean <alin.sinpalean@gmail.com>
* Add Start/End to SelectParams
* Make remote read use the new selectParams for start/end
This commit continues to send the start/end time of the remote read
query as the overarching promql time; the specific range of data that
the query is interested in receiving a response for is now part of the
ReadHints (upstream discussion in #4226).
* Remove unused vendored code
The genproto.sh script was updated, but the code wasn't regenerated.
This simply removes the vendored deps that are no longer part of the
codegen output.
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
Right now promql won't time out a request if populating the iterators
takes a long time.
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
Fixes #4289
This changes the Walk/Inspect API inside the promql package to bubble
up errors. This is done by having the inspector return an error (instead
of a bool) and then bubbling that up in the Walk. This way, if any error
is encountered in Walk(), the walk will stop and return the error.
This avoids issues where errors from the Querier were being ignored
(causing incorrect promql evaluation).
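The shape of the change, with minimal stand-in types (the real promql Visitor and Walk also carry node paths and richer AST types):
```go
package main

import (
	"errors"
	"fmt"
)

// Node and leaf are minimal stand-ins for the promql AST types.
type Node interface{ Children() []Node }

type leaf struct{ name string }

func (leaf) Children() []Node { return nil }

// Visitor now returns an error instead of a bool, so a failure (e.g. from the
// Querier) stops the walk and is reported to the caller.
type Visitor interface {
	Visit(Node) (Visitor, error)
}

func Walk(v Visitor, node Node) error {
	v, err := v.Visit(node)
	if err != nil || v == nil {
		return err
	}
	for _, c := range node.Children() {
		if err := Walk(v, c); err != nil {
			return err
		}
	}
	return nil
}

type failingInspector struct{}

func (f failingInspector) Visit(n Node) (Visitor, error) {
	if l, ok := n.(leaf); ok && l.name == "bad" {
		return nil, errors.New("querier error while populating iterators")
	}
	return f, nil
}

func main() {
	fmt.Println(Walk(failingInspector{}, leaf{name: "bad"}))
}
```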
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
Fixes #4136
* Move range logic to 'eval'
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make aggregate range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* PromQL is statically typed, so don't eval to find the type.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Extend rangewrapper to multiple exprs
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Start making function evaluation ranged
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make instant queries a special case of range queries
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Eliminate evalString
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Evaluate range vector functions one series at a time
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make unary operators range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make binops range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Pass time to range-aware functions.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make simple _over_time functions range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reduce allocs when working with matrix selectors
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add basic benchmark for range evaluation
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reuse objects for function arguments
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Do dropmetricname and allocating output vector only once.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add range-aware support for range vector functions with params
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Optimise holt_winters, cut cpu and allocs by ~25%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make rate&friends range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make more functions range aware. Document calling convention.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make date functions range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make simple math functions range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Convert more functions to be range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make more functions range aware
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Specialcase timestamp() with vector selector arg for range awareness
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove transition code for functions
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove the rest of the engine transition code
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove more obsolete code
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove the last uses of the eval* functions
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove engine finalizers to prevent corruption
The finalizers set by matrixSelector were being called
just before the value they were returning to the pool
was provided to the caller. Thus a concurrent query
could corrupt the data that had just been returned to the user.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add new benchmark suite for range functions
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Migrate existing benchmarks to new system
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Expand promql benchmarks
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Simplify tests by removing unused range code
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* When testing instant queries, check range queries too.
To protect against subsequent steps in a range query being
affected by the previous steps, add a test that takes an instant
query that we know works and evaluates it again as a range query,
with the timestamp we care about not being the first step.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reuse ring for matrix iters. Put query results back in pool.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reuse buffer when iterating over matrix selectors
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Unary minus should remove metric name
Cut down benchmarks for faster runs.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reduce repetition in benchmark test cases
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Work series by series when doing normal vectorSelectors
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Optimise benchmark setup, cuts time by 60%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Have rangeWrapper use an evalNodeHelper to cache across steps
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Use evalNodeHelper with functions
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Cache dropMetricName within a node evaluation.
This saves both the calculations and allocs done by dropMetricName
across steps.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reuse input vectors in rangewrapper
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reuse the point slices in the matrixes input/output by rangeWrapper
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make benchmark setup faster using AddFast
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Simplify benchmark code.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add caching in VectorBinop
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Use xor to have one-level resultMetric hash key
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add more benchmarks
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Call Query.Close in apiv1
This allows point slices allocated for the response data
to be reused by later queries, saving allocations.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Optimise histogram_quantile
It's now 5-10% faster with 97% less garbage generated for 1k steps
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make the input collection in rangeVector linear rather than quadratic
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Optimise label_replace, for 1k steps 15x fewer allocs and 3x faster
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Optimise label_join, 1.8x faster and 11x less memory for 1k steps
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Expand benchmarks, cleanup comments, simplify numSteps logic.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Address Fabian's comments
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Comments from Alin.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Address jrv's comments
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Remove dead code
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Address Simon's comments.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Rename populateIterators, pre-init some sizes
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Handle case where function has non-matrix args first
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Split rangeWrapper out to rangeEval function, improve comments
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Cleanup and make things more consistent
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Make EvalNodeHelper public
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Fabian's comments.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
This attempts to close #3973.
Handles cases where the length of the input vector to an aggregate topk
/ bottomk function is less than the K parameter. The change updates
Prometheus to allocate a result vector the same length as the input
vector in these cases.
Previously Prometheus would out-of-memory panic for large K values. This
change makes that unlikely unless the size of the input vector is
equally large.
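The fix amounts to sizing the result by the smaller of k and the input length; a sketch with illustrative types:
```go
package main

import "fmt"

type sample struct{ v float64 }

// resultSizeFor avoids allocating k slots when the input vector is shorter,
// which previously allowed a huge k to trigger an out-of-memory panic.
func resultSizeFor(k int, input []sample) int {
	if len(input) < k {
		return len(input)
	}
	return k
}

func main() {
	input := []sample{{1}, {2}, {3}}
	result := make([]sample, 0, resultSizeFor(1_000_000_000, input))
	fmt.Println(cap(result)) // 3
}
```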
Signed-off-by: David King <dave@davbo.org>
This adds a parameter to the storage selection interface which allows
query engine(s) to pass information about the operations surrounding a
data selection.
This can for example be used by remote storage backends to infer the
correct downsampling aggregates that need to be provided.
* A TotalFoo name suggested a comprehensive timing, but TotalEvalTime was only part
of the Exec timings, together with the Queue timings
* The other option was to rename ExecTotalTime to TotalExecTime, but
there was already ExecQueueTime, suggesting that Exec is some sort of
grouping
API consumers should be able to get insight into the query run times.
The UI currently measures total roundtrip times. This PR allows for more
fine grained metrics to be exposed.
* adds new timer for total execution time (queue + eval)
* expose new timer, queue timer, and eval timer in stats field of the
range query response:
```json
{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [],
    "stats": {
      "execQueueTimeNs": 4683,
      "execTotalTimeNs": 2086587,
      "totalEvalTimeNs": 2077851
    }
  }
}
```
* stats field is optional, only set when query parameter `stats` is not
empty
Try it via
```sh
curl 'http://localhost:9090/api/v1/query_range?query=up&start=1486480279&end=1486483879&step=14000&stats=true'
```
Review feedback
* moved query stats json generation to query_stats.go
* use seconds for all query timers
* expose all timers available
* Changed ExecTotalTime string representation from Exec queue total time to Exec total time
This means that if there is no stale marker, only the usual staleness
delta (5m) applies.
It has occurred to me that there is an oddity in the heuristic. It works
fine as long as you have 2 points within the last 5m, but breaks down
when the time window advances to the point where you have just 1 point.
Suppose you had points at t=0 and t=10. With the heuristic it goes stale
at t=51, up until t=300. However from t=301 until t=310 we only
see the t=10 point and the series comes back to life. That is not
desirable.
I don't see a way to keep this form of heuristic working given this
issue, so I'm removing it.
* Re-add contexts to storage.Storage.Querier()
These are needed when replacing the storage by a multi-tenant
implementation where the tenant is stored in the context.
The 1.x query interfaces already had contexts, but they got lost in 2.x.
* Convert promql.Engine to use native contexts
To cover the cases where stale markers may not be available,
we need to infer the interval and mark series stale based on that.
As we're lacking stale markers this is less accurate, however
it should be good enough for these cases.
We need 4 intervals: say we had data at t=0 and t=10,
coming via federation. The next data point should be at t=20, however it
could take up to t=30 for it to actually be ingested, t=40 for it to be
scraped via federation and t=50 for that to be ingested.
We then add 10% on top of that for slack, as we do elsewhere.
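As a quick sanity check of that arithmetic (a sketch, not the engine's code): for a 10s interval the inferred staleness window is 4 × 10s × 1.1 = 44s.
```go
package main

import (
	"fmt"
	"time"
)

// inferredStaleness is four intervals plus 10% slack, per the reasoning above.
func inferredStaleness(interval time.Duration) time.Duration {
	return time.Duration(float64(4*interval) * 1.1)
}

func main() {
	fmt.Println(inferredStaleness(10 * time.Second)) // 44s
}
```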
For instant vectors, if "stale" is the newest sample,
ignore the time series.
For range vectors, filter out "stale" samples.
Make it possible to inject "stale" samples in promql tests.
Query and query_range should return the timestamp
at which an evaluation is performed, not the timestamp
of the data. This is because that is what query_range asked
for, and we need to keep query consistent with that.
Query for a matrix remains unchanged, returning the literal
matrix.
Make the timestamp of instant vectors be the timestamp of the sample
rather than the evaluation. We were not using this anywhere, so this is
safe.
Add a function to return the timestamp of samples in an instant vector.
Fixes#1557
* Use request.Context() instead of a global map of contexts.
* Add some basic opentracing instrumentation on the query path.
* Remove tracehandler endpoint.
* Add max concurrent and current queries engine metrics
This commit adds two metrics to the promql/engine: the
number of max concurrent queries, as configured by the flag, and
the number of current queries being served+blocked in the engine.
This extracts Querier as an instantiable and closeable object
rather than just defining extending methods of the storage interface.
This improves composability and allows abstracting query transactions,
which can be useful for transaction-level caches, consistent data views,
and encapsulating teardown.
This is based on https://github.com/prometheus/prometheus/pull/1997.
This adds contexts to the relevant Storage methods and already passes
PromQL's new per-query context into the storage's query methods.
The immediate motivation is supporting multi-tenancy in Frankenstein, but
this could also be used by Prometheus's normal local storage to support
cancellations and timeouts at some point.
For Weaveworks' Frankenstein, we need to support multitenancy. In
Frankenstein, we initially solved this without modifying the promql
package at all: we constructed a new promql.Engine for every
query and injected a storage implementation into that engine which would
be primed to only collect data for a given user.
This is problematic to upstream, however. Prometheus assumes that there
is only one engine: the query concurrency gate is part of the engine,
and the engine contains one central cancellable context to shut down all
queries. Also, creating a new engine for every query seems like overkill.
Thus, we want to be able to pass per-query contexts into a single engine.
This change gets rid of the promql.Engine's built-in base context and
allows passing in a per-query context instead. Central cancellation of
all queries is still possible by deriving all passed-in contexts from
one central one, but this is now the responsibility of the caller. The
central query context is now created in main() and passed into the
relevant components (web handler / API, rule manager).
In a next step, the per-query context would have to be passed to the
storage implementation, so that the storage can implement multi-tenancy
or other features based on the contextual information.
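A sketch of the intended pattern (tenantKey and the two-minute timeout are illustrative, not part of this change):
```go
package main

import (
	"context"
	"fmt"
	"time"
)

type ctxKey string

const tenantKey ctxKey = "tenant"

func main() {
	// One central context owned by main(); cancelling it shuts down all queries.
	rootCtx, cancelAll := context.WithCancel(context.Background())
	defer cancelAll()

	// Each query derives its own context, which can carry per-query data
	// (e.g. the tenant) and its own timeout.
	queryCtx := context.WithValue(rootCtx, tenantKey, "team-a")
	queryCtx, cancel := context.WithTimeout(queryCtx, 2*time.Minute)
	defer cancel()

	fmt.Println(queryCtx.Value(tenantKey)) // the storage layer can read this
}
```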
This was only relevant so far for the benchmark suite as it would
recycle Expr for repetitions. However, the append is unnecessary as
each node is only inspected once when populating iterators, and
population must always start from scratch.
This also introduces error checking during benchmarks and fixes the so-far
undetected test errors during benchmarking.
Also, remove a style nit (two golint warnings fewer…).
See discussion in
https://groups.google.com/forum/#!topic/prometheus-developers/bkuGbVlvQ9g
The main idea is that the user of a storage shouldn't have to deal with
fingerprints anymore, and should not need to do an individual preload
call for each metric. The storage interface needs to be made more
high-level to not expose these details.
This also makes it easier to reuse the same storage interface for remote
storages later, as fewer roundtrips are required and the fingerprint
concept doesn't work well across the network.
NOTE: this deliberately gets rid of a small optimization in the old
query Analyzer, where we dedupe instants and ranges for the same series.
This should have a minor impact, as most queries do not have multiple
selectors loading the same series (and at the same offset).
This offers new semantics in allowing on() for matching
two single-element vectors with no known common labels.
Previously this was often done using on(dummy).
This also allows making it explicit that you meant
to do an aggregation without labels via by().
Fixes #1597.
PromQL only requires a much narrower interface than local.Storage in
order to run queries. Narrower interfaces are easier to replace and
test, too.
We could also change the web interface to use local.Querier, except that
we'll probably use appending functions from there in the future.
If the label doesn't exist on the one side, it's not copied.
All labels on the many side are included; this is a breaking change
but likely low impact.
The labels listed in the group_ modifier will be copied from the one
side to the many side. It will be valid to specify no labels.
This is intended to replace the existing ON/GROUP_* support.
The `unless` set operator can be used to return all vector elements from
the LHS which do not match the elements on the RHS. A use case is to
return all metrics for nodes which do not have a specific role:
node_load1 unless on(instance) chef_role{role="app"}
Fixes https://github.com/prometheus/prometheus/issues/1401
This removes the last (and in fact bogus) use of BoundaryValues.
Thus, a whole lot of unused (and arguably sub-optimal / ugly) code can
be removed here, too.
This will fix issue #1035 and will also help to make issue #1264 less
bad.
The fundamental problem in the current code:
In the preload phase, we quite accurately determine which chunks will
be used for the query being executed. However, in the subsequent step
of creating series iterators, the created iterators are referencing
_all_ in-memory chunks in their series, even the un-pinned ones. In
iterator creation, we copy a pointer to each in-memory chunk of a
series into the iterator. While this creates a certain amount of
allocation churn, the worst thing about it is that copying the chunk
pointer out of the chunkDesc requires a mutex acquisition. (Remember
that the iterator will also reference un-pinned chunks, so we need to
acquire the mutex to protect against concurrent eviction.) The worst
case happens if a series doesn't even contain any relevant samples for
the query time range. We notice that during preloading but then we
will still create a series iterator for it. But even for series that
do contain relevant samples, the overhead is quite bad for instant
queries that retrieve a single sample from each series, but still go
through all the effort of series iterator creation. All of that is
particularly bad if a series has many in-memory chunks.
This commit addresses the problem from two sides:
First, it merges preloading and iterator creation into one step,
i.e. the preload call returns an iterator for exactly the preloaded
chunks.
Second, the required mutex acquisition in chunkDesc has been greatly
reduced. That was enabled by a side effect of the first step, which is
that the iterator is only referencing pinned chunks, so there is no
risk of concurrent eviction anymore, and chunks can be accessed
without mutex acquisition.
To simplify the code changes for the above, the long-planned change of
ValueAtTime to ValueAtOrBeforeTime was performed at the same
time. (It should have been done first, but it kind of accidentally
happened while I was in the middle of writing the series iterator
changes. Sorry for that.) So far, we actively filtered the up to two
values that were returned by ValueAtTime, i.e. we invested work to
retrieve up to two values, and then we invested more work to throw one
of them away.
The SeriesIterator.BoundaryValues method can be removed once #1401 is
fixed. But I really didn't want to load even more changes into this
PR.
Benchmarks:
The BenchmarkFuzz.* benchmarks run 83% faster (i.e. about six times
faster) and allocate 95% fewer bytes. The reason for that is that the
benchmark reads one sample after another from the time series and
creates a new series iterator for each sample read.
To find out how much these improvements matter in practice, I have
mirrored a beefy Prometheus server at SoundCloud that suffers from
both issues #1035 and #1264. To reach steady state that would be
comparable, the server needs to run for 15d. So far, it has run for
1d. The test server currently has only half as many memory time series
and 60% of the memory chunks the main server has. The 90th percentile
rule evaluation cycle time is ~11s on the main server and only ~3s on
the test server. However, these numbers might get much closer over
time.
In addition to performance improvements, this commit removes about 150
LOC.
This has the advantage that the user doesn't need
to list all labels they want to keep (as with "by")
but without having to worry about inconsistent labels
as when there's only one time series (as with "keeping_common").
Almost all aggregation should use this rather than the existing
two options as it's much less error prone and easier to maintain
due to not having to always add in "job" plus whatever other common
job-level labels you have like "region".
When doing comparison operations on vectors, filtering
sometimes gets in the way and you have to go to a fair bit of
effort to work around it in order to always return a result.
The 'bool' modifier, instead of filtering, returns 0/1 depending
on the result of the comparison.
This is also a prerequisite to removing plain scalar/scalar comparisons,
as it maintains the current behaviour under a new syntax.
The current behaviour produces values that are not
from rules or scrapes. So if for example I have
a boolean 0/1 it can be returned as 0.2344589. This
prevents a number of advanced use cases, introduces
race conditions and can produce misleading graphs.
This commit removes the possibility of having multi-statement queries,
which had no full support anyway. This makes the caller responsible
for multi-statement semantics.
Multiple tests are no longer timing-dependent.
These changes allow doing range queries over scalar expressions.
Errors on bad types for range queries are now raised on query creation
rather than evaluation.
A high number of concurrent queries can slow each other down
so that none of them is reasonably responsive. This commit limits
the number of queries being concurrently executed.
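The usual way to implement such a limit in Go is a buffered channel acting as a semaphore; a minimal sketch, not the engine's exact code:
```go
package main

import (
	"fmt"
	"sync"
)

// gate allows at most cap(g) callers into the critical section at once.
type gate chan struct{}

func (g gate) enter() { g <- struct{}{} }
func (g gate) leave() { <-g }

func main() {
	const maxConcurrent = 2
	g := make(gate, maxConcurrent)

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			g.enter() // blocks while maxConcurrent queries are running
			defer g.leave()
			fmt.Println("executing query", i)
		}(i)
	}
	wg.Wait()
}
```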
This copies the evaluation logic from the current rules/ package.
The new engine handles the execution process from query string to final result.
It provides query timeout and cancellation and general flexibility for
future changes.
functions.go: Add evaluation implementation. Slight changes to in/out data but
not to the processing logic.
quantile.go: No changes.
analyzer.go: No changes.
engine.go: The actually new part. It mainly consists of evaluation methods
which were not changed.
setup_test.go: Copy of rules/helpers_test.go to setup test storage.
promql_test.go: Copy of rules/rules_test.go.