Commit graph

312 commits

Bryan Boreham 5cea37c069
Merge pull request #12682 from bboreham/contains-same-label-set
promql engine: check unique labels using existing map

ContainsSameLabelset constructs a map with the same hash key as the one used to compile the output of rangeEval, so we can use that one and save work.

Need to hold the timestamp so we can be sure we saw the same series in the same evaluation.
2023-08-14 14:12:47 +01:00
Bryan Boreham 0670e4771a promql engine: check unique labels using existing map
`ContainsSameLabelset` constructs a map with the same hash key as
the one used to compile the output of `rangeEval`, so we can use that
one and save work.

Need to hold the timestamp so we can be sure we saw the same series
in the same evaluation.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-13 18:09:10 +01:00
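A minimal sketch of the idea in the commit above (simplified, not the engine's actual code): while assembling the output, a map keyed by the labels hash records the timestamp at which each label set was last seen, so a duplicate label set within the same evaluation step is caught without a separate `ContainsSameLabelset` pass. Hash collisions are ignored here for brevity.

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

type sample struct {
	metric labels.Labels
	t      int64
}

// checkUnique flags two samples that share a label set at the same timestamp,
// i.e. duplicates within one evaluation step.
func checkUnique(samples []sample) error {
	seen := make(map[uint64]int64, len(samples)) // labels hash -> last timestamp seen
	for _, s := range samples {
		h := s.metric.Hash()
		if ts, ok := seen[h]; ok && ts == s.t {
			return fmt.Errorf("vector cannot contain metrics with the same labelset")
		}
		seen[h] = s.t
	}
	return nil
}

func main() {
	m := labels.FromStrings("job", "api")
	fmt.Println(checkUnique([]sample{{metric: m, t: 1000}, {metric: m, t: 1000}}))
}
```
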
Bryan Boreham 8d47b3d497
Merge pull request #12579 from charleskorn/timestamp
Don't recreate iterator for each series on each timestep when evaluating a query with `timestamp()`
2023-08-05 10:51:38 +01:00
Charles Korn 6087c555ed
Address PR feedback: clarify comment
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-08-01 13:30:10 +10:00
Charles Korn fb3935e8f9
Address PR feedback: rename method
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-08-01 13:30:07 +10:00
Julius Volz 531567d46e Drop metric name for "atan2" binary operator
The operator changes the meaning of the metric, so the metric name should
be dropped. Technically this would be a breaking change, but it's also very
obviously a bug and not likely that anyone depends on it.

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2023-07-24 14:36:02 +02:00
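A short sketch of what dropping the metric name amounts to, using the labels package (the metric name below is made up, and `Builder` method signatures have shifted between Prometheus versions):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	// The result series of `a atan2 b` should not keep a's metric name,
	// because the operator changes what the value means.
	lbls := labels.FromStrings(labels.MetricName, "instance_vertical_m", "instance", "a")
	dropped := labels.NewBuilder(lbls).Del(labels.MetricName).Labels()
	fmt.Println(dropped) // {instance="a"}
}
```
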
Charles Korn fde6ebb17d
Create per-series iterators only once per selector, rather than recreating them for each time step.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:21 +10:00
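A sketch of the pattern behind the iterator commits in this PR (types from the storage and tsdb/chunkenc packages; the evaluator plumbing, including lookback handling, is simplified away): expand the selector once, create one iterator per series, and reuse it via `Seek` at every step instead of asking the `Series` for a fresh iterator per timestep.

```go
package example

import (
	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

type seriesWithIterator struct {
	series storage.Series
	it     chunkenc.Iterator
}

// prepare expands the selector's series once and creates one iterator each.
func prepare(series []storage.Series) []seriesWithIterator {
	out := make([]seriesWithIterator, 0, len(series))
	for _, s := range series {
		out = append(out, seriesWithIterator{series: s, it: s.Iterator(nil)})
	}
	return out
}

// evalStep reuses each series' iterator at every step; Seek only ever moves
// forward, which is fine because steps are evaluated in timestamp order.
func evalStep(ts int64, prepared []seriesWithIterator) {
	for _, s := range prepared {
		if s.it.Seek(ts) == chunkenc.ValFloat {
			t, v := s.it.At()
			_, _ = t, v // use the sample at this step
		}
	}
}
```
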
Charles Korn 993618adea
Don't create a new iterator for every time step.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:21 +10:00
Charles Korn b114c0888d
Simplify loop
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:20 +10:00
Charles Korn a142998052
Expand series set just once
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:19 +10:00
Charles Korn eeface2e17
Inline method
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:19 +10:00
Charles Korn a2a2cc757e
Extract timestamp special case to its own method.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-07-20 11:24:18 +10:00
Giedrius Statkevičius 3f230fc9f8 promql: convert QueryOpts to interface
Convert QueryOpts to an interface so that downstream projects like
https://github.com/thanos-community/promql-engine can extend the query
options with engine-specific options that are not in the original
engine.

Will be used to enable query analysis per-query.

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
2023-07-03 16:20:31 +03:00
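Roughly what the interface conversion enables (a sketch; the accessor names follow upstream but should be treated as illustrative): a downstream engine can hand the PromQL engine its own options type that also carries engine-specific fields.

```go
package example

import "time"

// QueryOpts as an interface: the engine only needs the accessors.
type QueryOpts interface {
	EnablePerStepStats() bool
	LookbackDelta() time.Duration
}

// A downstream engine can implement the interface and carry extra options,
// e.g. a per-query analysis flag, without upstream needing to know about it.
type extendedOpts struct {
	perStepStats   bool
	lookbackDelta  time.Duration
	enableAnalysis bool // engine-specific extension
}

func (o extendedOpts) EnablePerStepStats() bool     { return o.perStepStats }
func (o extendedOpts) LookbackDelta() time.Duration { return o.lookbackDelta }
```
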
Bryan Boreham 67d2ef004d Placate lint
I think the version using scoping was better, but I'm out of energy to fight the linter.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-06-01 18:36:34 +00:00
Bryan Boreham bb0d8320dd promql: include parsing in active-query tracking
So that the max-concurrency limit is applied.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-06-01 18:16:05 +00:00
Bryan Boreham 71fc4f1516 promql: refactor: create query object before parsing
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-06-01 17:54:17 +00:00
Bryan Boreham 1f3821379c promql: refactor: extract fn to wait on concurrency limit
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-06-01 17:17:04 +00:00
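Taken together, the three commits above reshape query execution roughly as in this sketch (the tracker interface and function are simplified stand-ins, not the engine's real code): the query is registered with the active-query tracker, which enforces the max-concurrency limit, before parsing begins.

```go
package example

import (
	"context"

	"github.com/prometheus/prometheus/promql/parser"
)

// activeQueryTracker is a simplified stand-in for the engine's tracker.
type activeQueryTracker interface {
	Insert(ctx context.Context, query string) (int, error) // blocks on the concurrency limit
	Delete(index int)
}

func execQuery(ctx context.Context, qt activeQueryTracker, qs string) error {
	// Register (and wait for a slot) before parsing, so parsing is also
	// covered by the max-concurrency limit.
	idx, err := qt.Insert(ctx, qs)
	if err != nil {
		return err
	}
	defer qt.Delete(idx)

	expr, err := parser.ParseExpr(qs)
	if err != nil {
		return err
	}
	_ = expr // ... evaluate ...
	return nil
}
```
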
zenador 191bf9055b
Handle more arithmetic operators for native histograms (#12262)
Handle more arithmetic operators and aggregators for native histograms

This includes operators for multiplication (formerly known as scaling), division, and subtraction, plus aggregations for average and the avg_over_time function.

Stdvar and stddev will (for now) ignore histograms properly (rather than counting them but adding a 0 for them).

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-05-16 21:15:20 +02:00
Justin Lei 7bbf24b707 Make MemoizedSeriesIterator not implement chunkenc.Iterator
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-05-03 12:45:39 -07:00
Justin Lei 6985dcbe73 Optimize and test MemoizedSeriesIterator
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-05-02 08:53:18 -07:00
Matthieu MOREL 7e9acc2e46
golangci-lint: remove skip-cache and restore singleCaseSwitch rule
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-20 18:43:51 +02:00
Julien Pivotto f7c6130ff2
Merge pull request #12251 from prymitive/query_samples_total
Add query_samples_total metric
2023-04-20 15:48:24 +02:00
Matthieu MOREL bae9a21200
Merge branch 'main' into linter/nilerr
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-19 19:56:39 +02:00
beorn7 5b53aa1108 style: Replace else if cascades with switch
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.

The exceptions that I have found in our codebase are just these two:

* The `if else` is followed by an additional statement before the next
  condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require tagging the `for`
  loop, which probably tips the balance.

Why are `switch` statements more readable?

For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.

I'm sure the aforementioned wise coders can list even more reasons.

In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:22:31 +02:00
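For reference, the mechanical shape of the rewrite this commit applies throughout the codebase:

```go
package example

// Before: every condition after the first hides behind "} else if".
func classifyIf(x int) string {
	if x < 0 {
		return "negative"
	} else if x == 0 {
		return "zero"
	} else if x < 10 {
		return "small"
	}
	return "large"
}

// After: an expressionless switch keeps all conditions aligned and scannable.
func classifySwitch(x int) string {
	switch {
	case x < 0:
		return "negative"
	case x == 0:
		return "zero"
	case x < 10:
		return "small"
	default:
		return "large"
	}
}
```
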
beorn7 c3c7d44d84 lint: Adjust to the lint warnings raised by current versions of golint-ci
We haven't updated golint-ci in our CI yet, but this commit prepares
for that.

There are a lot of new warnings, and it is mostly because the "revive"
linter got updated. I agree with most of the new warnings, mainly
around not naming unused function parameters (although naming them is
justified in some cases for documentation purposes, whereas things like
mocks are a good example where not naming the parameter is clearer).

I'm pretty upset that the "empty block" warning now includes `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810 I have
disabled "revive" altogether in files where empty blocks are used
excessively, and I have made the effort to add individual
`// nolint:revive` where empty blocks are used just once or twice.
It's borderline noisy, but let's go with it for now.

I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:10:10 +02:00
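An example of the `for`-loop pattern the commit describes (illustrative; `bufio` stands in for the iterators used in the real code): the work happens in the loop header, the body is intentionally empty, and revive's empty-block rule gets a targeted suppression.

```go
package example

import "bufio"

// countTokens advances the scanner in the loop header; the empty body is
// intentional, so revive's "empty-block" warning is suppressed inline.
func countTokens(s *bufio.Scanner) (n int) {
	for ; s.Scan(); n++ { // nolint:revive
	}
	return n
}
```
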
Ben Ye fd3630b9a3 add ctx to QueryEngine interface
Signed-off-by: Ben Ye <benye@amazon.com>
2023-04-17 21:32:38 -07:00
Matthieu MOREL fb3eb21230 enable gocritic, unconvert and unused linters
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-13 19:20:22 +00:00
beorn7 551de0346f promql: Do not return nil slices to the pool
Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-13 19:25:24 +02:00
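A minimal sketch of the guard this commit adds (`sync.Pool` is used here purely for illustration): a nil slice must not go back into the pool, otherwise later `Get` calls hand out nils.

```go
package example

import "sync"

var slicePool = sync.Pool{
	New: func() any { return make([]float64, 0, 64) },
}

func putSlice(p []float64) {
	if p == nil {
		return // never return nil slices to the pool
	}
	slicePool.Put(p[:0]) // reset the length before handing it back
}
```
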
beorn7 c0879d64cf promql: Separate Point into FPoint and HPoint
In other words: Instead of having a “polymorphous” `Point` that can
either contain a float value or a histogram value, use an `FPoint` for
floats and an `HPoint` for histograms.

This seemingly small change has a _lot_ of repercussions throughout
the codebase.

The idea here is to avoid the increase in size of `Point` arrays that
happened after native histograms had been added.

The higher-level data structures (`Sample`, `Series`, etc.) are still
“polymorphous”. The same idea could be applied to them, but at each
step the trade-offs need to be evaluated.

The idea with this change is to do the minimum necessary to get back
to pre-histogram performance for functions that do not touch
histograms. Here are comparisons for the `changes` function. The test
data doesn't include histograms yet. Ideally, there would be no change
in the benchmark result at all.

First runtime v2.39 compared to directly prior to this commit:

```
name                                                  old time/op    new time/op    delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16            391µs ± 2%     542µs ± 1%  +38.58%  (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16           452µs ± 2%     617µs ± 2%  +36.48%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16         1.12ms ± 1%    1.36ms ± 2%  +21.58%  (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16        7.83ms ± 1%    8.94ms ± 1%  +14.21%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16           2.98ms ± 0%    3.30ms ± 1%  +10.67%  (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16          3.66ms ± 1%    4.10ms ± 1%  +11.82%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16         10.5ms ± 0%    11.8ms ± 1%  +12.50%  (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16        77.6ms ± 1%    87.4ms ± 1%  +12.63%  (p=0.000 n=9+9)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16       30.4ms ± 2%    32.8ms ± 1%   +8.01%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16      37.1ms ± 2%    40.6ms ± 2%   +9.64%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16      105ms ± 1%     117ms ± 1%  +11.69%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16     783ms ± 3%     876ms ± 1%  +11.83%  (p=0.000 n=9+10)
```

And then runtime v2.39 compared to after this commit:

```
name                                                  old time/op    new time/op    delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16            391µs ± 2%     547µs ± 1%  +39.84%  (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16           452µs ± 2%     616µs ± 2%  +36.15%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16         1.12ms ± 1%    1.26ms ± 1%  +12.20%  (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16        7.83ms ± 1%    7.95ms ± 1%   +1.59%  (p=0.000 n=10+8)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16           2.98ms ± 0%    3.38ms ± 2%  +13.49%  (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16          3.66ms ± 1%    4.02ms ± 1%   +9.80%  (p=0.000 n=10+9)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16         10.5ms ± 0%    10.8ms ± 1%   +3.08%  (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16        77.6ms ± 1%    78.1ms ± 1%   +0.58%  (p=0.035 n=9+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16       30.4ms ± 2%    33.5ms ± 4%  +10.18%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16      37.1ms ± 2%    40.0ms ± 1%   +7.98%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16      105ms ± 1%     107ms ± 1%   +1.92%  (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16     783ms ± 3%     775ms ± 1%   -1.02%  (p=0.019 n=9+9)
```

In summary, the runtime doesn't really improve with this change for
queries with just a few steps. For queries with many steps, this
commit essentially reinstates the old performance. This is good
because the many-step queries are the ones that matter most (longest
absolute runtime).

In terms of allocations, though, this commit doesn't make a dent at
all (numbers not shown). The reason is that most of the allocations
happen in the sampleRingIterator (in the storage package), which has
to be addressed in a separate commit.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-13 19:25:16 +02:00
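A sketch of the split described above (field and type names follow the commit's intent and the promql package, but treat them as illustrative): a float point stays small, while only histogram points carry a pointer.

```go
package example

import (
	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"
)

// FPoint holds a plain float sample; a slice of these stays compact.
type FPoint struct {
	T int64
	F float64
}

// HPoint holds a histogram sample; only these pay for the pointer field.
type HPoint struct {
	T int64
	H *histogram.FloatHistogram
}

// Higher-level types remain "polymorphous": a series can hold both kinds.
type Series struct {
	Metric     labels.Labels
	Floats     []FPoint
	Histograms []HPoint
}
```
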
Łukasz Mierzwa b6573353c1 Add query_samples_total metric
query_samples_total is a counter that tracks the total number of samples loaded by all queries.

The goal with this metric is to be able to see the amount of 'work' done by Prometheus to service queries.
At the moment we have metrics with the number of queries, plus more detailed metrics showing how much time each step of a query takes.
While those metrics do help, they don't show us the whole picture.
Queries that load more samples are (in general) more expensive than queries that load fewer samples.
This means that looking only at the number of queries doesn't tell us how much 'work' Prometheus received.
Adding a counter that tracks the total number of samples loaded allows us to see if there was a spike in the cost of queries, not just the number of them.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-04-12 14:05:06 +01:00
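Roughly what adding such a counter looks like with client_golang (a sketch; the real metric is registered inside the PromQL engine, and its exact name and registration live there):

```go
package example

import "github.com/prometheus/client_golang/prometheus"

var querySamplesTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "prometheus",
	Subsystem: "engine",
	Name:      "query_samples_total",
	Help:      "The total number of samples loaded by all queries.",
})

func init() {
	prometheus.MustRegister(querySamplesTotal)
}

// observeQuery is called once per finished query with the number of samples it loaded.
func observeQuery(samplesLoaded int64) {
	querySamplesTotal.Add(float64(samplesLoaded))
}
```
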
Ganesh Vernekar 5588cab8b2
Merge pull request #12173 from bboreham/builder-no-empty-labels
labels: simplify call to get Labels from Builder
2023-04-04 12:02:55 +05:30
Bryan Boreham 1bb6b8b309
Merge pull request #12190 from bboreham/faster-topk
promql: use faster heap method for topk/bottomk
2023-03-30 14:05:53 +01:00
Oleg Zaytsev 6e2905a4d4
Use zeropool.Pool to workaround SA6002 (#12189)
* Use zeropool.Pool to workaround SA6002

I built a tiny library called https://github.com/colega/zeropool to
work around the SA6002 staticcheck issue.

While searching for references to that SA6002 staticcheck issue on
GitHub, one of the first results was Prometheus itself, with quite a lot
of ignores of it.

This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available.

Also added a benchmark for HeadAppender Append/Commit when series
already exist, which is one of the most common cases IMO, as I didn't
find an existing one.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Improve BenchmarkHeadAppender with more cases

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* A little copying is better than a little dependency

https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Fix imports order

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Add license header

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Copyright should be on one of the first 3 lines

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Use require.Equal for testing

I don't depend on testify in my lib, but here we have it available.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Avoid flaky test

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Also use zeropool for pointsPool in engine.go

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

---------

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2023-03-29 20:34:34 +01:00
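A minimal usage sketch of the zeropool pool described above, following the library's README (the element type and sizes are arbitrary): `Get` and `Put` work on the slice value directly, so there is no interface boxing and no SA6002 report to suppress.

```go
package main

import (
	"fmt"

	"github.com/colega/zeropool"
)

func main() {
	// A typed pool of slices; no *[]int64 indirection is needed.
	pool := zeropool.New(func() []int64 { return make([]int64, 0, 1024) })

	buf := pool.Get()
	buf = append(buf, 42)
	fmt.Println(len(buf), cap(buf))

	pool.Put(buf[:0]) // reset the length before handing it back
}
```
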
Bryan Boreham f2fd85df82 promql: use faster heap method for topk/bottomk
Call `Fix()` instead of `Pop()` followed by `Push()`.

This is slightly faster.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-28 11:07:31 +00:00
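The standard-library pattern the commit switches to, shown on a toy min-heap: when a new value beats the current minimum kept for topk, overwrite the root and call `heap.Fix` once instead of `Pop` followed by `Push`.

```go
package main

import (
	"container/heap"
	"fmt"
)

// minHeap keeps the smallest retained value at the root, as topk does.
type minHeap []float64

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

func (h *minHeap) Push(x any) { *h = append(*h, x.(float64)) }
func (h *minHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

func main() {
	h := minHeap{3, 7, 5}
	heap.Init(&h)

	if v := 6.0; v > h[0] {
		h[0] = v        // replace the smallest kept value in place
		heap.Fix(&h, 0) // one sift instead of Pop() + Push()
	}
	fmt.Println(h) // root is the new minimum
}
```
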
Bryan Boreham b987afa7ef labels: simplify call to get Labels from Builder
It took a `Labels` whose memory could be re-used, but in practice
this hardly ever provided a benefit, especially after converting
`relabel.Process` to `relabel.ProcessBuilder`.

Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`
so the slice was reallocated multiple times by `append`.

Lastly, `Builder.Labels()` now estimates the final size based on the
labels added and deleted.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-22 17:05:20 +00:00
Bryan Boreham 1b0a29701b promql: optimise aggregation with no labels
For a query like 'sum (foo)', we can quickly skip to the empty labels that its result needs.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-23 13:33:14 +00:00
Bryan Boreham aafef011b7 Promql: reuse LabelBuilder in aggregations
We have a LabelBuilder in EvalNodeHelper; use it instead of creating a new one at every step.

Need to take some care that different uses of enh.lb do not overlap.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-23 13:21:29 +00:00
Bryan Boreham 2c382f5e24 promql: extract function to initialize LabelBuilder
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-23 13:21:22 +00:00
Bryan Boreham 56fefcd812 Update package promql for new labels.Labels type
We use `labels.Builder` to parse metrics, to avoid depending on the
internal implementation. This is not efficient, but the feature is only
used in tests. It wasn't efficient previously either - calling `Sort()`
after adding each label.

`createLabelsForAbsentFunction` also uses a Builder now, and gets
an extra `map` to replace the previous `Has()` usage.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Fix up promql to compile with changes to Labels
2022-12-19 15:22:09 +00:00
Bryan Boreham 3c7de69059 storage: allow re-use of iterators
Patterned after `Chunk.Iterator()`: pass the old iterator in so it
can be re-used to avoid allocating a new object.

(This commit does not do any re-use; it is just changing all the method
signatures so re-use is possible in later commits.)

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-15 18:32:45 +00:00
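A self-contained sketch of the signature pattern (the types here are made up; the real interfaces live in storage and tsdb/chunkenc): the caller passes its previous iterator back in, and the implementation reuses its memory when the concrete type matches.

```go
package example

// Iterator and the list types below are simplified stand-ins.
type Iterator interface {
	Next() bool
	At() (int64, float64)
}

type sample struct {
	t int64
	v float64
}

type listIterator struct {
	samples []sample
	i       int
}

func (it *listIterator) Next() bool           { it.i++; return it.i < len(it.samples) }
func (it *listIterator) At() (int64, float64) { s := it.samples[it.i]; return s.t, s.v }

func (it *listIterator) reset(samples []sample) {
	it.samples = samples
	it.i = -1
}

type listSeries struct{ samples []sample }

// Iterator accepts the caller's previous iterator: if the concrete type
// matches, it is reset and reused instead of allocating a new object.
func (s listSeries) Iterator(it Iterator) Iterator {
	if li, ok := it.(*listIterator); ok {
		li.reset(s.samples)
		return li
	}
	return &listIterator{samples: s.samples, i: -1}
}
```

On the caller side a single iterator variable is threaded through the loop, e.g. `it = series.Iterator(it)` for each series in turn.
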
Alan Protasio 8460807475 fix blank lines
Signed-off-by: Alan Protasio <approtas@amazon.com>
2022-12-14 13:24:10 -08:00
Alan Protasio f8f4ac14a8 Finishing evalSpanTimer always before return
Signed-off-by: Alan Protasio <approtas@amazon.com>
2022-12-14 13:10:35 -08:00
Jesus Vazquez e934d0f011 Merge 'main' into sparsehistogram
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
2022-10-05 22:14:49 +02:00
Giedrius Statkevičius a1d6ba59ac
promql: pass down subquery interval (#11163)
If we are populating series for a subquery, then set the interval
parameter accordingly so that downstream users can use that
information.

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
2022-09-30 20:13:38 +05:30
Bryan Boreham 3330d85ba8
Replace sort.Strings and sort.Ints with faster slices.Sort (#11318)
Use new experimental package `golang.org/x/exp/slices`.

slices.Sort works on values that are directly comparable, like ints,
so it avoids the overhead of an interface call to `.Less()`.

Left tests unchanged, because they don't need the speed, and keeping
them as-is doubles as a cross-check that slices.Sort gives the same answer.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-30 20:03:56 +05:30
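The replacement pattern, for reference:

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	names := []string{"job", "instance", "env"}

	// Previously: sort.Strings(names), which sorts through sort.Interface
	// and a .Less() interface call per comparison.
	slices.Sort(names)

	fmt.Println(names) // [env instance job]
}
```
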
Ganesh Vernekar 71489d0e3d
Fix count() for histograms and add test case
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 19:57:29 +05:30
Bryan Boreham 8b863c42dd
Optimise relabeling by re-using memory (#11147)
* model/relabel: Add benchmark

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* model/relabel: re-use Builder across relabels

Saves memory allocations.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* labels.Builder: allow re-use of result slice

This reduces memory allocations where the caller has a suitable slice available.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* model/relabel: re-use source values slice

To reduce memory allocations.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Unwind one change causing test failures

Restore original behaviour in PopulateLabels, where we must not overwrite the input set.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* relabel: simplify values optimisation

Use a stack-based array for up to 16 source labels, which will be the
vast majority of cases.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* lint

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-08-19 15:27:52 +05:30
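The "stack-based array" step above boils down to this trick, sketched with a hypothetical helper (the real code lives in model/relabel): back the values slice with a fixed 16-element array so the common case allocates nothing.

```go
package example

import (
	"regexp"
	"strings"
)

// matchesRule is a hypothetical helper showing the allocation trick only.
func matchesRule(get func(name string) string, sourceLabels []string, sep string, re *regexp.Regexp) bool {
	var buf [16]string // stack-allocated backing array for the common case
	values := buf[:0]
	if len(sourceLabels) > cap(values) {
		values = make([]string, 0, len(sourceLabels)) // rare fallback to the heap
	}
	for _, name := range sourceLabels {
		values = append(values, get(name))
	}
	return re.MatchString(strings.Join(values, sep))
}
```
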
beorn7 c9fd3c235d Merge branch 'main' into sparsehistogram 2022-08-10 17:54:37 +02:00
Vilius Pranckaitis 4660656312
Allow setting custom lookback delta for instant queries (#9946)
* Allow setting custom lookback delta for instant queries

Signed-off-by: Vilius Pranckaitis <vpranckaitis@gmail.com>
2022-08-02 11:15:39 +02:00
Łukasz Mierzwa 54a3c3ba3f
Print query that caused a panic (#10995)
We print the stacktrace of a panic when a query causes one, but there's no
information about the query itself, which makes it harder to debug and
reproduce the issue.
This adds the 'expr' string to the logged panic.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-07-14 15:04:15 +05:30
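A sketch of the shape of the change (the query type, logger, and field names are illustrative): the recover handler logs the expression alongside the stack trace so the panic can be reproduced.

```go
package example

import (
	"context"
	"fmt"
	"runtime"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

// query stands in for the engine's query type; fields are illustrative.
type query struct {
	q      string // the PromQL expression text
	logger log.Logger
}

func (q *query) exec(ctx context.Context) (err error) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 64<<10)
			buf = buf[:runtime.Stack(buf, false)]
			// Log the expression next to the stack trace.
			level.Error(q.logger).Log("msg", "runtime panic in query execution",
				"expr", q.q, "err", r, "stacktrace", string(buf))
			err = fmt.Errorf("unexpected error: %v", r)
		}
	}()
	// ... evaluate the query ...
	return nil
}
```
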