promql: Extend testing framework to support native histograms
This includes both the internal testing framework and the rules unit test feature of promtool.
This also adds a bunch of basic tests. Many of the code-level tests can now be converted to tests within the framework, and more tests can be added easily.
---------
Signed-off-by: Harold Dost <h.dost@criteo.com>
Signed-off-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
Signed-off-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Harold Dost <h.dost@criteo.com>
Co-authored-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
PR #12557 introduced the possibility of parsing multiple exemplars per
native histogram. It did so by requiring the `Exemplar` method of the
parser to be called repeatedly until it returns false. However, the
protobuf parser code wasn't correctly updated for the old case of a
single exemplar for a classic bucket (if actually parsed as a classic
bucket) and a single exemplar on a counter. In those cases, the method
would return `true` forever, yielding the same exemplar again and
again, leading to an endless loop.
With this fix, the state is now tracked and the single exemplar is
only returned once.
Signed-off-by: beorn7 <beorn@grafana.com>
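A minimal sketch of the contract and the state-tracking fix described above; the type and field names here are hypothetical, not the actual parser code:

```go
import "github.com/prometheus/prometheus/model/exemplar"

// Exemplar must be called repeatedly until it returns false, so a parser
// holding a single exemplar has to remember that it already returned it.
type singleExemplarParser struct {
	ex       exemplar.Exemplar
	returned bool
}

func (p *singleExemplarParser) Exemplar(out *exemplar.Exemplar) bool {
	if p.returned {
		return false // the one exemplar was already handed out; end the caller's loop
	}
	*out = p.ex
	p.returned = true
	return true
}
```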
Native histograms without observations and with a zero threshold of
zero look the same as classic histograms in the protobuf exposition
format. According to
https://github.com/prometheus/client_golang/issues/1127, the idea is
to add a no-op span to those histograms to mark them as native
histograms. This commit enables Prometheus to detect that no-op span
and adds a doc comment to the proto spec describing the behavior.
Signed-off-by: beorn7 <beorn@grafana.com>
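A hedged sketch of the detection idea, expressed on Prometheus's model types (the real check happens on the protobuf message): an otherwise-empty histogram is marked native by a single no-op span.

```go
import "github.com/prometheus/prometheus/model/histogram"

// hasNoOpSpanMarker reports whether spans consist of exactly one no-op span
// (offset 0, length 0), the marker that tags an observation-free histogram
// with a zero threshold of zero as native.
func hasNoOpSpanMarker(spans []histogram.Span) bool {
	return len(spans) == 1 && spans[0].Offset == 0 && spans[0].Length == 0
}
```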
The bounds weren't actually used so far, so there was no bug in
practice. But it's obviously confusing if the bounds returned by a
floatBucketIterator with a target schema different from the original
schema are wrong.
Signed-off-by: beorn7 <beorn@grafana.com>
If a float histogram has a zero bucket with a threshold of zero _and_
an empty zero bucket, it wasn't identified as a native histogram
because the `isNativeHistogram` helper function only looked at integer
buckets.
Signed-off-by: beorn7 <beorn@grafana.com>
Native histograms without a zero threshold aren't federated properly.
This adds a test to prove the specific failure mode, which is that
histograms with a zero threshold of zero are federated as classic
histograms.
The underlying reason is that the protobuf parser identifies a native
histogram by detecting a zero bucket or by detecting integer buckets.
Therefore, a float histogram with a zero threshold of zero and an
unpopulated zero bucket falls through the cracks (no integer buckets,
no zero bucket).
This commit also adds a test case for the latter.
Signed-off-by: beorn7 <beorn@grafana.com>
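A hedged sketch of the broadened heuristic (getter names approximate the generated protobuf code): a histogram message counts as native if it has any native bucket data, a configured or populated zero bucket, or any span, including the no-op marker span of an empty native histogram.

```go
import dto "github.com/prometheus/prometheus/prompb/io/prometheus/client"

func isNativeHistogram(h *dto.Histogram) bool {
	return len(h.GetPositiveDelta()) > 0 || // integer buckets
		len(h.GetNegativeDelta()) > 0 ||
		len(h.GetPositiveCount()) > 0 || // float buckets
		len(h.GetNegativeCount()) > 0 ||
		h.GetZeroThreshold() > 0 ||
		h.GetZeroCount() > 0 ||
		h.GetZeroCountFloat() > 0 ||
		len(h.GetPositiveSpan()) > 0 || // includes the no-op marker span
		len(h.GetNegativeSpan()) > 0
}
```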
This has become a requirement for native histograms, as a single
histogram sample commonly has many buckets, so that providing many
exemplars makes sense.
Since OM text doesn't support native histograms yet, the test had to
be expanded to also support protobuf test cases.
Signed-off-by: beorn7 <beorn@grafana.com>
The problem was the following:
When trying to parse native histograms and classic histograms in
parallel, the parser would first parse the histogram proto messages as
a native histogram and then parse the same message again, but now as a
classic histogram. Afterwards, it would forget that it was dealing
with a metric family that contains native histograms and would parse
the rest of the metric family as classic histograms only. The fix is
to check again after being done with a classic histogram.
Signed-off-by: beorn7 <beorn@grafana.com>
Inline one call to `decodeString`, and skip decoding the value string
until we find a match for the name.
Do a quick check on the first character in each string,
and exit early if we've gone past - labels are sorted by name.
Also improve tests and benchmark:
* labels: test Get with varying lengths - it's not typical for Prometheus labels to all be the same length.
* extend benchmark with label not found
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
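A hedged sketch of `Get` over the packed string representation of the `stringlabels` build (described in a commit further below), deferring the value decode until the name matches:

```go
type Labels struct{ data string } // the stringlabels representation

// decodeSize reads a varint-encoded length from data starting at index i,
// returning the length and the index just past it.
func decodeSize(data string, i int) (size, next int) {
	for shift := uint(0); ; shift += 7 {
		b := data[i]
		i++
		size |= int(b&0x7F) << shift
		if b < 0x80 {
			return size, i
		}
	}
}

// Get returns the value for name, or "" if the label is not present.
func (ls Labels) Get(name string) string {
	if name == "" {
		return "" // blank label names are never stored
	}
	for i := 0; i < len(ls.data); {
		size, j := decodeSize(ls.data, i) // length and start of the name
		switch {
		case ls.data[j] == name[0]:
			if ls.data[j:j+size] == name { // full compare only on a first-byte match
				vsize, vj := decodeSize(ls.data, j+size)
				return ls.data[vj : vj+vsize]
			}
		case ls.data[j] > name[0]:
			return "" // past the sorted position, so the name is absent
		}
		i = j + size // skip the name...
		size, i = decodeSize(ls.data, i)
		i += size // ...and the value
	}
	return ""
}
```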
Instead of unpacking every individual string, we skip to the point
where there is a difference, going 8 bytes at a time where possible.
Add benchmark for Compare; extend tests too.
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
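A hedged sketch of the approach; the real code reads the chunks as `uint64` via `unsafe`, which this sketch avoids:

```go
// compareStrings finds the first difference by comparing 8-byte chunks,
// then falls back to single bytes for the tail.
func compareStrings(a, b string) int {
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	i := 0
	for ; i+8 <= n; i += 8 {
		if a[i:i+8] != b[i:i+8] {
			break // the first difference is inside this chunk
		}
	}
	for ; i < n; i++ {
		if a[i] != b[i] {
			if a[i] < b[i] {
				return -1
			}
			return +1
		}
	}
	return len(a) - len(b) // one string is a prefix of the other
}
```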
This is a minor cosmetic change, but my IDE (and I guess many of them)
nests `labels_string.go` under `labels.go` because it assumes it's the
file generated by the `stringer` tool, which follows that naming
pattern.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
labels: don't compile regex matcher if we know it's a literal
Signed-off-by: Michael Hoffmann <mhoffm@posteo.de>
Co-authored-by: Sharad <sharadgaur@gmail.com>
Handle more arithmetic operators and aggregators for native histograms
This includes operators for multiplication (formerly known as scaling), division, and subtraction, plus aggregation for average and the avg_over_time function.
Stdvar and stddev will (for now) properly ignore histograms (rather than counting them while adding a 0 for them).
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
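A conceptual sketch of what scalar multiplication does to a float histogram (mirroring the `Mul` method; division is the same with the reciprocal): every count-like component and the sum scale, while the bucket layout stays unchanged.

```go
import "github.com/prometheus/prometheus/model/histogram"

// scale multiplies all counts and the sum by factor; schema, spans, and the
// zero threshold are untouched.
func scale(h *histogram.FloatHistogram, factor float64) *histogram.FloatHistogram {
	h.ZeroCount *= factor
	h.Count *= factor
	h.Sum *= factor
	for i := range h.PositiveBuckets {
		h.PositiveBuckets[i] *= factor
	}
	for i := range h.NegativeBuckets {
		h.NegativeBuckets[i] *= factor
	}
	return h
}
```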
So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both representations will be ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.
This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.
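For illustration, a minimal scrape configuration sketch for such a transition period (job name and target are placeholders):

```yaml
scrape_configs:
  - job_name: app                     # placeholder
    scrape_classic_histograms: true   # ingest foo_sum/_count/_bucket alongside native foo
    static_configs:
      - targets: ["app.example.com:8080"]   # placeholder
```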
Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:
1. Series created from classic _gauge_ histograms didn't get the
_sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.
Signed-off-by: beorn7 <beorn@grafana.com>
* labels: respect Set after Del in Builder
The implementations are not symmetric between `Set()` and `Del()`, so
we must be careful. Add tests for this, both in labels and in relabel
where the issue was reported.
Also make the slice implementation consistent re `slices.Contains`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
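A hedged sketch of the symmetry fix on `Set` (types reduced to essentials; the real method also treats an empty value as a delete):

```go
import "slices" // std in recent Go; golang.org/x/exp/slices at the time

type Label struct{ Name, Value string }

// Builder fields as in labels.Builder: add holds pending sets, del pending deletes.
type Builder struct {
	add []Label
	del []string
}

// Set must cancel an earlier Del of the same name, just as Del cancels an
// earlier Set.
func (b *Builder) Set(n, v string) *Builder {
	if i := slices.Index(b.del, n); i >= 0 {
		b.del = append(b.del[:i], b.del[i+1:]...) // un-delete the name
	}
	for i, l := range b.add {
		if l.Name == n {
			b.add[i].Value = v
			return b
		}
	}
	b.add = append(b.add, Label{Name: n, Value: v})
	return b
}
```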
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.
The exceptions that I have found in our codebase are just these two:
* The `if else` is followed by an additional statement before the next
condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
used. In this case, using `switch` would require tagging the `for`
loop, which probably tips the balance.
Why are `switch` statements more readable?
For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.
I'm sure the aforementioned wise coders can list even more reasons.
In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.
Signed-off-by: beorn7 <beorn@grafana.com>
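For illustration, the same three-way condition both ways:

```go
func sign(x int) int {
	// else-if chain: each condition after the first hides behind "} else if".
	if x < 0 {
		return -1
	} else if x > 0 {
		return 1
	} else {
		return 0
	}
}

func signSwitch(x int) int {
	// switch: every condition sits at the same indentation and alignment.
	switch {
	case x < 0:
		return -1
	case x > 0:
		return 1
	default:
		return 0
	}
}
```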
We haven't updated golangci-lint in our CI yet, but this commit prepares
for that.
There are a lot of new warnings, mostly because the "revive" linter got
updated. I agree with most of them, in particular those about not naming
unused function parameters (although naming them is justified in some
cases for documentation purposes, while things like mocks are a good
example where not naming the parameter is clearer).
I'm pretty upset that the "empty block" warning now includes `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810
I have disabled "revive" altogether in files where empty blocks are
used excessively, and I have made the effort to add individual
`// nolint:revive` comments where empty blocks are used just once or
twice. It's borderline noisy, but let's go with it for now.
I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.
Signed-off-by: beorn7 <beorn@grafana.com>
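For illustration, the kind of loop the linter now flags (the iterator name is illustrative):

```go
// drain advances the iterator to its end; all the work happens in the loop
// head, so the body is intentionally empty.
func drain(it interface{ Next() bool }) {
	for it.Next() { // nolint:revive // empty block is intentional
	}
}
```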
Add a fast path for the common case that a string is shorter than 128
bytes, to skip a shift and the loop.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
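A sketch of the shape of the change, assuming the varint length decoder used by the `stringlabels` representation:

```go
// decodeSize reads a varint-encoded length from data starting at index.
func decodeSize(data string, index int) (int, int) {
	b := data[index]
	index++
	if b < 0x80 {
		return int(b), index // fast path: lengths 0..127 fit in one byte
	}
	size := int(b & 0x7F) // slow path: continue the varint decoding
	for shift := uint(7); ; shift += 7 {
		b := data[index]
		index++
		size |= int(b&0x7F) << shift
		if b < 0x80 {
			break
		}
	}
	return size, index
}
```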
This is a method used by some downstream projects; it was created to
optimize the implementation in `labels_string.go` but we should have one
for both implementations so the same code works with either.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Deleted labels are remembered, even if they were not in `base` or were
removed from `add`, so `base+add-del` could go negative.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Go spends some time initializing all the elements of these arrays to
zero, so reduce the size from 1024 to 128. This is still much bigger
than we ever expect for a set of labels.
(If someone does have more than 128 labels it will still work, but via
heap allocation.)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
It took a `Labels` where the memory could be re-used, but in practice
this hardly ever brought a benefit, especially after converting
`relabel.Process` to `relabel.ProcessBuilder`.
Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`
so the slice was reallocated multiple times by `append`.
Lastly, `Builder.Labels()` now estimates the final size based on the
labels added and deleted.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Although we had a different slice, the underlying memory was the same so
any changes meant we could skip some values.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Save work converting between Builder and Labels.
Also expose ProcessBuilder, so callers can supply a Builder.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This lets relabelling work on a `Builder` rather than converting to and
from `Labels` on every rule.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The difference is modest, but we've used `slices.Sort` in lots of other
places so why not here.
name old time/op new time/op delta
Builder 1.04µs ± 3% 0.95µs ± 3% -8.27% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
Builder 312B ± 0% 288B ± 0% -7.69% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Builder 2.00 ± 0% 1.00 ± 0% -50.00% (p=0.008 n=5+5)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This makes the buffer the correct size for the common case that labels
have only been added. It will be too large for the case that labels are
changed, but the current buffer resize logic in `appendLabelTo` doubles
the buffer, so a small over-estimate is better.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This change removes restrictions to allow adding exemplars
to all time series. It also contains some improvements in test values
so that it is easier to track what is tested.
The advantage of doing this is slightly less error-prone tests:
"yy" is not really descriptive, but "counter-test" gives people
a better idea of what is tested, so it is harder to make mistakes.
Closes gh-11982
Signed-off-by: Jonatan Ivanov <jonatan.ivanov@gmail.com>
Previous code was effectively doing BigEndian.Uint64, so call that and save time.
An md5.Sum result is always 16 bytes. The first 8 are not used in the result, just as before.
Signed-off-by: Renning Bruns <ren@renmail.net>
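A hedged sketch of the resulting shape (the function name is hypothetical):

```go
import (
	"crypto/md5"
	"encoding/binary"
)

// hashOf reads the last eight bytes of the md5 sum as one big-endian
// uint64, replacing a manual byte-by-byte shift loop that computed the
// same value.
func hashOf(data []byte) uint64 {
	sum := md5.Sum(data) // [16]byte
	return binary.BigEndian.Uint64(sum[8:]) // first 8 bytes unused, as before
}
```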
These benchmarks were testing things related to what Prometheus does, but not testing actual Prometheus code.
Moved the label-copying benchmark into the labels package.
This commit adds an alternate implementation for `labels.Labels`, behind
a build tag `stringlabels`.
Instead of storing label names and values as individual strings, they
are all concatenated into one string in this format:
[len][name0][len][value0][len][name1][len][value1]...
The lengths are varint encoded so usually a single byte.
The previous `[]string` had 24 bytes of overhead for the slice and 16
for each label name and value; this one has 16 bytes overhead plus 1
for each name and value.
In `ScratchBuilder.Overwrite` and `Labels.Hash` we use an unsafe
conversion from string to byte slice. `Overwrite` is explicitly unsafe,
but for `Hash` this is a pure performance hack.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
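A hedged sketch of how one name/value pair lands in the packed form (the real code uses its own varint helper; `binary.AppendUvarint` stands in here):

```go
import "encoding/binary"

// appendLabel writes a varint length followed by the bytes, for the name
// and then the value.
func appendLabel(buf []byte, name, value string) []byte {
	buf = binary.AppendUvarint(buf, uint64(len(name)))
	buf = append(buf, name...)
	buf = binary.AppendUvarint(buf, uint64(len(value)))
	buf = append(buf, value...)
	return buf
}

// appendLabel(nil, "job", "api") yields {0x03, 'j', 'o', 'b', 0x03, 'a', 'p', 'i'}.
```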
Parsing errors in the Prometheus HTTP format parser are very hard to
investigate, since they only approximately indicate what is going wrong
in the parser and don't provide any information about the incorrect
input. As such, it is hard to tell what is wrong in the format exposed
by the application.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
This commit adds a new 'keep_firing_for' field to Prometheus alerting
rules. The 'keep_firing_for' field specifies the minimum amount of time
that an alert should remain firing, even if the expression does not
return any results.
This feature was discussed at a previous dev summit, and it was
determined that a feature like this would be useful in order to allow
the expression time to stabilize and prevent confusing resolved messages
from being propagated through Alertmanager.
This approach is simpler than having two PromQL queries, as was
sometimes discussed, and it should be easy to implement.
This commit does not include tests for the 'keep_firing_for' field. This
is intentional, as the purpose of this commit is to gather comments on
the proposed design before implementing tests. Once the design of the
'keep_firing_for' field has been finalized, a follow-up commit will be
submitted with tests.
See https://github.com/prometheus/prometheus/issues/11570
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
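For illustration, a minimal rule sketch using the new field (group, alert name, and expression are placeholders):

```yaml
groups:
  - name: example            # placeholder
    rules:
      - alert: HighErrorRate # placeholder
        expr: rate(http_requests_failed_total[5m]) > 0.1
        for: 5m
        keep_firing_for: 10m # stay firing for 10m after expr stops returning results
```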
If a (float or integer) histogram is a gauge histogram, set the
CounterResetHint accordingly. (The default value is fine for the
normal counter histograms.)
Signed-off-by: beorn7 <beorn@grafana.com>
With this commit, the parser no longer sees a gauge histogram (whether
native or conventional) as an unexpected metric type. It ingests it
normally, it even sets the `GaugeHistogram` type in the metadata (as
it has already done for a conventional gauge histogram scraped using
OpenMetrics), but it otherwise treats it as a normal counter-like
histogram.
Once #11783 is merged, though, it should be very easy to utilize the
type information.
Signed-off-by: beorn7 <beorn@grafana.com>
So far, the parser hasn't validated that the type is valid in the
`Next()` call. Later, in the `Series()` call, however, it assumes that
we will only see valid types and therefore panics with `encountered
unexpected metric type, this is a bug`.
This commit fixes said bug by adding validation to the `Next()` call.
Signed-off-by: beorn7 <beorn@grafana.com>
In some cases, the Prometheus HTTP format parser was not returning the
right token in the error output which made debugging impossible.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. TSDB supports appending, querying, and WAL replay for this new type of histogram.
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Instead of passing in a `ScratchBuilder` and `Labels`, just pass the
builder and the caller can extract labels from it. In many cases the
caller didn't use the Labels value anyway.
Now in `Labels.ScratchBuilder` we need a slightly different API: one
to assign what will be the result, instead of overwriting some other
`Labels`. This is safer and easier to reason about.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
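A hedged sketch of the new calling convention (the label content is illustrative):

```go
import "github.com/prometheus/prometheus/model/labels"

// The callee fills the builder; the caller extracts Labels only if it
// actually needs them.
func fillSeriesLabels(b *labels.ScratchBuilder) {
	b.Reset()
	b.Add("__name__", "http_requests_total") // example content
	b.Add("job", "api")
	b.Sort()
}

func caller() labels.Labels {
	var b labels.ScratchBuilder
	fillSeriesLabels(&b)
	return b.Labels() // extract only at the point of use
}
```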
Without changing the definition of `labels.Labels`, add methods which
enable code using it to work without knowledge of the internals.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
For performance reasons we may use a different implementation of Hash()
in future, so note this so callers can be warned.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The bucket receiving math.MaxFloat64 observations now has
math.MaxFloat64 as upper bound, while the bucket after it (the last
possible bucket) has +Inf.
This also adds a test for getBound and moves the getBound code to
generic.go (where it should have been in the first place).
Signed-off-by: beorn7 <beorn@grafana.com>
* histogram: Simplify iterators
We don't really need currLower and currUpper and can calculate them
when needed (as already done for the floatBucketIterator). The calculation
is cheap, while keeping those extra variables around costs RAM
(potentially a lot with many iterators).
* histogram: Convert Bucket/FloatBucket to one generic type
* histogram: Move some bucket iterator code into generic base iterator
* histogram: Remove cumulative iterator for FloatHistogram
We added it in the past for completeness (Histogram has one), but it
has never been used. Plus, even the cumulative iterator for Histogram
is only there for test reasons.
We can always add it back, perhaps even using generics.
Signed-off-by: beorn7 <beorn@grafana.com>
The code was multiplying bytes by the number of operations, which
resulted in absurdly high throughput numbers.
Also, in `BenchmarkParse()`, don't run the `expfmt` case twice.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
And use the new method to call to compact Histograms during
parsing. This happens for both `Histogram` and `FloatHistogram`. In
this way, if targets decide to optimize the exposition size by merging
spans with empty buckets in between, we still get a normalized
result. It will also normalize away any valid but weird
representations like empty spans, spans with offset zero, and empty
buckets at the start or end of a span.
The implementation seemed easy at first as it just turns the
`compactBuckets` helper into a generic function (which now got its own
file). However, the integer Histograms have delta buckets instead of
absolute buckets, which had to be treated specially in the generic
`compactBuckets` function. To make sure it works, I have added plenty
of explicit tests for `Histogram` in addition to the `FloatHistogram`
tests.
I have also updated the doc comment for the `Compact` method.
Based on the insights now expressed in the doc comment, compacting
with a maxEmptyBuckets > 0 is rarely useful. Therefore, this commit
also sets the value to 0 in the two cases we were using 3 so far. We
might still want to reconsider, so I don't want to remove the
maxEmptyBuckets parameter right now.
Signed-off-by: beorn7 <beorn@grafana.com>
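A hedged usage sketch of the method on a valid-but-weird layout:

```go
import "github.com/prometheus/prometheus/model/histogram"

func example() {
	// Two adjacent spans, the second with offset zero: valid but weird.
	h := &histogram.FloatHistogram{
		Schema:          0,
		Count:           6,
		Sum:             10,
		PositiveSpans:   []histogram.Span{{Offset: 0, Length: 2}, {Offset: 0, Length: 1}},
		PositiveBuckets: []float64{1, 2, 3},
	}
	h = h.Compact(0) // merge without touching bucket values
	// h.PositiveSpans is now []histogram.Span{{Offset: 0, Length: 3}}.
}
```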
Replacing code which assumes the internal structure of `Labels`.
Add a convenience function `EmptyLabels()` which is more efficient than
calling `New()`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Update go to 1.19, set min version to 1.18
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Update golangci-lint
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* model/relabel: Add benchmark
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use Builder across relabels
Saves memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* labels.Builder: allow re-use of result slice
This reduces memory allocations where the caller has a suitable slice available.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use source values slice
To reduce memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Unwind one change causing test failures
Restore original behaviour in PopulateLabels, where we must not overwrite the input set.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* relabel: simplify values optimisation
Use a stack-based array for up to 16 source labels, which will be the
vast majority of cases.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
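A hedged sketch of the pattern (not the actual relabel code):

```go
import (
	"strings"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/model/relabel"
)

// sourceValue gathers the source label values; the backing array lives on
// the stack, so up to 16 values allocate nothing.
func sourceValue(cfg *relabel.Config, lset labels.Labels) string {
	var buf [16]string
	vals := buf[:0] // len 0, cap 16; append spills to the heap only past 16
	for _, name := range cfg.SourceLabels {
		vals = append(vals, lset.Get(string(name)))
	}
	return strings.Join(vals, cfg.Separator)
}
```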
* lint
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Append metadata to the WAL
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Remove extra whitespace; Reword some docstrings and comments
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Use RLock() for hasNewMetadata check
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Use single byte for metric type in RefMetadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Update proposed WAL format for single-byte type metadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Implement MetadataAppender interface for the Agent
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Address first round of review comments
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Amend description of metadata in wal.md
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Correct key used to retrieve metadata from cache
When we're setting metadata entries in the scrapeCache, we're using the
p.Help(), p.Unit(), p.Type() helpers, which retrieve the series name and
use it as the cache key. When checking for cache entries though, we used
p.Series() as the key, which included the metric name _with_ its labels.
That meant that we were never actually hitting the cache. We're fixing
this by utilizing the __name__ internal label for correctly getting the
cache entries after they've been set by setHelp(), setType() or
setUnit().
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
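A hedged sketch of the corrected lookup; `GetMetadata` stands in for the scrape cache's accessor:

```go
import "github.com/prometheus/prometheus/model/labels"

// metadataFor keys the lookup by the bare metric name (the __name__ label),
// matching the key used when the entry was stored.
func metadataFor(cache interface{ GetMetadata(string) (any, bool) }, lset labels.Labels) (any, bool) {
	return cache.GetMetadata(lset.Get(labels.MetricName)) // labels.MetricName == "__name__"
}
```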
* Put feature behind a feature flag
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix AppendMetadata docstring
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Reorder WAL format document
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Change error message of AppendMetadata; Fix access of s.meta in AppendMetadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Reuse temporary buffer in Metadata encoder
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Only keep latest metadata for each refID during checkpointing
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix test that's referencing decoding metadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Avoid creating metadata block if no new metadata are present
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Add tests for corrupt metadata block and relevant record type
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix CR comments
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Extract logic about changing metadata in an anonymous function
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Implement new proposed WAL format and amend relevant tests
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Use 'const' for metadata field names
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Apply metadata to head memSeries in Commit, not in AppendMetadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Add docstring and rename extracted helper in scrape.go
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Add tests for tsdb-related cases
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix linter issues vol1
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix linter issues vol2
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix Windows test by closing WAL reader files
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Use switch instead of two if statements in metadata decoding
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix review comments around TestMetadata* tests
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Add code for replaying WAL; test correctness of in-memory data after a replay
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Remove scrape-loop related code from PR
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Address first round of comments
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Simplify tests by sorting slices before comparison
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix test to use separate transactions
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Empty out buffer and record slices after encoding latest metadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix linting issue
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Update calculation for DroppedMetadata metric
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Rename MetadataAppender interface and AppendMetadata method to MetadataUpdater/UpdateMetadata
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Reuse buffer when encoding latest metadata for each series
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Fix review comments; Check all returned error values using two helpers
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Simplify use of helpers
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Satisfy linter
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
I'd like to unwrap errors returned from rulefmt, but both the Error and WrappedError types are missing an Unwrap() method.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>