For instance, `require.NoError` will print the unexpected error; we don't
need to include it in the message.
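For illustration (openDB is a hypothetical stand-in for any function under test):

    import (
        "testing"

        "github.com/stretchr/testify/require"
    )

    func openDB(dir string) (*struct{}, error) { return &struct{}{}, nil } // stub for illustration

    func TestOpenDB(t *testing.T) {
        db, err := openDB(t.TempDir())
        require.NoError(t, err) // on failure, testify prints err for us
        require.NotNil(t, db)
    }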
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
'defer' runs at the end of the whole function; we should close each
segment file as soon as we have finished reading it.
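A minimal sketch of the pattern, with a hypothetical readSegment helper:

    // Close each segment as soon as we have finished reading it, instead of
    // deferring all closes to function return and holding every file open.
    func readSegments(files []string) error {
        for _, name := range files {
            f, err := os.Open(name)
            if err != nil {
                return err
            }
            err = readSegment(f) // hypothetical per-segment reader
            f.Close()            // close now; a defer would only run at function end
            if err != nil {
                return err
            }
        }
        return nil
    }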
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This PR is a reference implementation of the proposal described in #10420.
In addition to what is described in #10420, in this PR I've introduced labels.StableHash(). The idea is to offer a hashing function which doesn't change over time, and which is used by query sharding in order to get stable behaviour over time. The implementation of labels.StableHash() is the hashing function used by Prometheus before stringlabels, and it is what Grafana Mimir uses for query sharding (because Mimir was built before stringlabels was a thing).
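A simplified sketch of such a stable function, assuming xxhash (github.com/cespare/xxhash/v2) and 0xff separators as in the pre-stringlabels implementation (the real code also has a fallback for very large label sets):

    func StableHash(ls labels.Labels) uint64 {
        b := make([]byte, 0, 1024)
        ls.Range(func(l labels.Label) {
            b = append(b, l.Name...)
            b = append(b, 0xff) // separator byte that cannot appear in valid UTF-8
            b = append(b, l.Value...)
            b = append(b, 0xff)
        })
        return xxhash.Sum64(b)
    }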
Follow-up work
As mentioned in #10420, if this PR is accepted I'm also open to uploading another fundamental piece used by Grafana Mimir query sharding to accelerate query execution: an optional, configurable and fast in-memory cache for the series hashes.
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Mutex is 8 bytes; RWMutex is 24 bytes and much more complicated. Since
`RLock` is only used in two places, `UpdateMetadata` and `Delete`,
neither of which is a hotspot, we should use the cheaper one.
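The sizes are easy to verify on a 64-bit platform:

    package main

    import (
        "fmt"
        "sync"
        "unsafe"
    )

    func main() {
        fmt.Println(unsafe.Sizeof(sync.Mutex{}))   // 8
        fmt.Println(unsafe.Sizeof(sync.RWMutex{})) // 24
    }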
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Lifting an optimisation from Agent code: `seriesHashmap.del` can use
the unique series reference, and doesn't need to check Labels.
Also streamline the logic for deleting from `unique` and `conflicts` maps,
and add some comments to help the next person.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Add a test case to TestHeadLabelValuesWithMatchers, while fixing a couple
of typos in other test cases. Also enclose some implicit sub-tests in a
`t.Run` call to make them explicit sub-tests.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Optimize histogram iterators
Histogram iterators allocate new objects in the AtHistogram and
AtFloatHistogram methods, which makes calculating rates over long
ranges expensive.
In #13215 we allowed an existing object to be reused
when converting an integer histogram to a float histogram. This commit follows
the same idea and allows injecting an existing object in the AtHistogram and
AtFloatHistogram methods. When the injected value is nil, iterators allocate
new histograms, otherwise they populate and return the injected object.
The commit also adds a CopyTo method to Histogram and FloatHistogram which
is used in the BufferedIterator to overwrite items in the ring instead of making
new copies.
Note that a specialized HPoint pool is needed for all of this to work
(`matrixSelectorHPool`).
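A sketch of the resulting reuse pattern, assuming the two-return-value form of AtFloatHistogram described above:

    // Reuse one FloatHistogram across the whole iteration; pass nil on the
    // first call to let the iterator allocate it.
    func sumCounts(it chunkenc.Iterator) float64 {
        var (
            fh  *histogram.FloatHistogram
            sum float64
        )
        for it.Next() == chunkenc.ValFloatHistogram {
            _, fh = it.AtFloatHistogram(fh) // fh is repopulated, not reallocated
            sum += fh.Count
        }
        return sum
    }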
---------
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Don't calculate postings beforehand: we may not need them. If all
matchers are for the requested label, we can just filter its values.
Also, if there are no values at all, there's no need to run any
logic.
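A rough sketch of the fast path (helper name hypothetical):

    // filterValues applies when every matcher targets the requested label:
    // the values can be filtered directly and no postings are needed.
    func filterValues(name string, vals []string, matchers []*labels.Matcher) ([]string, bool) {
        for _, m := range matchers {
            if m.Name != name {
                return nil, false // mixed matchers: caller falls back to postings
            }
        }
        filtered := vals[:0]
        for _, v := range vals {
            ok := true
            for _, m := range matchers {
                if !m.Matches(v) {
                    ok = false
                    break
                }
            }
            if ok {
                filtered = append(filtered, v)
            }
        }
        return filtered, true
    }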
Also add more labelValuesWithMatchers benchmarks
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
This functionality is needed in downstream projects because they have a
separate component that does compaction.
Upstreaming 7c8e9a2a76/tsdb/compact.go (L323-L325).
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
* tsdb/{index,compact}: allow using custom postings encoding format
We would like to experiment with a different postings encoding format in
Thanos, so in this change I am proposing to add another argument to
`NewWriter` which would allow users to change the format if needed.
Also, wire the leveled compactor so that it would be possible to change
the format there too.
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
* tsdb/compact: use a struct for leveled compactor options
As discussed on Slack, let's use a struct for the options in leveled
compactor.
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
* tsdb: make changes after Bryan's review
- Make changes less intrusive
- Turn the postings encoder type into a function
- Add NewWriterWithEncoder()
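Roughly the shape described by the bullet points above; names are taken from this description and the exact signatures may differ:

    // A custom encoder has the same shape as the default raw encoding and may
    // write any byte format it likes for a postings list.
    func myEncoder(e *encoding.Encbuf, offsets []uint32) error {
        return index.EncodePostingsRaw(e, offsets) // delegate to the default here
    }

    func newIndexWriter(ctx context.Context, fn string) (*index.Writer, error) {
        return index.NewWriterWithEncoder(ctx, fn, myEncoder)
    }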
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
---------
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
This reverts commit 2ddb3596ef.
Various tests are failing in CI after this change; reverting to free up
other work.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
They are used in multiple repos, so common is a better place for them.
Several packages now don't depend on `model/textparse`, e.g.
`storage/remote`.
Also remove `metadata` struct from `api.go`, since it was identical to
a struct in the `metadata` package.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In https://github.com/prometheus/prometheus/pull/13276 we started reusing float histogram objects to reduce allocations in PromQL.
That PR introduced a bug where histogram pointers get copied to the beginning of the histograms slice,
but are still kept at the end of the slice. When a new histogram is read into the last element,
it can overwrite a previous element because the pointer is the same.
This commit fixes the issue by moving outdated points to the end of the slice
so that we don't end up with duplicate pointers in the same buffer. In other words,
the slice gets rotated so that old objects can get reused.
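A sketch of such a rotation (generic helper, illustrative only):

    // rotate moves the first n (outdated) elements to the end of the slice,
    // keeping every pointer in the buffer exactly once so the objects can be
    // reused safely.
    func rotate[T any](s []*T, n int) {
        tmp := make([]*T, n)
        copy(tmp, s[:n])
        copy(s, s[n:])
        copy(s[len(s)-n:], tmp)
    }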
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Digging around the TSDB code, I've found that this flag is unused, so
let's remove it.
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
* Append created timestamps.
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Log when created timestamps are ignored
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Proposed changes to Append CT PR.
Changes:
* Changed textparse Parser interface for consistency and robustness.
* Changed CT interface to be more explicit and handle validation.
* Simplified test, change scrapeManager to allow testability.
* Added TODOs.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Updates.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Addressed comments.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Refactor head_appender test
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Fix linter issues
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Use model.Sample in head appender test
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
---------
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
Signed-off-by: bwplotka <bwplotka@gmail.com>
Co-authored-by: bwplotka <bwplotka@gmail.com>
It's faster.
Note the change to the test: instead of requiring that the data structure is
identical to `EmptyPostings()`, check that calling `Next()` returns
false, which implies it was empty.
Also the check for context cancellation during initialization was
removed. Initialization should be a small portion of the work done
during merge, so it's not worth plumbing a context argument through.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The 'ToFloat' method on integer histograms currently allocates new memory
each time it is called.
This commit adds an optional *FloatHistogram parameter that can be used
to reuse span and bucket slices. It is up to the caller to make sure the
input float histogram is not used anymore after the call.
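The reuse pattern, assuming the updated ToFloat(*FloatHistogram) *FloatHistogram signature:

    // Convert a stream of integer histograms, reusing one FloatHistogram
    // (and its span/bucket slices) for every conversion.
    func toFloats(hs []*histogram.Histogram, process func(*histogram.FloatHistogram)) {
        var fh *histogram.FloatHistogram
        for _, h := range hs {
            fh = h.ToFloat(fh) // nil on the first call allocates
            process(fh)        // must not retain fh, per the contract above
        }
    }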
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
* Fix histogram append errors
We should check the counterReset condition rather than okToAppend, because if
there's a counter reset, okToAppend is always set to false.
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
When no samples are returned in a chunk because all the samples have
been deleted, the chunk iterator stops without iterating through any
remaining chunks.
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Fix tsdb.stripeSeries.gc so it handles conflicts properly
tsdb.stripeSeries.gc needs to prune seriesHashmap.conflicts first,
otherwise seriesHashmap replaces the unique field with the first among
the conflicts. Also add regression test.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* TestStripeSeries_gc: Support stringlabels, don't use internals
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
The ChunkReader interface's Chunk() has been changed to ChunkOrIterable().
This is a precursor to OOO native histogram support: with OOO native histograms, the chunks.Meta passed to Chunk() can result in multiple chunks being returned rather than just a single chunk (e.g. if oooMergedChunk has a counter reset in the middle).
To support this, ChunkOrIterable() requires either a single chunk or an iterable to be returned. If an iterable is returned, the caller has the responsibility of converting the samples from the iterable into possibly multiple chunks. The OOOHeadChunkReader now returns an iterable rather than a chunk to prepare for the native histograms case. Also, as a beneficial side effect, oooMergedChunk and boundedChunk have been simplified, as they only need to implement the Iterable interface now, not the full Chunk interface.
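The changed interface looks roughly like this (signature reconstructed from the description above):

    type ChunkReader interface {
        // ChunkOrIterable returns either a single chunk or an iterable;
        // exactly one of the two is non-nil. If an iterable is returned, the
        // caller turns its samples into (possibly multiple) chunks itself.
        ChunkOrIterable(meta chunks.Meta) (chunkenc.Chunk, chunkenc.Iterable, error)
        Close() error
    }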
---------
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
* Fix tsdb.seriesHashmap.set by making receiver a pointer
The method tsdb.seriesHashmap.set currently doesn't set the conflicts
field properly, due to the receiver being a non-pointer. Fix by turning
the receiver into a pointer, and add a corresponding regression test.
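The bug class in miniature (illustrative type, not the real seriesHashmap):

    type hashmap struct {
        conflicts map[uint64][]string
    }

    // Broken: value receiver, so the lazily created map is assigned to a copy
    // of the struct and lost when the method returns.
    func (m hashmap) setBroken(hash uint64, v string) {
        if m.conflicts == nil {
            m.conflicts = map[uint64][]string{}
        }
        m.conflicts[hash] = append(m.conflicts[hash], v)
    }

    // Fixed: pointer receiver, so the assignment is visible to the caller.
    func (m *hashmap) set(hash uint64, v string) {
        if m.conflicts == nil {
            m.conflicts = map[uint64][]string{}
        }
        m.conflicts[hash] = append(m.conflicts[hash], v)
    }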
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* Add failing test.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Don't run OOO head garbage collection while reads are running.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add further test cases for different order of operations.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure all queriers are closed if `DB.blockChunkQuerierForRange()` fails.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure all queriers are closed if `DB.Querier()` fails.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Invert error handling in `DB.Querier()` and `DB.blockChunkQuerierForRange()` to make it clearer
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure that queries that touch OOO data can't block OOO head garbage collection forever.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Address PR feedback: fix parameter name in comment
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Signed-off-by: Charles Korn <charleskorn@users.noreply.github.com>
* Address PR feedback: use `lastGarbageCollectedMmapRef`
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Address PR feedback: ensure pending reads are cleaned up if creating an OOO querier fails
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
Signed-off-by: Charles Korn <charleskorn@users.noreply.github.com>
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
* Make head block ULIDs descriptive
As far as I understand, these ULIDs aren't persisted anywhere, so it
should be safe to change them.
When debugging an issue, seeing a ULID like
`2ZBXFNYVVFDXFPGSB1CHFNYQTZ` or `33DXR7JA39CHDKMQ9C40H6YVVF` isn't very
helpful, so I propose to make them readable in their ULID string
version.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Set a different ULID for RangeHead
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Fix and improve ingesting exemplars for native histograms.
See the code comment for a detailed explanation of the algorithm.
Note that this changes the current behavior for all kinds of samples slightly: we now allow exemplars with the same timestamp as during the last scrape if the value or the labels have changed.
Also note that we no longer ingest exemplars without timestamps for native histograms.
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: zenador <zenador@users.noreply.github.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
* Labels: reduce allocations when creating from TSDB
When reading the WAL, by passing references into the buffer we can avoid
copying strings under `-tags stringlabels`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Fix panic during tsdb Commit
Fixes the following
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x19deb45]
goroutine 651118930 [running]:
github.com/prometheus/prometheus/tsdb.(*headAppender).Commit(0xc19100f7c0)
/drone/src/vendor/github.com/prometheus/prometheus/tsdb/head_append.go:855 +0x245
github.com/prometheus/prometheus/tsdb.dbAppender.Commit({{0x35bd6f0?, 0xc19100f7c0?}, 0xc000fa4c00?})
/drone/src/vendor/github.com/prometheus/prometheus/tsdb/db.go:1159 +0x2f
We theorize that the panic happened due to the series referenced by the
exemplar being removed between AppendExemplar and Commit due to being idle.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
When samples are committed in the head, they are also written to the WAL.
The order of WAL records should be sample then exemplar, but this was
not the case for native histogram samples. This PR fixes that.
The problem with the wrong order is that remote write reads the WAL and
sends the recorded timeseries in WAL order, which means exemplars
arrived before histogram samples. If the receiving side is Prometheus
TSDB and the series did not exist before, then the exemplar does not
currently create the series, which means the exemplar is rejected and lost.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Instead of a map of slices of `*memSeries`, ready for any of them to
hold series where hash values collide, split into a map of `*memSeries`
and a map of slices which is usually empty, since hash collisions are
a one-in-a-billion thing.
The `del` method gets more complicated, to maintain the invariant that
a series is only in one of the two maps.
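The shape of the change (memSeries is the surrounding package's type; comments illustrative):

    type seriesHashmap struct {
        unique    map[uint64]*memSeries   // the common case: one series per hash
        conflicts map[uint64][]*memSeries // almost always empty; collisions only
    }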
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Extract the middle of the loop into a function, so it will be
easier to modify the `seriesHashmap` data structure.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Drop context argument from tsdb/index.Symbols.Lookup since lookup
should be fast and the context checking is a performance hit.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* Remove NewPossibleNonCounterInfo until it can be made more efficient, and avoid creating empty annotations as much as possible
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
When Prometheus restarts it creates every series read in from the WAL,
but many of those series will be finished, and never receive any more
samples. By deferring allocation of the txRing slice to when it is first
needed, we save 32 bytes per stale series.
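A sketch of the deferred allocation; the fields are modelled loosely on tsdb/isolation.go and the ring-growth logic is elided:

    type txRing struct {
        txIDs     []uint64 // nil until the series actually gets an append
        txIDFirst int      // position of the oldest appendID
        txIDCount int      // number of ids in the ring
    }

    func (txr *txRing) add(appendID uint64) {
        if txr.txIDs == nil {
            txr.txIDs = make([]uint64, 4) // allocated on first use, not at series creation
        }
        // ... insert appendID, growing the ring when it is full ...
    }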
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
When reading the WAL this method is called with buffers from a pool, on
multiple goroutines. Pre-allocating sufficient size avoids slow growth
and many reallocations in `append`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
On a 32-bit architecture the size of int is 32 bits. Thus converting from
int64 or uint64 can overflow it and flip the sign.
Try it for yourself in the playground:
    package main

    import "fmt"

    func main() {
        x := int64(0x1F0000001)
        y := int64(1)
        z := int32(x - y) // numerically this is 0x1F0000000
        fmt.Printf("%v\n", z)
    }
Prints -268435456 as if x was smaller.
Follow-up to #12650
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Additionally wrap WBL replay error
Although WBL replay is already wrapped with errLoadWbl,
there are other errors that can happen during a WBL replay.
We should not try to repair WAL in those cases.
This commit additionally wraps the final error in Head.Init again
with errLoadWbl so that WBL replay errors can be identified properly.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvzpg@gmail.com>
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Reverts change from https://github.com/prometheus/prometheus/pull/12906
The benchmarks show that it's slower when intersecting, which is a
common usage for ListPostings (when intersecting matchers from Head)
(old is before #12906, new is #12906):
│ old │ new │
│ sec/op │ sec/op vs base │
Intersect/LongPostings1-16 20.54µ ± 1% 21.11µ ± 1% +2.76% (p=0.000 n=20)
Intersect/LongPostings2-16 51.03m ± 1% 52.40m ± 2% +2.69% (p=0.000 n=20)
Intersect/ManyPostings-16 194.2m ± 3% 332.1m ± 1% +71.00% (p=0.000 n=20)
geomean 5.882m 7.161m +21.74%
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
It's implicit, but should be explicit: it is invalid to call At() after
a failed call to Next() or Seek().
Following up on https://github.com/prometheus/prometheus/pull/12906
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
The Next() call of ListPostings() was updating two values, while we can
just update the position. This is up to 30% faster for a high number of
Postings.
goos: linux
goarch: amd64
pkg: github.com/prometheus/prometheus/tsdb/index
cpu: 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz
│ old │ new │
│ sec/op │ sec/op vs base │
ListPostings/count=100-16 819.2n ± 0% 732.6n ± 0% -10.58% (p=0.000 n=20)
ListPostings/count=1000-16 2.685µ ± 1% 2.017µ ± 0% -24.88% (p=0.000 n=20)
ListPostings/count=10000-16 21.43µ ± 1% 14.81µ ± 0% -30.91% (p=0.000 n=20)
ListPostings/count=100000-16 209.4µ ± 1% 143.3µ ± 0% -31.55% (p=0.000 n=20)
ListPostings/count=1000000-16 2.086m ± 1% 1.436m ± 1% -31.18% (p=0.000 n=20)
geomean 29.02µ 21.41µ -26.22%
We're talking about microseconds here, but they just keep adding.
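A hedged sketch of the idea: advance a single index in Next() and read through it in At():

    type listPostings struct {
        list []storage.SeriesRef
        pos  int // 0 = before the first element; At() reads list[pos-1]
    }

    func (p *listPostings) Next() bool {
        if p.pos < len(p.list) {
            p.pos++ // one store on the hot path instead of two
            return true
        }
        return false
    }

    func (p *listPostings) At() storage.SeriesRef {
        return p.list[p.pos-1]
    }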
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
This avoids situations where metrics are scraped before the data they
are trying to look at is initialized.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Currently, the two goroutines race against each other and it's possible that the main test goroutine finishes way before appendSeries has had a chance to run at all.
I tested this change by breaking the code that X fixed and running the race test 100 times. Without the additional time.Sleep the test failed 11 times; with the sleep it failed 65 out of the 100 runs, which is still not ideal, but it's a step forward.
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
The test was introduced in # but was changed during the code review and not rerun with the faulty code since then.
Closes #
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
Before cutting a new XOR chunk in case the chunk goes over the size
limit, check that the timestamp is in order and not equal to or older
than the latest sample in the old chunk.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
TestHeadDetectsDuplcateSampleAtSizeLimit tests a regression where a
duplicate sample is appended to the head right when the head chunk is
at the size limit. The test adds all samples as duplicates, thus
expecting that the result has exactly half of the samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Return annotations (warnings and infos) from PromQL queries
This generalizes the warnings we have already used before (but only for problems with remote read) as "annotations".
Annotations can be warnings or infos (the latter could be false positives). We do not treat them differently in the API for now and return them all as "warnings". It would be easy to distinguish them and return infos separately, should that appear useful in the future.
The new annotations are then used to create a lot of warnings or infos during PromQL evaluations. Partly these are things we have wanted for a long time (e.g. informing the user that they have applied `rate` to a metric that doesn't look like a counter), but the new native histograms have created even more need for such annotations (e.g. if a query tries to aggregate float numbers with histograms).
The annotations added here are not yet complete. A prominent example would be a warning about a range too short for a rate calculation. But such a warning is trickier to create with good fidelity, so we will tackle it later.
Another TODO is to take annotations into account when evaluating recording rules.
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Case a) An empty span is at the beginning of the spans.
Case b) Two consecutive empty spans with positive offsets.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Benchmark WBL
Extended the WAL benchmark with WBL parts too: added basic cases for
OOO handling, where a percentage of series have a percentage of samples
set as OOO ones.
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Fix handling of explicit counter reset header in histograms.
Explicit counter resets were being ignored, and there was no unit test
coverage.
Add a test case for the first sample in a chunk, and one for a
non-first sample in a chunk.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
---------
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Otherwise we have a highly unusual situation of over 100 chunks
in the headChunks list of each series, which heavily skews
performance.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
promql: Extend testing framework to support native histograms
This includes both the internal testing framework and the rules unit test feature of promtool.
This also adds a bunch of basic tests. Many of the code level tests can now be converted to tests within the framework, and more tests can be added easily.
---------
Signed-off-by: Harold Dost <h.dost@criteo.com>
Signed-off-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
Signed-off-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Harold Dost <h.dost@criteo.com>
Co-authored-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
`rand.Read` has been deprecated since Go 1.20; `crypto/rand.Read` is
more appropriate.
Ref: https://tip.golang.org/doc/go1.20
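For example:

    package main

    import (
        "crypto/rand" // not math/rand, whose Read is deprecated since Go 1.20
        "fmt"
    )

    func main() {
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", buf)
    }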
Signed-off-by: Michal Biesek <michalbiesek@gmail.com>
Add a chunk size limit in bytes
This creates a hard cap for XOR chunks of 1024 bytes.
The limit for histogram chunks is also 1024 bytes, but it is a soft limit as a histogram has a dynamic size, and even a single one could be larger than 1024 bytes.
This also avoids cutting new histogram chunks if the existing chunk has fewer than 10 histograms. That way, we accept "jumbo chunks" in order to have at least 10 histograms in a chunk, allowing compression to kick in.
Signed-off-by: Justin Lei <justin.lei@grafana.com>
So far, `ValidateHistogram` would not detect if the count did not
include the count in the zero bucket. This commit fixes the problem
and updates all the tests that have been undetected offenders so far.
Note that this problem would only ever create false negatives, so we
never falsely rejected a histogram because of it.
On the other hand, `ValidateFloatHistogram` has been too strict,
requiring the count to be at least as large as the sum of the counts in
all the buckets. Float precision issues can create false positives here
(see products of PromQL evaluations); it's actually quite hard to put an
upper limit on the floating point imprecision, as users could produce
the weirdest expressions, maxing out float precision problems.
Therefore, this commit simply removes that particular check from
`ValidateFloatHistogram`.
Signed-off-by: beorn7 <beorn@grafana.com>
Similar to the cleanup of WAL files on startup, clean up temporary
chunk_snapshot dirs. This prevents storage space leaks due to
snapshots terminated on shutdown.
Signed-off-by: SuperQ <superq@gmail.com>
When a particular SeriesLifecycleCallback tries to optimize and run
closer to the Head, keeping track of the HeadSeriesRef instead of the
labelsets, it's impossible to handle the PostDeletion callback properly
as there's no way to know which series refs were deleted from the head.
This changes the callback to provide the series refs alongside the
labelsets, so the implementation can choose what to do.
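The resulting callback shape, reconstructed from this description (PreCreation/PostCreation as they already exist in tsdb):

    type SeriesLifecycleCallback interface {
        PreCreation(labels.Labels) error
        PostCreation(labels.Labels)
        // PostDeletion now reports refs alongside labelsets, so implementations
        // tracking HeadSeriesRef can prune their own state.
        PostDeletion(map[chunks.HeadSeriesRef]labels.Labels)
    }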
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
The most common case is to have a nil error when appending series, so
let's check that first instead of checking the 3 error types first.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Currently memSeries holds a single head chunk in-memory and a slice of mmapped chunks.
When append() is called on memSeries, it might decide that a new headChunk is needed for the given append() call.
If that happens, it will first mmap the existing head chunk, and only after that will it create a new empty headChunk and
continue appending our sample to it.
Since appending samples takes a write lock on memSeries, no other read or write can happen until the append is completed.
When we have an append() that must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself uses a lock as it needs to be serialised, which means that the more chunks there are to mmap, the longer each chunk might wait
to be mmapped.
If there are enough chunks that require mmapping, some memSeries will be locked for long enough that it will start affecting
queries and scrapes.
Queries might time out, since by default they have a 2 minute timeout set.
Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
or calls using rate() and such, since the time range requested in the query might have too few samples to calculate anything.
To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, yet-to-be-mmapped chunks are linked to it.
Mmapping is done on a schedule by iterating all memSeries one by one. Thanks to this we control when mmapping is done, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.
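A sketch of the linked list (names illustrative):

    // headChunk points at the open chunk that receives new samples; older
    // chunks that still await mmapping hang off prev until the scheduled
    // mmapper gets to them.
    type memChunk struct {
        chunk            chunkenc.Chunk
        minTime, maxTime int64
        prev             *memChunk // nil once everything older has been mmapped
    }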
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
If a new series is introduced in a storage.Appender instance, that
series should be written to the WAL once the storage.Appender is closed,
even on Rollback.
Previously, new series would only be written to the WAL when calling
Commit. However, because the series is stored in memory regardless,
subsequent calls to Commit may write samples to the WAL which reference
a series ID that was never written.
Related to #11589. It's likely that this fix also resolves this issue,
but we need more testing from users to see if the problem persists after
this fix; there may be more cases where samples get written to the WAL
in Prometheus Agent mode without the corresponding series record.
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
This has become a requirement for native histograms, as a single
histogram sample commonly has many buckets, so that providing many
exemplars makes sense.
Since OM text doesn't support native histograms yet, the test had to
be expanded to also support protobuf test cases.
Signed-off-by: beorn7 <beorn@grafana.com>
Snappy remains as the default compression but there is now a flag to switch
the compression algorithm.
Signed-off-by: Justin Lei <justin.lei@grafana.com>
* Apply matchers when fetching label values
Signed-off-by: Patrick Oyarzun <patrick.oyarzun@grafana.com>
* Avoid extra copying of label values
Signed-off-by: Patrick Oyarzun <patrick.oyarzun@grafana.com>
---------
Signed-off-by: Patrick Oyarzun <patrick.oyarzun@grafana.com>
Use AppendableGauge to detect a corrupt chunk with gauge histograms.
Detect if the first sample is a gauge but the chunk is not set up to
contain gauge histograms.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
Problem:
LabelValueStats provides a list of label names and the memory used by
their values, in bytes.
It was calculated by adding up the length of all values for a given
label name, but internally Prometheus stores the name and the value
independently for each series.
Solution:
The MemPostings struct maintains a value-to-seriesRef map, which is used
to get the number of series containing each label value.
Using that, LabelValueStats is calculated as seriesCnt * len(value) for
each value of a label name.
* WIP implement WAL watcher reading via notifications over a channel from
the TSDB code
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Notify via head appenders Commit (finished all WAL logging) rather than
on each WAL Log call
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix misspelled Notify plus add a metric for dropped Write notifications
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Update tests to handle new notification pattern
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* this test maybe needs more time on windows?
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* does this test need more time on windows as well?
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* read timeout is already a time.Duration
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* remove mistakenly committed benchmark data files
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* address some review feedback
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix missed changes from previous commit
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix issues from wrapper function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* try fixing race condition in test by allowing tests to overwrite the
read ticker timeout instead of calling the Notify function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix linting
Signed-off-by: Callum Styan <callumstyan@gmail.com>
---------
Signed-off-by: Callum Styan <callumstyan@gmail.com>
The code did not handle spans with 0 length properly.
Spans with length zero are now skipped in the comparison.
The span index check is not done against length-1, since length is a
uint32: subtracting 1 from zero yields 2^32-1, not -1.
Fixes and unit tests for both integer and float histograms added.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
This changes usage of path to path/filepath, allowing
filepath.Base to properly return the last path element on systems where
`/` is not the standard path separator.
This resolves an issue on Windows where intermediate folders containing
a `.` were incorrectly considered to be a part of the checkpoint name.
Related to grafana/agent#3826.
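A small demonstration of the difference:

    package main

    import (
        "fmt"
        "path"
        "path/filepath"
    )

    func main() {
        p := `D:\data\wal.dir\checkpoint.00000001`
        fmt.Println(path.Base(p))     // the whole string: package path only knows '/'
        fmt.Println(filepath.Base(p)) // "checkpoint.00000001" on Windows
    }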
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
`head.deleted` holds the WAL segment in use at the time each series was
removed from the head. At the end of `truncateWAL()` we will delete
all segments up to `last`, so we can drop any series that were last seen
in a segment at or before that point.
(same change in Prometheus Agent too)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.
The exceptions that I have found in our codebase are just these two:
* The `if else` is followed by an additional statement before the next
condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require labeling the `for`
  loop, which probably tips the balance.
Why are `switch` statements more readable?
For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.
I'm sure the aforementioned wise coders can list even more reasons.
In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.
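The pattern in question, before and after:

    // Before: every condition after the first hides behind "} else if".
    func signBefore(x int) string {
        if x < 0 {
            return "negative"
        } else if x == 0 {
            return "zero"
        } else {
            return "positive"
        }
    }

    // After: all conditions share the same alignment.
    func sign(x int) string {
        switch {
        case x < 0:
            return "negative"
        case x == 0:
            return "zero"
        default:
            return "positive"
        }
    }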
Signed-off-by: beorn7 <beorn@grafana.com>
We haven't updated golangci-lint in our CI yet, but this commit prepares
for that.
There are a lot of new warnings, and it is mostly because the "revive"
linter got updated. I agree with most of the new warnings, mostly
around not naming unused function parameters (although it is justified
in some cases for documentation purposes – while things like mocks are
a good example where not naming the parameter is clearer).
I'm pretty upset that the "empty block" warning now includes `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810 I have
disabled "revive" altogether in files where empty blocks are used
excessively, and I have made the effort to add individual
`// nolint:revive` where empty blocks are used just once or twice.
It's borderline noisy, but let's go with it for now.
I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.
Signed-off-by: beorn7 <beorn@grafana.com>
In the past, every sample value was a float, so it was fine to call a
variable holding such a float "value" or "sample". With native
histograms, a sample might have a histogram value. And a histogram
value is still a value. Calling a float value just "value" or "sample"
or "V" is therefore misleading. Over the last few commits, I already
renamed many variables, but this cleans up a few more places where the
changes are more invasive.
Note that we do not attempt renaming in the JSON APIs or in the
protobufs. That would be quite a disruption. However, internally, we
can name variables as we want, and we should go with the option of
avoiding misunderstandings.
Signed-off-by: beorn7 <beorn@grafana.com>
Previously, we had one “polymorphous” `sample` type in the `storage`
package. This commit breaks it up into `fSample`, `hSample`, and
`fhSample`, each still implementing the `tsdbutil.Sample` interface.
This reduces allocations in `sampleRing.Add` but inflicts the penalty
of the interface wrapper, which makes things worse in total.
This commit therefore just demonstrates the step taken. The next
commit will tackle the interface overhead problem.
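A sketch of the split (field shapes assumed), each still implementing tsdbutil.Sample:

    type fSample struct {
        t int64
        f float64
    }

    type hSample struct {
        t int64
        h *histogram.Histogram
    }

    type fhSample struct {
        t  int64
        fh *histogram.FloatHistogram
    }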
Signed-off-by: beorn7 <beorn@grafana.com>
It seems that readOnlyDB was still open, which blocked the temp dir
cleanup.
Also changed the copy dir to be another TempDir instead of manually
creating one.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use zeropool.Pool to workaround SA6002
I built a tiny library called https://github.com/colega/zeropool to
work around the SA6002 staticcheck issue.
When searching GitHub for references to that SA6002 staticcheck issue,
one of the first results was Prometheus itself, with quite a lot of
ignores of it.
This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available.
Also added a benchmark for HeadAppender Append/Commit when series
already exist, which is one of the most usual cases IMO, as I didn't
find an existing one.
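Usage looks roughly like this, assuming the zeropool API as published:

    var bufPool = zeropool.New(func() []byte { return nil })

    func handle() {
        buf := bufPool.Get()       // a plain []byte: no *[]byte indirection, no SA6002
        defer bufPool.Put(buf[:0]) // the slice value itself goes back to the pool
        // ... use buf ...
    }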
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Improve BenchmarkHeadAppender with more cases
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* A little copying is better than a little dependency
https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix imports order
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add license header
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Copyright should be on one of the first 3 lines
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use require.Equal for testing
I don't depend on testify in my lib, but here we have it available.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Avoid flaky test
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Also use zeropool for pointsPool in engine.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
We need to make sure that `tsdb_errors.NewMulti` handles errors.Is()
calls properly, like it's done in grafana/dskit.
We also need to check `errors.Is(err, context.Canceled)` rather than
`err == context.Canceled`.
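For example:

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    func main() {
        err := fmt.Errorf("querying: %w", context.Canceled)
        fmt.Println(err == context.Canceled)          // false: err wraps the sentinel
        fmt.Println(errors.Is(err, context.Canceled)) // true: Is unwraps it
    }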
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Test that blocks are not marked as "compaction failed" during shutdown.
This shouldn't happen but this test currently fails.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
It took a `Labels` where the memory could be re-used, but in practice
this hardly ever helped, especially after converting `relabel.Process`
to `relabel.ProcessBuilder`.
Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`,
so the slice was reallocated multiple times by `append`.
Lastly, `Builder.Labels()` now estimates the final size from the labels
added and deleted.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Adds an affirmative log message for successful WAL repair
Signed-off-by: Vernon Miller <vernon.miller@grafana.com>
Signed-off-by: Vernon Miller <96601789+aldernero@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Without this fix, if snapshots were enabled and the wbl went missing
between restarts, TSDB did not recognize that there are ooo
mmap chunks on disk, and we could not query them until those chunks
were compacted into blocks.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
If the snapshot was enabled with some ooo mmap chunks on disk,
and wbl was removed between restarts, then we should still be able
to query the ooo mmap chunks after a restart. This test shows that
we are not able to query those ooo mmap chunks after a restart
under this situation.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
M-map chunks replayed on startup are discarded if there
was no WAL and no snapshot loaded, because there are no
series created in the Head that they can map to. So only
load m-map chunks from disk if there is either a snapshot
loaded or there is a WAL on disk.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Adds `WALReplayConcurrency` as an option on tsdb `Options` and `HeadOptions`.
If it is not set or set <=0, then `GOMAXPROCS` is used, which matches the previous behaviour.
Signed-off-by: Yuri Nikolic <durica.nikolic@grafana.com>
These benchmarks are all testing things related to what Prometheus does,
so they perhaps have some historical interest, but we should not retain them
in the main repo.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
`safeChunk` is only obtained from the `headChunkReader.Chunk` call where
the chunk is already fetched and stored with the `safeChunk`. So, when
getting the iterator for the `safeChunk`, we don't need to get the chunk again.
Also removed a couple of unnecessary fields from `safeChunk` as a part of this.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Export single i-th test histogram generation functions
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Do not set counter reset hint for non-gauge histograms individually
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Apply suggestions from code review
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
---------
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
This is a bit more conservative than we could be. As long as a chunk
isn't the first in a block, we can be pretty sure that the previous
chunk won't disappear. However, the incremental gain of returning
NotCounterReset in these cases is probably very small and might not be
worth the code complications.
With this, we now also pay attention to an explicitly set counter
reset during ingestion. While the case doesn't show up in practice
yet, there could be scenarios where the metric source knows there was
a counter reset even if it might not be visible from the values in the
histogram. It is also useful for testing.
Signed-off-by: beorn7 <beorn@grafana.com>
- Adjust doc comments to go1.19 style.
- Break down some overly long lines.
- Minor doc comment tweaks and fixes.
- Some renaming.
Some rationales for the last point:
I have renamed “interjections” into “inserts”, mostly because it is
shorter, and the word shows up a lot by now (and the concept is
cryptic enough to not obfuscate it even more with abbreviations).
I have also tried to find more descriptive naming for the “compare
spans” functions.
Signed-off-by: beorn7 <beorn@grafana.com>
This is an optimization of the existing append in OOOChunk.
What we've been doing so far is find the place inside the out-of-order
slice where the new sample should go, place it there, and move any
samples to the right if necessary. This is OK but requires a binary
search every time the slice is non-empty.
The optimization is opinionated and suggests that although out-of-order
samples can be out-of-order amongst themselves, they'll probably be in
order, so we can optimistically append at the end and, if that fails,
do the binary search.
OOOChunks are capped to 30 samples by default, so this is a small
optimization, but everything adds up, especially if you handle many
active timeseries with out-of-order samples.
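A sketch of the optimistic insert (sample type illustrative):

    import "sort"

    type sample struct {
        t int64
        v float64
    }

    func insert(samples []sample, s sample) []sample {
        // Optimistic fast path: most OOO samples are in order amongst themselves.
        if n := len(samples); n == 0 || samples[n-1].t < s.t {
            return append(samples, s)
        }
        // Slow path: binary search, then shift right to make room.
        i := sort.Search(len(samples), func(i int) bool { return samples[i].t >= s.t })
        samples = append(samples, sample{})
        copy(samples[i+1:], samples[i:])
        samples[i] = s
        return samples
    }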
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
* adapt code.go and write_handler.go to support float histograms
* adapt watcher.go to support float histograms
* wip adapt queue_manager.go to support float histograms
* address comments for metrics in queue_manager.go
* set test cases for queue manager
* use same counts for histograms and float histograms
* refactor createHistograms tests
* fix float histograms ref in watcher_test.go
* address PR comments
Signed-off-by: Marc Tuduri <marctc@protonmail.com>
This is to check if a gauge histogram can be appended to the given chunk.
If not, it tells what changes to make to the chunk and the histogram
if possible.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
In the head and in v1 postings on disk, it makes no difference whether
postings are sorted. Only for v2 does the code step through in order.
So, move the sorting to where it is required, and thus skip it entirely
in the head.
Label values in on-disk blocks are already sorted, but `slices.Sort` is
very fast on already-sorted data so we don't bother checking.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This allocates memory for all the returned values, which skews the
result. We aren't trying to benchmark `ExpandPostings`, so just step
through all the values to consume them, without storing them.
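For example, a consuming helper:

    // Step through the postings without storing them, so iteration itself is
    // what the benchmark measures.
    func consume(b *testing.B, p index.Postings) {
        for p.Next() {
            _ = p.At()
        }
        if err := p.Err(); err != nil {
            b.Fatal(err)
        }
    }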
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Previously all the postings constructed were consumed on the first
iteration, so subsequent iterations did no work.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. TSDB now supports appending, querying, and WAL replay for this new type of histogram.
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Instead of passing in a `ScratchBuilder` and `Labels`, just pass the
builder and the caller can extract labels from it. In many cases the
caller didn't use the Labels value anyway.
Now in `labels.ScratchBuilder` we need a slightly different API: one
to assign what will be the result, instead of overwriting some other
`Labels`. This is safer and easier to reason about.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Implement decoding via labels.ScratchBuilder, which we retain and re-use
to reduce memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This necessitates a change to the `tsdb.IndexReader` interface:
`index.Reader` is used from multiple goroutines concurrently, so we
can't have state in it.
We do retain a `ScratchBuilder` in `blockBaseSeriesSet` which is
iterator-like.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Use simpler utility function to create Labels objects, making fewer
assumptions about the data structure.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Re-use previous memory if it is already of the correct type.
In `NewListSeries` we hoist the conversion to an interface value out
so it only allocates once.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Re-use previous memory if it is already of the correct type.
Also turn two levels of function closure into a single object that
holds the required data.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Patterned after `Chunk.Iterator()`: pass the old iterator in so it
can be re-used to avoid allocating a new object.
(This commit does not do any re-use; it is just changing all the method
signatures so re-use is possible in later commits.)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This adds negative buckets and access of float histograms to
TestHistogramChunkSameBuckets and TestHistogramChunkBucketChanges.
It also exercises a specific pattern of reusing an iterator (one where
no access has happened).
This exposes two bugs (where entries for positive buckets were used
where the corresponding entries for negative buckets should have been
used). One was fixed in #11627 (not merged), which triggered the work
in this commit.
This commit fixes both issues, so #11627 can be closed.
It also simplifies the code in the histogramIterator.Next method that
aims to recycle existing slice capacity.
Furthermore, this is on top of the release-2.40 branch because we
should probably cut a bugfix release for this.
Signed-off-by: beorn7 <beorn@grafana.com>
Typical parameters are a one-hour range at a 1 minute step, where the
function would allocate a slice of 3.6 million samples instead of 60.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Switch from 'sanity' to more inclusive language
"Removing ableist language in code is important; it helps to create and
maintain an environment that welcomes all developers of all backgrounds,
while emphasizing that we as developers select the most articulate,
precise, descriptive language we can rather than relying on metaphors.
The phrase sanity check is ableist, and unnecessarily references mental
health in our code bases. It denotes that people with mental illnesses
are inferior, wrong, or incorrect, and the phrase sanity continues to be
used by employers and other individuals to discriminate against these
people."
From https://gist.github.com/seanmhanson/fe370c2d8bd2b3228680e38899baf5cc
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Inverting the test for chunks deleted by tombstones makes all three
rejections consistent, and also avoids the case where a chunk is
excluded but still causes `trimFront` or `trimBack` to be set.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This reduces garbage, hence goes faster, when a short time range is
required compared to the amount of chunks in the block. For example
recording rules and alerts often look only at the last few minutes.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add BenchmarkOpenBlock
* Use specific types when reading offset table
Instead of reading a generic-ish []string, we can read a generic type
which would be specifically labels.Label.
This avoids allocating a slice that escapes to the heap, making it both
faster and more efficient in terms of memory management.
* Update error message for unexpected number of keys
* s/posting offset table/postings offset table/
* Remove useless lastKey assignment
* Use two []bytes vars, simplify
Applied PR feedback: removed generics, moved the label indices reading
to that specific test as we're not using it in production anyway, we're
just testing what we've just built.
Also using two []bytes variables for name and value that use the backing
buffer instead of using strings, this reduces allocations a lot as we
only copy them when we store them (this is optimized by the compiler).
* Fix the dumb bug
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Marco Pracucci <marco@pracucci.com>
Function arguments in a defer are evaluated when the defer is defined,
not when it is executed.
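A quick demonstration:

    package main

    import "fmt"

    func main() {
        x := 1
        defer fmt.Println("deferred x:", x) // prints 1: the argument was evaluated here
        x = 2
        fmt.Println("current x:", x) // prints 2
    }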
Signed-off-by: Slavik Panasovets <slavik@google.com>
With these changes, fewer operations are needed on the "happy path",
when the leading and trailing number of bits don't need an update.
The change is probably very marginal (no change in the benchmark added
here, but the benchmark also doesn't cover non-changing values), and
an argument could be made that avoiding pointers also has its
benefits.
However, I think that reducing the number of return values improves
readability, which convinced me that I should at least propose this.
Signed-off-by: beorn7 <beorn@grafana.com>