* expose hook for block querier
Signed-off-by: Ben Ye <benye@amazon.com>
* update comment
Signed-off-by: Ben Ye <benye@amazon.com>
* use defined type
Signed-off-by: Ben Ye <benye@amazon.com>
---------
Signed-off-by: Ben Ye <benye@amazon.com>
* Pass affected labels to MemPostings.Delete
As suggested by @bboreham, we can track the labels of the deleted series
and avoid iterating through all the label/value combinations.
This makes the MemPostings.Delete call much faster. We don't have a
benchmark for stripeSeries.gc(), where we'll pay the price of iterating
the labels of each deleted series.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
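A minimal, self-contained sketch of the idea (the types and names below
are illustrative, not the real tsdb/index API): Delete receives the set
of affected labels and only walks those postings lists, instead of
scanning every label/value combination.
```go
package main

import "fmt"

type SeriesRef uint64

type Label struct{ Name, Value string }

// memPostings maps label name -> value -> series IDs, mimicking the
// shape of an inverted index.
type memPostings map[string]map[string][]SeriesRef

// deleteSeries removes the given series, but only walks the postings
// lists of the labels reported as affected, instead of every name/value
// pair in the index.
func (m memPostings) deleteSeries(deleted map[SeriesRef]struct{}, affected map[Label]struct{}) {
	for l := range affected {
		refs := m[l.Name][l.Value]
		kept := refs[:0]
		for _, r := range refs {
			if _, ok := deleted[r]; !ok {
				kept = append(kept, r)
			}
		}
		if len(kept) == 0 {
			delete(m[l.Name], l.Value)
		} else {
			m[l.Name][l.Value] = kept
		}
	}
}

func main() {
	mp := memPostings{"job": {"api": {1, 2}, "db": {3}}}
	mp.deleteSeries(
		map[SeriesRef]struct{}{2: {}},
		map[Label]struct{}{{Name: "job", Value: "api"}: {}},
	)
	fmt.Println(mp) // map[job:map[api:[1] db:[3]]]
}
```
The trade-off noted above is that the caller (stripeSeries.gc()) now
has to collect the labels of each deleted series to build that affected
set.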
* add hook to allow head compaction to create multiple output blocks
Signed-off-by: Ben Ye <benye@amazon.com>
* change Compact interface; remove BlockPopulator changes
Signed-off-by: Ben Ye <benye@amazon.com>
* rebase main
Signed-off-by: Ben Ye <benye@amazon.com>
* fix lint
Signed-off-by: Ben Ye <benye@amazon.com>
* fix unit test
Signed-off-by: Ben Ye <benye@amazon.com>
* address feedback; add unit test
Signed-off-by: Ben Ye <benye@amazon.com>
* Apply suggestions from code review
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Update tsdb/compact_test.go
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
---------
Signed-off-by: Ben Ye <benye@amazon.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
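A hedged sketch of what a multi-output compaction contract can look
like; the method set and signatures below are an approximation for
illustration, not the exact upstream tsdb.Compactor interface.
```go
package tsdbsketch

// ULID stands in for the block identifier type used upstream.
type ULID [16]byte

// Compactor approximates a compactor whose compaction step may emit
// several output blocks instead of exactly one.
type Compactor interface {
	// Plan returns the directories of blocks that should be compacted next.
	Plan(dir string) ([]string, error)

	// Compact merges the given block directories. With the
	// multiple-output hook, an implementation may split the result
	// (e.g. by time range) and return one ULID per block it produced.
	Compact(dest string, dirs []string) ([]ULID, error)
}
```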
The only call we have to LabelValuesFor() has an index.Postings, and we
expand it to pass to this method, which will iterate over the values.
That's a waste of resources: we can iterate over the index.Postings
directly.
If a downstream implementation has a slice of series, it can always
build an index.ListPostings from it: doing that is cheaper than
expanding an abstract index.Postings.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
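A short illustration of the cost argument, assuming the current
tsdb/index helpers NewListPostings and ExpandPostings: wrapping a slice
into a Postings iterator is essentially free, while expanding an
abstract Postings walks and allocates up front.
```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/index"
)

func main() {
	refs := []storage.SeriesRef{10, 20, 30}

	// Cheap direction: a slice of series refs wraps into an iterator
	// without copying or walking anything.
	p := index.NewListPostings(refs)

	// Expensive direction: expanding an abstract Postings materializes
	// every ref into a new slice before anyone can use it.
	expanded, err := index.ExpandPostings(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(expanded) // [10 20 30]
}
```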
* MemPostings.PostingsForLabelMatching: let mutex go
This changes the `MemPostings.PostingsForLabelMatching` implementation
to stop holding the read mutex while matching the label values.
We've seen that this method can be slow when the matcher is expensive,
which is why we even added a context expiration check.
However, critical processes might be waiting on this mutex: writes
(adding new series) and compaction (deleting the garbage-collected
ones), so we should avoid holding it for a long period of time.
Given that we've copied the values to a slice anyway, there's no need to
hold the lock while matching.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
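A self-contained sketch of the locking pattern (illustrative types, not
the actual MemPostings internals): copy the candidate values under the
read lock, release it, and only then run the potentially expensive
matcher.
```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

type labelIndex struct {
	mtx    sync.RWMutex
	values map[string][]string // label name -> known values
}

// matchingValues copies the values under the read lock, then releases
// the lock before running the (possibly expensive) match function, so
// writers and GC are not blocked while we evaluate a slow regexp.
func (ix *labelIndex) matchingValues(name string, match func(string) bool) []string {
	ix.mtx.RLock()
	vals := make([]string, len(ix.values[name]))
	copy(vals, ix.values[name])
	ix.mtx.RUnlock()

	var out []string
	for _, v := range vals {
		if match(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	ix := &labelIndex{values: map[string][]string{"job": {"api", "db", "api-canary"}}}
	fmt.Println(ix.matchingValues("job", func(v string) bool { return strings.HasPrefix(v, "api") }))
	// [api api-canary]
}
```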
* MemPostings: reduce locking/unlocking
MemPostings.Delete is called from Head.gc(), i.e. it gets the IDs of the
series that have churned.
I'd assume that many label values aren't affected by that churn at all,
so it doesn't make sense to touch the lock while checking them.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
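A self-contained sketch of that pattern (again illustrative, not the
real implementation): scan under the read lock to find which values
actually reference a deleted series, and take the write lock only for
those.
```go
package main

import (
	"fmt"
	"sync"
)

type seriesRef uint64

type postingsMap struct {
	mtx sync.RWMutex
	m   map[string][]seriesRef // label value -> series IDs
}

// deleteRefs first scans under the read lock to find the values that
// actually contain deleted series, then takes the write lock only for
// those, so values untouched by the churn never block writers.
func (p *postingsMap) deleteRefs(deleted map[seriesRef]struct{}) {
	p.mtx.RLock()
	var affected []string
	for value, refs := range p.m {
		for _, r := range refs {
			if _, ok := deleted[r]; ok {
				affected = append(affected, value)
				break
			}
		}
	}
	p.mtx.RUnlock()

	for _, value := range affected {
		p.mtx.Lock()
		old := p.m[value]
		kept := old[:0]
		for _, r := range old {
			if _, ok := deleted[r]; !ok {
				kept = append(kept, r)
			}
		}
		if len(kept) == 0 {
			delete(p.m, value)
		} else {
			p.m[value] = kept
		}
		p.mtx.Unlock()
	}
}

func main() {
	p := &postingsMap{m: map[string][]seriesRef{"api": {1, 2}, "db": {3}}}
	p.deleteRefs(map[seriesRef]struct{}{2: {}})
	fmt.Println(p.m) // map[api:[1] db:[3]]
}
```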
It's quite common during the compaction cycle to hold series IDs for
series that aren't in the TSDB head anymore.
We shouldn't fail if that happens, as the caller has no way to figure
out which one of the IDs doesn't exist.
Fixes https://github.com/prometheus/prometheus/issues/14278
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* expose hook in tsdb to allow customizing compactor
Signed-off-by: Ben Ye <benye@amazon.com>
* address comment
Signed-off-by: Ben Ye <benye@amazon.com>
---------
Signed-off-by: Ben Ye <benye@amazon.com>
Now the error will include the timestamp and the existing and new values.
When you are trying to track down the source of this error, it can be
useful to see that the values are close, or alternating, or something
else.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
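A hedged sketch of the kind of message this enables; the helper below
is hypothetical, not the upstream storage API, but it shows the
timestamp plus both conflicting values side by side.
```go
package main

import (
	"errors"
	"fmt"
)

// errDuplicateSample stands in for the storage-level sentinel error.
var errDuplicateSample = errors.New("duplicate sample for timestamp")

// newDuplicateFloatErr is a hypothetical helper: it decorates the
// sentinel with the timestamp and the two conflicting values.
func newDuplicateFloatErr(t int64, existing, newValue float64) error {
	return fmt.Errorf("%w %d, old value %g, new value %g", errDuplicateSample, t, existing, newValue)
}

func main() {
	err := newDuplicateFloatErr(1718035200000, 42.0, 42.000001)
	fmt.Println(err)
	// duplicate sample for timestamp 1718035200000, old value 42, new value 42.000001
	fmt.Println(errors.Is(err, errDuplicateSample)) // true
}
```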
Test with different amounts of capacity and exemplars, so that
sometimes new exemplars evict older ones.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Follow up on https://github.com/prometheus/prometheus/pull/14096
As promised, I bring a benchmark, which shows a very small improvement
if the context is checked every 128 label iterations instead of every
100.
It's much easier for a computer to check modulo 128 (a power of two)
than modulo 100.
This is a very small 0-2% improvement, but I'd say this is one of the
hottest paths of the app, so it's still relevant.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
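A tiny sketch of why the constant matters: with a power-of-two interval
the modulo reduces to a bitwise AND, while modulo 100 needs an integer
division. The names below are illustrative.
```go
package sketch

// checkEvery is a power of two, so i%checkEvery compiles down to
// i & (checkEvery - 1) instead of an integer division.
const checkEvery = 128

func shouldCheckContext(i int) bool {
	return i%checkEvery == 0 // equivalent to i&(checkEvery-1) == 0 for i >= 0
}
```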
Followup to #14096
Unfortunately the previous PR introduced this bug by not releasing the
lock before returning.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* tsdb: check for context cancel before regex matching postings
Regex matching can be heavy if the regex takes a lot of cycles to
evaluate, and without this fix we can get stuck evaluating postings
for a long time. The constant checkContextEveryNIterations=100
may be changed later.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
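A self-contained sketch of the pattern; the constant name mirrors the
commit, everything else is illustrative: check the context every N
iterations so a cancelled query stops burning CPU on the regex.
```go
package main

import (
	"context"
	"fmt"
	"regexp"
)

const checkContextEveryNIterations = 100

// matchValues runs re against each value, aborting promptly if ctx is
// cancelled, without paying for ctx.Err() on every single iteration.
func matchValues(ctx context.Context, values []string, re *regexp.Regexp) ([]string, error) {
	var out []string
	for i, v := range values {
		if i%checkContextEveryNIterations == 0 {
			if err := ctx.Err(); err != nil {
				return nil, err
			}
		}
		if re.MatchString(v) {
			out = append(out, v)
		}
	}
	return out, nil
}

func main() {
	vals := []string{"api-1", "db-1", "api-2"}
	out, err := matchValues(context.Background(), vals, regexp.MustCompile(`^api-`))
	fmt.Println(out, err) // [api-1 api-2] <nil>
}
```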
Add method `PostingsForLabelMatching` to `tsdb.IndexReader`, to obtain postings for labels with a certain name and values accepted by a provided callback, and use it from `tsdb.PostingsForMatchers`.
The intention is to optimize regexp matcher paths, especially not having to load all label values before matching on them.
Plus tests, and refactor some `tsdb/index.Reader` methods.
Benchmarking shows a memory reduction of up to ~100% and a speedup of up to ~50%.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
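A hedged sketch of the shape of the added method, written against a
cut-down stand-in for tsdb.IndexReader (the exact upstream signature
may differ slightly): the reader gets the label name and a
value-accepting callback, so implementations can match values as they
stream over their own index instead of returning every value first.
```go
package sketch

import (
	"context"

	"github.com/prometheus/prometheus/tsdb/index"
)

// indexReader is a cut-down stand-in for tsdb.IndexReader, showing only
// the method added by this change (the real interface has many more
// methods).
type indexReader interface {
	// PostingsForLabelMatching returns postings for all series that have
	// a value for the given label name accepted by the match callback.
	PostingsForLabelMatching(ctx context.Context, name string, match func(value string) bool) index.Postings
}
```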
Use it in loadDataAsQueryable to make sure the RO Head doesn't truncate or cut new chunks in data/chunks_head/.
Add a -sandbox-dir-root flag to "promtool tsdb dump/dump-openmetrics" to control the root of that sandbox directory.
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Thanos can create and destroy TSDBs dynamically, and once a TSDB
disappears its files are deleted. Calculating the size of the
WAL then fails with errors like:
```
msg: "Failed to calculate size of "wal" dir", "err": "lstat
/tsdbdir/wal: no such file or directory", "caller": "wlog.go:271"
```
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
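A self-contained sketch of a tolerant size calculation (the function
name is illustrative): entries, or the whole directory, that disappear
during the walk are treated as size zero instead of surfacing an error.
```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// dirSize sums file sizes under dir, ignoring entries (or the whole
// dir) that disappear while we walk, which happens when a TSDB is
// deleted concurrently.
func dirSize(dir string) (int64, error) {
	var size int64
	err := filepath.Walk(dir, func(_ string, info fs.FileInfo, err error) error {
		if errors.Is(err, os.ErrNotExist) {
			return nil // the file or directory was removed under us
		}
		if err != nil {
			return err
		}
		if !info.IsDir() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}

func main() {
	n, err := dirSize(os.TempDir())
	fmt.Println(n, err)
}
```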
Signed-off-by: Jonathan Halterman <jonathan@grafana.com>
Signed-off-by: Jonathan Halterman <jhalterman@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>