* clarify backup requirements for storage
After reading this (again) recently, I was under the impression that our backup strategy ("just throw Bacula at it") was simply not good enough and that our backups were inconsistent. I filed [an issue internally][41627] about that concern.
After a conversation with @SuperQ on IRC, however, I now understand that only the WAL files would be lost. This is an attempt to document that more clearly.
[41627]: https://gitlab.torproject.org/tpo/tpa/team/-/issues/41627
---------
Signed-off-by: anarcat <anarcat@users.noreply.github.com>
Co-authored-by: Ben Kochie <superq@gmail.com>
* Pass affected labels to MemPostings.Delete
As suggested by @bboreham, we can track the labels of the deleted series
and avoid iterating through all the label/value combinations.
This makes the MemPostings.Delete call look much faster. We don't have
a benchmark for stripeSeries.gc(), where we'll pay the price of
iterating over the labels of each deleted series.
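A minimal sketch of the idea, with simplified stand-in types (the real method also guards its map with a mutex; the signature and names here are illustrative, not the exact Prometheus API):
```go
package main

import "fmt"

// Simplified stand-ins for Prometheus' actual types, for illustration.
type SeriesRef uint64

type Label struct{ Name, Value string }

// MemPostings maps label name -> label value -> series IDs.
type MemPostings struct {
	m map[string]map[string][]SeriesRef
}

// Delete drops the deleted series from the postings lists, visiting only
// the label/value pairs the deleted series actually carried (the
// "affected" set) instead of scanning every pair in the index.
func (p *MemPostings) Delete(deleted map[SeriesRef]struct{}, affected map[Label]struct{}) {
	for l := range affected {
		vals, ok := p.m[l.Name]
		if !ok {
			continue
		}
		repl := vals[l.Value][:0] // filter in place
		for _, id := range vals[l.Value] {
			if _, gone := deleted[id]; !gone {
				repl = append(repl, id)
			}
		}
		if len(repl) > 0 {
			vals[l.Value] = repl
		} else {
			delete(vals, l.Value)
		}
	}
}

func main() {
	p := &MemPostings{m: map[string]map[string][]SeriesRef{
		"job": {"api": {1, 2}, "db": {3}},
	}}
	p.Delete(
		map[SeriesRef]struct{}{2: {}},
		map[Label]struct{}{{Name: "job", Value: "api"}: {}},
	)
	fmt.Println(p.m) // map[job:map[api:[1] db:[3]]]; "db" was never visited
}
```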
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Reduce the flakiness of TestAsyncRuleEvaluation
This test sleeps for 15 milliseconds per rule group, and then compares
the entire execution time against a multiple of that delay.
The ruleCount is 6, so it assumes that the test will reach the
assertions in less than 90ms.
Meanwhile, GitHub's Windows runner:
- ...Huh, oh? What? How much time? milliwhat? Sorry, I don't speak that.
TL;DR: this increases the delay to 250 milliseconds. This won't prevent
the test from being flaky, but it will reduce the flakiness by several
orders of magnitude and hopefully it won't be an issue anymore.
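A hypothetical reduction of the pattern (not the actual test code), showing why the size of the delay is load-bearing:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const (
	ruleCount       = 6
	artificialDelay = 250 * time.Millisecond // previously 15ms
)

func main() {
	start := time.Now()

	// Rules evaluate concurrently, so the wall-clock time should be
	// roughly one delay, not ruleCount of them.
	var wg sync.WaitGroup
	for i := 0; i < ruleCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(artificialDelay) // stands in for one rule evaluation
		}()
	}
	wg.Wait()

	// The assertion bounds the total by ruleCount*artificialDelay; the
	// gap between ~1x and 6x the delay is the budget that absorbs
	// scheduler jitter on slow CI runners. At 15ms that budget was 75ms.
	elapsed := time.Since(start)
	limit := time.Duration(ruleCount) * artificialDelay
	fmt.Printf("elapsed=%v limit=%v pass=%v\n", elapsed, limit, elapsed < limit)
}
```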
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Make tests parallel
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
When the label name is empty, which can now happen with quoted label
names, it should be quoted again when printed as a string.
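A sketch of the quoting rule, assuming the classic `[a-zA-Z_][a-zA-Z0-9_]*` label-name grammar; the helper names are illustrative, not the actual labels code:
```go
package main

import (
	"fmt"
	"strconv"
)

// isLegacyName reports whether name matches the classic
// [a-zA-Z_][a-zA-Z0-9_]* label-name rule; anything else, including the
// empty string, must be quoted when rendered back to text.
func isLegacyName(name string) bool {
	if len(name) == 0 {
		return false
	}
	for i, r := range name {
		if !(r == '_' || r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' ||
			(i > 0 && r >= '0' && r <= '9')) {
			return false
		}
	}
	return true
}

func renderName(name string) string {
	if isLegacyName(name) {
		return name
	}
	return strconv.Quote(name)
}

func main() {
	fmt.Println(renderName("job"))      // job
	fmt.Println(renderName(""))         // "" (quoted, so it round-trips)
	fmt.Println(renderName("my.label")) // "my.label"
}
```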
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* add hook to allow head compaction to create multiple output blocks
Signed-off-by: Ben Ye <benye@amazon.com>
* change Compact interface; remove BlockPopulator changes
Signed-off-by: Ben Ye <benye@amazon.com>
* rebase main
Signed-off-by: Ben Ye <benye@amazon.com>
* fix lint
Signed-off-by: Ben Ye <benye@amazon.com>
* fix unit test
Signed-off-by: Ben Ye <benye@amazon.com>
* address feedback; add unit test
Signed-off-by: Ben Ye <benye@amazon.com>
* Apply suggestions from code review
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Update tsdb/compact_test.go
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
---------
Signed-off-by: Ben Ye <benye@amazon.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
The only call we have to LabelValuesFor() already has an index.Postings,
which we expand into a slice just to pass to this method, which then
iterates over the values. That's a waste of resources: we can iterate
over the index.Postings directly.
If a downstream implementation has a slice of series instead, it can
always build an index.ListPostings from it: doing that is cheaper than
expanding an abstract index.Postings.
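A sketch of the shape of the change, with simplified stand-ins for index.Postings and the method itself:
```go
package main

import "fmt"

// Minimal stand-in for index.Postings: an iterator over sorted series IDs.
type Postings interface {
	Next() bool
	At() uint64
}

type listPostings struct {
	list []uint64
	cur  int
}

// NewListPostings mirrors the cheap direction: adapting a slice of
// series IDs to the iterator interface costs nothing, whereas expanding
// an arbitrary iterator into a slice allocates.
func NewListPostings(list []uint64) Postings { return &listPostings{list: list, cur: -1} }

func (p *listPostings) Next() bool { p.cur++; return p.cur < len(p.list) }
func (p *listPostings) At() uint64 { return p.list[p.cur] }

// labelValuesFor sketches the new shape: it consumes the iterator
// directly instead of requiring a pre-expanded []uint64.
func labelValuesFor(postings Postings, valueOf func(uint64) string) []string {
	seen := map[string]struct{}{}
	var vals []string
	for postings.Next() {
		if v := valueOf(postings.At()); v != "" {
			if _, ok := seen[v]; !ok {
				seen[v] = struct{}{}
				vals = append(vals, v)
			}
		}
	}
	return vals
}

func main() {
	values := map[uint64]string{1: "api", 2: "db", 3: "api"}
	p := NewListPostings([]uint64{1, 2, 3})
	fmt.Println(labelValuesFor(p, func(id uint64) string { return values[id] }))
	// [api db]
}
```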
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Currently we can only reduce the resolution of exponential native
histograms, so checking whether the schema is exponential is slightly
more precise than checking it against the maximum schema.
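As a sketch, with the exponential-schema bounds written out rather than taken from Prometheus' own constants:
```go
package histogram

// Sketch only: the exponential-schema bounds are hard-coded here
// (-4 through 8 as of today) instead of using the library's constants.
// Custom-buckets histograms use a sentinel schema outside this range,
// which is why a bare "schema <= max" comparison is not precise enough.
func isExponentialSchema(schema int32) bool {
	return schema >= -4 && schema <= 8
}
```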
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Converted strings to a standardized form (see the sketch after this list)
* Added golang.org/x/text to the Go dependencies
* Added test cases for FastRegexMatcher
* Added benchmark for toNormalizedLower
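A sketch of the normalization idea, assuming an NFKD-then-lowercase strategy with an ASCII fast path; this is illustrative, not necessarily the exact helper as merged:
```go
package main

import (
	"fmt"
	"strings"
	"unicode"

	"golang.org/x/text/unicode/norm"
)

// toNormalizedLower sketches the idea: ASCII-only strings take a cheap
// lowercase path; anything else is decomposed with NFKD from
// golang.org/x/text (hence the new dependency) and lowercased rune by
// rune, so case-insensitive matching compares canonical forms.
func toNormalizedLower(s string) string {
	for i := 0; i < len(s); i++ {
		if s[i] >= 0x80 { // non-ASCII byte: take the Unicode-aware path
			return strings.Map(unicode.ToLower, norm.NFKD.String(s))
		}
	}
	return strings.ToLower(s)
}

func main() {
	fmt.Println(toNormalizedLower("HTTPStatus")) // httpstatus
	fmt.Println(toNormalizedLower("ﬁLE"))        // file ("ﬁ" decomposes to "fi")
}
```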
Signed-off-by: RA <ranveeravhad777@gmail.com>
* MemPostings.PostingsForLabelMatching: let mutex go
This changes the `MemPostings.PostingsForLabelMatching` implementation
to stop holding the read mutex while matching the label values.
We've seen that this method can be slow when the matcher is expensive,
that's why we even added a context expiration check.
However, there are critical processes that might be waiting on this
mutex: writes (adding new series) and compaction (deleting the
garbage-collected ones), so we should avoid holding it for long
periods of time.
Given that we've copied the values to a slice anyway, there's no need to
hold the lock while matching.
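A sketch of the locking pattern with a simplified MemPostings; field and method names are illustrative:
```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// Simplified stand-in for MemPostings, for illustration.
type MemPostings struct {
	mtx sync.RWMutex
	m   map[string]map[string][]uint64 // name -> value -> series IDs
}

// valuesMatching copies the candidate values under the read lock, then
// releases it before running the (possibly expensive) matcher, so
// writers and head GC are not blocked behind a slow regex.
func (p *MemPostings) valuesMatching(name string, match func(string) bool) []string {
	p.mtx.RLock()
	vals := make([]string, 0, len(p.m[name]))
	for v := range p.m[name] {
		vals = append(vals, v)
	}
	p.mtx.RUnlock() // let the mutex go before matching

	matched := vals[:0] // filter the copied slice in place
	for _, v := range vals {
		if match(v) {
			matched = append(matched, v)
		}
	}
	return matched
}

func main() {
	p := &MemPostings{m: map[string]map[string][]uint64{
		"job": {"api-1": nil, "api-2": nil, "db": nil},
	}}
	fmt.Println(p.valuesMatching("job", func(v string) bool {
		return strings.HasPrefix(v, "api")
	}))
}
```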
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* MemPostings: reduce locking/unlocking
MemPostings.Delete is called from Head.gc(), i.e. it gets the IDs of the
series that have churned.
I'd assume that many label values aren't affected by that churn at all,
so it doesn't make sense to touch the lock while checking them.
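A sketch of the per-value pattern, again with a simplified MemPostings; the real code batches this differently, so treat it as the idea only:
```go
package postings

import "sync"

// Simplified stand-in; the real MemPostings guards more state.
type MemPostings struct {
	mtx sync.RWMutex
	m   map[string]map[string][]uint64
}

// deleteFrom sketches the pattern: a cheap read-locked scan decides
// whether this label value saw any churn at all; only then is the
// exclusive write lock taken to rewrite the postings list.
func (p *MemPostings) deleteFrom(name, value string, deleted map[uint64]struct{}) {
	p.mtx.RLock()
	touched := false
	for _, id := range p.m[name][value] {
		if _, ok := deleted[id]; ok {
			touched = true
			break
		}
	}
	p.mtx.RUnlock()
	if !touched {
		return // the common case: this value had no deleted series
	}

	p.mtx.Lock()
	defer p.mtx.Unlock()
	repl := p.m[name][value][:0]
	for _, id := range p.m[name][value] {
		if _, ok := deleted[id]; !ok {
			repl = append(repl, id)
		}
	}
	p.m[name][value] = repl
}
```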
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
It's quite common during the compaction cycle to hold series IDs for
series that aren't in the TSDB head anymore.
We shouldn't fail if that happens, as the caller has no way to figure
out which one of the IDs doesn't exist.
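A sketch of the tolerant lookup; the function and types are hypothetical stand-ins for the head's series lookup:
```go
package main

import "fmt"

type series struct{ value string }

// labelValuesForIDs sketches the tolerant behavior: IDs whose series
// were already garbage-collected from the head are skipped rather than
// treated as an error, since the caller can't tell which ID was stale.
func labelValuesForIDs(ids []uint64, byID func(uint64) *series) []string {
	vals := make([]string, 0, len(ids))
	for _, id := range ids {
		s := byID(id)
		if s == nil {
			continue // series gone from the head: skip, don't fail
		}
		vals = append(vals, s.value)
	}
	return vals
}

func main() {
	head := map[uint64]*series{1: {"api"}, 3: {"db"}}
	get := func(id uint64) *series { return head[id] }
	fmt.Println(labelValuesForIDs([]uint64{1, 2, 3}, get)) // [api db]; ID 2 is stale
}
```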
Fixes https://github.com/prometheus/prometheus/issues/14278
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>