* Tests for Mempostings.{Add,Get} data race
* Fix MemPostings.{Add,Get} data race
We can't modify the postings lists that are held in MemPostings, as they
might already be in use by some readers (see the sketch after this list).
* Modify BenchmarkHeadStripeSeriesCreate to have common labels
If there are no common labels on the series, we don't exercise the
ordering part of MemSeries, as we're just creating slices of one element
for each label value.
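Regarding the MemPostings data race fix above, here is a minimal sketch of the copy-before-publish idea (simplified types and methods, not the actual MemPostings code): the writer builds a fresh slice and swaps it in under the lock, so a reader that already obtained the old slice never sees it mutated underneath.

```go
package memindex

import (
	"slices"
	"sync"
)

// postings is a simplified stand-in for MemPostings: label value -> sorted series IDs.
type postings struct {
	mtx sync.RWMutex
	m   map[string][]uint64
}

func newPostings() *postings {
	return &postings{m: make(map[string][]uint64)}
}

// add registers id for val without mutating the slice readers may already hold:
// it copies, appends, sorts, and then swaps the new slice into the map.
func (p *postings) add(val string, id uint64) {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	old := p.m[val]
	list := make([]uint64, len(old), len(old)+1)
	copy(list, old)
	list = append(list, id)
	slices.Sort(list)
	p.m[val] = list // readers still holding `old` are unaffected
}

// get returns the current postings list; callers must treat it as read-only.
func (p *postings) get(val string) []uint64 {
	p.mtx.RLock()
	defer p.mtx.RUnlock()
	return p.m[val]
}
```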
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Move writing memSeries lastHistogramValue and lastFloatHistogramValue
after series creation under lock.
The resulting code isn't totally correct, in the sense that we're setting
these values before Commit(), so they might be overwritten or rolled back
later.
Also, Append of a stale sample checks these values without the lock, so
there's still a potential race.
The correct solution would be to set these only in Commit(). However, Commit()
would then also need to process samples in order, rather than floats first,
then histograms, then float histograms as it actually does - which leads to
not knowing what stale marker to write for histograms.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Two Appenders race when creating a series with a native histogram,
as the memSeries will be shared and the lastHistogram field is written
without a lock.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
- `float histogram` → `floathistogram`, as it is used in the code.
- Add actual links from the encodings to the code (to find the actual numerical values).
- `<bytes>` → `<data>` for consistency.
Signed-off-by: beorn7 <beorn@grafana.com>
For: #14355
This commit updates Prometheus to adopt stdlib's log/slog package in
favor of go-kit/log. As part of converting to use slog, several other
related changes are required to get Prometheus working, including:
- removed unused logging util func `RateLimit()`
- forward ported the util/logging Deduper by implementing a small custom slog.Handler that does the deduping before chaining log calls to the underlying real slog.Logger (see the sketch after this list)
- move some of the json file logging functionality to use prom/common package functionality
- refactored some of the new json file logging for scraping
- changes to promql.QueryLogger interface to swap out logging methods for relevant slog sugar wrappers
- updated lots of tests that used/replicated custom logging functionality, attempting to keep the logical goal of the tests consistent after the transition
- added a healthy number of `if logger == nil { $makeLogger }`-style checks in functions where no logger was provided -- old code that used the go-kit/log.Logger interface had several places where nil references occurred when calling functions like `With()` to add keyvals on the new *slog.Logger type
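A rough sketch of what such a deduplicating handler can look like (names and the keying on the record message are assumptions, not the actual util/logging implementation): it swallows records repeated within a window and chains everything else to the wrapped handler.

```go
package logging

import (
	"context"
	"log/slog"
	"sync"
	"time"
)

// dedupeState is shared by derived handlers so WithAttrs/WithGroup copies
// still deduplicate against the same history.
type dedupeState struct {
	mtx  sync.Mutex
	seen map[string]time.Time // message -> last time it was forwarded
}

// DedupeHandler drops records whose message was already forwarded within the
// repeat window and passes everything else to the underlying handler.
type DedupeHandler struct {
	next   slog.Handler
	window time.Duration
	state  *dedupeState
}

func NewDedupeHandler(next slog.Handler, window time.Duration) *DedupeHandler {
	return &DedupeHandler{
		next:   next,
		window: window,
		state:  &dedupeState{seen: make(map[string]time.Time)},
	}
}

func (h *DedupeHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return h.next.Enabled(ctx, level)
}

func (h *DedupeHandler) Handle(ctx context.Context, r slog.Record) error {
	h.state.mtx.Lock()
	last, ok := h.state.seen[r.Message]
	now := time.Now()
	if ok && now.Sub(last) < h.window {
		h.state.mtx.Unlock()
		return nil // duplicate within the window: swallow it
	}
	h.state.seen[r.Message] = now
	h.state.mtx.Unlock()
	return h.next.Handle(ctx, r)
}

func (h *DedupeHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &DedupeHandler{next: h.next.WithAttrs(attrs), window: h.window, state: h.state}
}

func (h *DedupeHandler) WithGroup(name string) slog.Handler {
	return &DedupeHandler{next: h.next.WithGroup(name), window: h.window, state: h.state}
}
```

Wrapped as `slog.New(NewDedupeHandler(inner, time.Minute))`, it looks like any other *slog.Logger to callers.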
Signed-off-by: TJ Hoplock <t.hoplock@gmail.com>
Because we are reimplementing the `IndexReader` to fetch in-order and
out-of-order chunks together, we must reproduce the behaviour of
`Head.indexRange()`, which floors the minimum time queried at `head.MinTime()`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The specialized versions of adding a sample to the ring:
func addH(s hSample, buf []hSample, r *sampleRing) []hSample
func addFH(s fhSample, buf []fhSample, r *sampleRing) []fhSample
already correctly copy histogram samples out of the reused hReader and fhReader
buffers, but the generic version does not. This means that the data is
overwritten on the next read if the sample ring has seen histogram
and float samples at the same time and switched to generic mode.
The `genericAdd` function (which was commented out anyway) is by now quite
different from the specialized functions, so this commit deletes it.
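An illustrative sketch of the essence of the fix (local types, not the real sampleRing internals): any path that stores a sample whose histogram pointer still refers to the reused decode buffer has to deep-copy it first, as the specialized addH/addFH already do.

```go
package ring

import "github.com/prometheus/prometheus/model/histogram"

// sample is a simplified generic ring entry; h is nil for plain float samples.
type sample struct {
	t int64
	f float64
	h *histogram.Histogram
}

// store places s into the ring buffer, detaching any histogram from the
// reused reader buffer so the next read cannot overwrite the stored data.
func store(buf []sample, i int, s sample) {
	if s.h != nil {
		s.h = s.h.Copy()
	}
	buf[i%len(buf)] = s
}
```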
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
We are still seeing lock contention on MemPostings.mtx, and MemPostings.Delete() is by far the most expensive operation on that mutex.
This adds parallelism to that method, trying to reduce the amount of time we spend with the mutex held.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
If the query overlaps the range currently undergoing compaction, we
should only fetch chunks up to that time. Need to store that min time
in `HeadAndOOOIndexReader`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Instead of a 2-bit write followed by a 14-bit write, do two 8-bit
writes, which goes much faster since it avoids looping.
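An illustrative sketch of the idea (bytes.Buffer stands in for the chunk's bit stream; this is not the real chunkenc code): a 2-bit prefix plus a 14-bit value occupy exactly 16 bits, so they can be packed once and emitted as two whole bytes, skipping the per-bit loop.

```go
package example

import "bytes"

// writePrefixed14 packs a 2-bit prefix and a 14-bit value into 16 bits and
// emits them as two byte writes instead of looping over individual bits.
func writePrefixed14(buf *bytes.Buffer, prefix uint8, v uint16) {
	packed := uint16(prefix)<<14 | (v & 0x3FFF)
	buf.WriteByte(byte(packed >> 8))
	buf.WriteByte(byte(packed))
}
```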
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Benchmarks must do the same work N times.
Run 3 cases, where the values are constant, vary a bit, and vary a lot.
Also aim for 120 samples, the same as the TSDB default.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The shortcut for `.*` matches newlines as well.
Add the `^(?s:` preamble change.
Add a test.
Add the dotAll flag for the regex.
Add and fix regex tests.
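A quick self-contained check of the behaviour this relies on (plain regexp, not FastRegexMatcher itself): without the s flag, `.` does not match a newline, so the `(?s:` preamble changes the result on multi-line input.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	input := "foo\nbar"
	fmt.Println(regexp.MustCompile(`^.*$`).MatchString(input))      // false: `.` stops at the newline
	fmt.Println(regexp.MustCompile(`^(?s:.*)$`).MatchString(input)) // true: dotAll lets `.*` span it
}
```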
Signed-off-by: Mario Fernandez <mariofer@redhat.com>
Several regexps were coded like `"^.*$"`, which is an unnatural
formulation nobody is likely to use. Inside `NewMatcher`, `^` and `$`
are added anyway, which makes the form in the benchmark redundant.
It even printed it out in the expected way.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* tsdb: mmapCurrentOOOHeadChunk prepare for multiple ooo chunks
Currently float samples can only create a single ooo head chunk, but
native histograms can result in multiple due to counter resets, etc.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* tsdb: getOOOSeriesChunks prepare for multiple ooo chunks
Currently float samples can only create a single ooo head chunk, but
native histograms can result in multiple due to counter resets, etc.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
---------
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Check if headQuerier is nil before trying to use it.
* TestQueryOOOHeadDuringTruncate: unit test to check query during truncate
Regression test for #14822
* Simulate race between query and Compact()
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
promql, tsdb (histograms): Do not re-use spans between histograms
When multiple points exist with the same native histogram schemas, they
share their spans.
This causes a problem when a native histogram (NH) schema is modified (for
example, during a Sum): the other NHs with the same spans are also modified.
As such, we should create a new Span for each NH. This ensures NHs'
interfaces are safe to use without considering the effect on other histograms.
At the moment this doesn't present itself as a problem, because all
aggregations and functions operating on native histograms first copy them in
the PromQL query engine.
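A minimal illustration of the aliasing hazard (simplified structs, not the real model/histogram types): two histograms decoded from the same point must not share the backing array of their span slices, so copy rather than alias.

```go
package example

// span mirrors the shape of a native histogram bucket span.
type span struct {
	offset int32
	length uint32
}

type hist struct {
	spans []span
}

// withCopiedSpans returns a histogram whose spans can be mutated freely
// without affecting src.
func withCopiedSpans(src hist) hist {
	spans := make([]span, len(src.spans))
	copy(spans, src.spans)
	return hist{spans: spans}
}
```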
Signed-off-by: Joshua Hesketh <josh@nitrotech.org>
---------
Signed-off-by: Joshua Hesketh <josh@nitrotech.org>
This was part of #14525 which was reverted.
I still think that having this benchmark committed into the repo is
useful.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
It starts its index from 0, but users call Next() before the first sample,
so it needs to start from -1.
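A tiny illustration of the convention being restored (hypothetical iterator, not the actual code): because callers invoke Next() before reading the first sample, the index has to begin at -1, or the first sample is silently skipped.

```go
package example

type sliceIter struct {
	samples []int64
	i       int
}

// newSliceIter starts at -1 so that the first Next() lands on index 0.
func newSliceIter(samples []int64) *sliceIter {
	return &sliceIter{samples: samples, i: -1}
}

func (it *sliceIter) Next() bool {
	it.i++
	return it.i < len(it.samples)
}

// At must only be called after Next() has returned true.
func (it *sliceIter) At() int64 { return it.samples[it.i] }
```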
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
`getOOOSeriesChunks` was already finding sets of overlapping chunks; we
store those in a `multiMeta` struct so that `ChunkOrIterable` can
reconstruct an `Iterable` easily and predictably.
We no longer need a `MergeOOO` flag to indicate that this Meta should
be merged with other ones; this is explicit in the `multiMeta` structure.
We also no longer need `chunkMetaAndChunkDiskMapperRef`.
Add `wrapOOOHeadChunk` to defeat `chunkenc.Pool` - chunks are reset
during compaction, but if we wrap them (like `safeHeadChunk` was doing)
then this is skipped.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Go's built-in append() grows larger slices by a factor of about 1.3, which means we do a lot more allocating and copying for larger postings.
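A sketch of the workaround (hypothetical helper name; the real change lives in MemPostings): grow the slice by explicit doubling instead of relying on append's smaller growth factor for big slices, so large postings lists reallocate and copy far less often.

```go
package memindex

import "github.com/prometheus/prometheus/storage"

// appendWithDoubling appends ref, doubling the capacity ourselves when the
// slice is full rather than deferring to append's growth policy.
func appendWithDoubling(list []storage.SeriesRef, ref storage.SeriesRef) []storage.SeriesRef {
	if len(list) == cap(list) {
		bigger := make([]storage.SeriesRef, len(list), 2*cap(list)+1)
		copy(bigger, list)
		list = bigger
	}
	return append(list, ref)
}
```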
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Case 1: OOO in-memory head chunk overlaps with first mmaped in-order chunk.
Query: |----------------------------------------------------------------|
InO: |------mmap---------------||---------mem----------------------|
OOO: |-----mem-----------|
This triggers the bug where ChunkOrIterableWithCopy does not include OOO head chunks.
Similar to #14693; however, testing the end of the interval doesn't
trigger the problem, because there the in-order head chunk will be
trimmed with a tombstone, causing the code to switch to ChunkOrIterable,
which was already fixed.
See a36d1a8a92/tsdb/querier.go (L646)
where len(p.bufIter.Intervals) will be non-zero, because it includes the
tombstone to trim the result to the query max time.
Thus a new test is added to check the overlap at the beginning of the
interval that has a separate chunk, which does not need trimming.
Note: the same test doesn't fail for the sample querier in Test_Querier_OOOQuery,
as that doesn't use the copy path, i.e. copyHeadChunk is false in the if
condition above.
Case 2:
OOO mmaped head chunk overlaps with first mmaped in-order chunk.
Query: |----------------------------------------------------------------|
InO: |------mmap---------------||---------mem----------------------|
OOO: |-----mmap-----------| |--mem--|
In this case the meta contains the reference of the in-order chunk and
no indication that a merge is needed with the OOO mmaped chunk.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Several things done here:
- Set `max-issues-per-linter` to 0 so that we actually see all linter
warnings and not just 50 per linter. (As we also set
`max-same-issues` to 0, I assume this was the intention from the
beginning.)
- Stop using the golangci-lint default excludes (by setting
`exclude-use-default: false`). Those are too generous and don't match
our style conventions. (I have re-added some of the excludes
explicitly in this commit. See below.)
- Re-add the `errcheck` exclusion we have used so far via the
defaults.
- Exclude the signature requirement `govet` has for `Seek` methods
because we use non-standard `Seek` methods a lot. (But we keep other
requirements, while the default excludes completely disabled the
check for common method signatures.)
- Exclude warnings about missing doc comments on exported symbols. (We
used to be pretty adamant about doc comments, but stopped that at
some point in the past. By now, we have about 500 missing doc
comments. We may consider reintroducing this check, but that's
outside of the scope of this commit. The default excludes of
golangci-lint essentially ignore doc comments completely.)
- By no longer using the default excludes, we now get warnings back on
malformed doc comments. That's the most impactful change in this
commit. It does not enforce doc comments (again), but _if_ there is
a doc comment, it has to have the recommended form. (Most of the
changes in this commit are fixing this form.)
- Improve wording/spelling of some comments in .golangci.yml, and
remove an outdated comment.
- Leave `package-comments` inactive, but add a TODO asking if we
should change that.
- Add a new sub-linter `comment-spacings` (and fix corresponding
comments), which avoids missing spaces after the leading `//`.
Signed-off-by: beorn7 <beorn@grafana.com>
* tsdb: Unit test query overlapping in order and ooo head
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* TSDB: Merge overlapping head chunk
The basic idea is that getOOOSeriesChunks can populate Meta.Chunk, but since
it only returns one Meta per overlapping time-slot, that pointer may end up in a
Meta with a head-chunk ID. So we need HeadAndOOOChunkReader.ChunkOrIterable()
to call mergedChunks in that case.
Previously, mergedChunks was checking that meta.Ref was a valid OOO chunk reference,
but it never actually used that reference; it just found all chunks overlapping in time.
So we can delete that code.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
In `mmapCurrentOOOHeadChunk`, check if the number is at the maximum and
drop the data with an error log. This is not expected to happen as the
maximum is over 8 million; that's 8 years of 1 sample every second.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Re-enable check in `createHeadWithOOOSamples` which wasn't really broken.
* Move code making `Block` into a `Queryable` into test file.
* Make `getSeriesChunks` return a slice (renamed `appendSeriesChunks`).
* Rename `oooMergedChunks` to `mergedChunks`.
* Improve comment on `ChunkOrIterableWithCopy`.
* Name return values from unpackHeadChunkRef.
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Use headIndexReader instead.
OOOCompactionHeadIndexReader needs to be expanded slightly, because it previously delegated to OOOHeadIndexReader.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Just query via `HeadAndOOOQuerier`, which will skip series where no
in-order chunks are in range.
Now we don't need `OOORangeHead`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Add `HeadAndOOOQuerier` which iterates just once over series, then
where necessary merges chunks from in-order and out-of-order lists.
Add a ChunkQuerier for in-order and ooo together
Add copy-last-chunk behaviour to HeadAndOOOChunkReader
Out-of-order chunk IDs are distinguished from in-order by setting bit 23.
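A rough sketch of that encoding (constant and helper names are made up): one high bit marks a head chunk reference as out-of-order while the ordinary ID stays in the low bits.

```go
package example

const oooChunkIDBit = uint64(1) << 23

func markOOO(id uint64) uint64      { return id | oooChunkIDBit }
func isOOO(id uint64) bool          { return id&oooChunkIDBit != 0 }
func plainChunkID(id uint64) uint64 { return id &^ oooChunkIDBit }
```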
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Refactor existing BenchmarkQuerierSelect to provide the set-up.
Note that Head queries now run faster because they use a RangeHead.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Rename a variable.
Add parameters to memSeries.insert function.
No effect on how float samples are handled.
Related to #14546
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Discovered while working on #14546 OOO native histograms.
Not triggered on main before #14546 as the code path is unused.
There was a bug where the min time of a chunk was adjusted even
if it was only recoded and not completely new.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Introduce delayed compaction to avoid simultaneous compactions and reduce
stress on shared resources.
This is enabled via `--enable-feature=delayed-compaction`.
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
* Fix appendable: check whether last val was a histogram
When appending a float, we were checking whether lastValue was equal to the
current value, but we didn't check whether the last value was actually a float.
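A sketch of the corrected condition (field names echo memSeries but the helper itself is hypothetical): a float append only repeats the previous sample if the previous sample really was a float, not a histogram whose stored lastValue happens to match.

```go
package example

import "github.com/prometheus/prometheus/model/histogram"

// lastSample holds the simplified per-series state relevant to the check.
type lastSample struct {
	value               float64
	histogramValue      *histogram.Histogram
	floatHistogramValue *histogram.FloatHistogram
}

// floatRepeatsLast reports whether appending float v repeats the previous
// sample: the value must match AND the previous sample must actually have
// been a float - the second part is what was missing.
func floatRepeatsLast(last lastSample, v float64) bool {
	return last.histogramValue == nil &&
		last.floatHistogramValue == nil &&
		last.value == v
}
```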
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>