Commit graph

74 commits

Author SHA1 Message Date
TJ Hoplock 6ebfbd2d54 chore!: adopt log/slog, remove go-kit/log
For: #14355

This commit updates Prometheus to adopt stdlib's log/slog package in
favor of go-kit/log. As part of converting to use slog, several other
related changes are required to get prometheus working, including:
- removed unused logging util func `RateLimit()`
- forward ported the util/logging/Deduper logging by implementing a small custom slog.Handler that does the deduping before chaining log calls to the underlying real slog.Logger
- moved some of the json file logging functionality to use prom/common package functionality
- refactored some of the new json file logging for scraping
- changed the promql.QueryLogger interface to swap out logging methods for the relevant slog sugar wrappers
- updated lots of tests that used/replicated custom logging functionality, attempting to keep the logical goal of the tests consistent after the transition
- added a healthy amount of `if logger == nil { $makeLogger }`-type conditional checks in various functions where no logger was provided -- old code that used the go-kit/log.Logger interface had several places that ended up with nil references when trying to use functions like `With()` to add keyvals on the new *slog.Logger type

Signed-off-by: TJ Hoplock <t.hoplock@gmail.com>
2024-10-07 15:58:50 -04:00
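
The deduplicating handler and the nil-logger guards mentioned in the commit above can be pictured with a short sketch. This is a minimal, hypothetical slog.Handler — not the actual util/logging code — with invented names like `dedupHandler` and `dedupState`:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"sync"
	"time"
)

// dedupState is shared between handler copies created by WithAttrs/WithGroup,
// so deduplication survives logger.With() calls.
type dedupState struct {
	mtx  sync.Mutex
	seen map[string]time.Time
}

// dedupHandler is a hypothetical slog.Handler that drops records whose message
// was already logged within a time window, then chains the rest to the
// underlying handler.
type dedupHandler struct {
	next   slog.Handler
	window time.Duration
	state  *dedupState
}

func newDedupHandler(next slog.Handler, window time.Duration) *dedupHandler {
	return &dedupHandler{next: next, window: window, state: &dedupState{seen: map[string]time.Time{}}}
}

func (h *dedupHandler) Enabled(ctx context.Context, l slog.Level) bool {
	return h.next.Enabled(ctx, l)
}

func (h *dedupHandler) Handle(ctx context.Context, r slog.Record) error {
	h.state.mtx.Lock()
	last, ok := h.state.seen[r.Message]
	if ok && time.Since(last) < h.window {
		h.state.mtx.Unlock()
		return nil // drop the duplicate within the window
	}
	h.state.seen[r.Message] = time.Now()
	h.state.mtx.Unlock()
	return h.next.Handle(ctx, r)
}

func (h *dedupHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &dedupHandler{next: h.next.WithAttrs(attrs), window: h.window, state: h.state}
}

func (h *dedupHandler) WithGroup(name string) slog.Handler {
	return &dedupHandler{next: h.next.WithGroup(name), window: h.window, state: h.state}
}

func main() {
	var logger *slog.Logger
	// The nil-guard pattern the commit mentions: fall back to a default logger
	// when a caller did not provide one.
	if logger == nil {
		logger = slog.New(newDedupHandler(slog.NewTextHandler(os.Stderr, nil), time.Minute))
	}
	logger.With("component", "tsdb").Info("repeated message")
	logger.With("component", "tsdb").Info("repeated message") // dropped by the handler
}
```
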
Bryan Boreham cde42f30e9 TSDB: streamline reading of overlapping head chunks
`getOOOSeriesChunks` was already finding sets of overlapping chunks; we
store those in a `multiMeta` struct so that `ChunkOrIterable` can
reconstruct an `Iterable` easily and predictably.

We no longer need a `MergeOOO` flag to indicate that this Meta should
be merged with other ones; this is explicit in the `multiMeta` structure.

We also no longer need `chunkMetaAndChunkDiskMapperRef`.

Add `wrapOOOHeadChunk` to defeat `chunkenc.Pool` - chunks are reset
during compaction, but if we wrap them (like `safeHeadChunk` was doing)
then this is skipped.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-29 10:57:29 +01:00
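
A hedged sketch of the grouping idea behind `multiMeta`: bundle time-overlapping chunk metadata together so a later read can rebuild a merged iterable without re-deriving the overlaps. The `meta`/`multiMeta` types and field names below are invented for illustration, not the actual TSDB structs:

```go
package main

import "fmt"

// meta is a stand-in for a chunk's metadata: its time range and a reference
// used to load the data later. Purely illustrative.
type meta struct {
	ref              uint64
	minTime, maxTime int64
}

// multiMeta groups a set of time-overlapping chunk metas, so that a reader
// can later merge their samples into a single iterable without re-deriving
// which chunks overlap.
type multiMeta struct {
	metas []meta
}

// groupOverlapping walks metas sorted by minTime and bundles runs of chunks
// whose time ranges overlap into one multiMeta each.
func groupOverlapping(sorted []meta) []multiMeta {
	var groups []multiMeta
	for _, m := range sorted {
		n := len(groups)
		if n > 0 && m.minTime <= maxOf(groups[n-1].metas) {
			groups[n-1].metas = append(groups[n-1].metas, m)
			continue
		}
		groups = append(groups, multiMeta{metas: []meta{m}})
	}
	return groups
}

func maxOf(ms []meta) int64 {
	max := ms[0].maxTime
	for _, m := range ms[1:] {
		if m.maxTime > max {
			max = m.maxTime
		}
	}
	return max
}

func main() {
	metas := []meta{{1, 0, 100}, {2, 50, 150}, {3, 200, 300}}
	fmt.Println(groupOverlapping(metas)) // two groups: chunks 1 and 2 overlap, chunk 3 stands alone
}
```
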
Bryan Boreham 838e49e7b8 [REFACTOR] TSDB: move chunkFromSeries from headChunkReader to head
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-29 10:51:48 +01:00
Björn Rabenstein 1d6e0071b7
Merge pull request #14751 from riskrole/main
chore: fix some comments
2024-08-28 16:38:39 +02:00
riskrole 406bf775aa chore: fix some comments
Signed-off-by: riskrole <yuhang@before.tech>
2024-08-28 11:26:57 +08:00
Marco Pracucci ef649d5968
Revert " Store mmMaxTime in same field as seriesShard"
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-08-26 08:56:16 +02:00
Bryan Boreham 9a74d53935
[BUGFIX] TSDB: Fix query overlapping in-order and ooo head (#14693)
* tsdb: Unit test query overlapping in order and ooo head

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>

* TSDB: Merge overlapping head chunk

The basic idea is that getOOOSeriesChunks can populate Meta.Chunk, but since
it only returns one Meta per overlapping time-slot, that pointer may end up in a
Meta with a head-chunk ID. So we need HeadAndOOOChunkReader.ChunkOrIterable()
to call mergedChunks in that case.

Previously, mergedChunks was checking that meta.Ref was a valid OOO chunk reference,
but it never actually uses that reference; it just finds all chunks overlapping in time.
So we can delete that code.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2024-08-21 14:24:20 +01:00
Bryan Boreham 9135da1e4f TSDB: Review feedback
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Re-enable check in `createHeadWithOOOSamples` which wasn't really broken.
* Move code making `Block` into a `Queryable` into test file.
* Make `getSeriesChunks` return a slice (renamed `appendSeriesChunks`).
* Rename `oooMergedChunks` to `mergedChunks`.
* Improve comment on `ChunkOrIterableWithCopy`.
* Name return values from `unpackHeadChunkRef`.

Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-14 13:41:44 +01:00
Bryan Boreham e04d137649 [PERF] TSDB: Query head and ooo-head together
Add `HeadAndOOOQuerier` which iterates just once over series, then
where necessary merges chunks from in-order and out-of-order lists.

Add a ChunkQuerier for in-order and ooo together

Add copy-last-chunk behaviour to HeadAndOOOChunkReader

Out-of-order chunk IDs are distinguished from in-order by setting bit 23.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-14 11:19:02 +01:00
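
The bit-23 scheme mentioned above amounts to a simple flag in the chunk ID. A small sketch with hypothetical helper names (the real TSDB chunk references pack more information than shown here):

```go
package main

import "fmt"

// oooBit marks a chunk ID as referring to an out-of-order chunk. In-order head
// chunk IDs are assumed to stay below 2^23, so bit 23 is free to use as a flag.
const oooBit = 1 << 23

func markOOO(id uint32) uint32  { return id | oooBit }
func isOOO(id uint32) bool      { return id&oooBit != 0 }
func stripOOO(id uint32) uint32 { return id &^ oooBit }

func main() {
	id := uint32(42)
	ooo := markOOO(id)
	fmt.Println(isOOO(id), isOOO(ooo), stripOOO(ooo)) // false true 42
}
```
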
Bryan Boreham 7e24844d08 Refactor: extract headChunkReader.chunkFromSeries()
For when you have a series locked already.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-14 11:19:02 +01:00
Bryan Boreham c75c8f8329 Refactoring: extract getSeriesChunks
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-14 11:19:02 +01:00
Bryan Boreham 80adc5baf4 Merge remote-tracking branch 'origin/main' into merge-2.54-to-main 2024-08-06 09:19:55 +01:00
Bryan Boreham 015638c4b6 [BUGFIX] TSDB: Exclude OOO chunks mapped after compaction starts
Otherwise the writer can end up with invalid chunks.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-05 10:35:34 +01:00
Oleg Zaytsev d8e1b6bdfd
Store mmMaxTime in same field as seriesShard
We don't use seriesShard during DB initialization, so we can use the
same 8 bytes to store mmMaxTime, and save those during the rest of the
lifetime of the database.

This doesn't affect CPU performance.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2024-07-30 10:20:29 +02:00
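
A hedged illustration of the field reuse described above: one 8-byte field holds mmMaxTime while the DB is initialising and the shard index afterwards. The `series` type and its accessors are invented, not the real memSeries:

```go
package main

import "fmt"

// series reuses a single 8-byte field: during initialization it holds the
// maximum timestamp of memory-mapped chunks (mmMaxTime); once the DB is
// running it holds the shard index instead, saving 8 bytes per series.
type series struct {
	shardOrMMMaxTime uint64
}

func (s *series) setMMMaxTime(t int64)  { s.shardOrMMMaxTime = uint64(t) }
func (s *series) mmMaxTime() int64      { return int64(s.shardOrMMMaxTime) }
func (s *series) setShard(shard uint64) { s.shardOrMMMaxTime = shard }
func (s *series) shard() uint64         { return s.shardOrMMMaxTime }

func main() {
	s := &series{}
	s.setMMMaxTime(1_700_000_000_000) // during initialization/replay
	fmt.Println(s.mmMaxTime())
	s.setShard(7) // after initialization, the same field stores the shard
	fmt.Println(s.shard())
}
```
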
Bryan Boreham 709c5d6fc3
TSDB: Lock around access to labels in head under -tags dedupelabels (#14322)
* TSDB: Document what needs locking in memSeries

* TSDB: Lock around access to series labels

So we can modify them to reset the symbol-table.

* TSDB: Make label locking conditional on build tag

---------

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-07-05 10:11:32 +01:00
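
The build-tag gating in the last bullet can be sketched as two alternative files: a real mutex when built with `-tags dedupelabels`, a no-op otherwise. File names, package name, and the `labelsLock` type are hypothetical:

```go
// --- file: labels_lock_dedup.go ---

//go:build dedupelabels

package memdemo

import "sync"

// labelsLock is a real mutex when built with -tags dedupelabels, because
// resetting the symbol table may rewrite labels concurrently with readers.
type labelsLock struct{ mtx sync.Mutex }

func (l *labelsLock) Lock()   { l.mtx.Lock() }
func (l *labelsLock) Unlock() { l.mtx.Unlock() }

// --- file: labels_lock_nodedup.go ---

//go:build !dedupelabels

package memdemo

// labelsLock compiles to a no-op without the tag, so the default build pays
// no locking cost on the label access path.
type labelsLock struct{}

func (l *labelsLock) Lock()   {}
func (l *labelsLock) Unlock() {}
```
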
Bryan Boreham 134e8dc7af
TSDB: Simplify OOO Select by copying the head chunk (#14396)
Instead of carrying around extra fields in `Meta` structs which let us
approximate what was in the chunk at the time, take a copy of the chunk.

This simplifies lots of code, and lets us correct a couple of tests which
were embedding the wrong answer.

We can also remove boundedIterator, which was only used to constrain
the OOO head chunk.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-07-03 15:08:07 +01:00
Oleg Zaytsev 64a9abb8be
Change LabelValuesFor() to accept index.Postings (#14280)
The only call we have to LabelValuesFor() has an index.Postings, and we
expand it to pass to this method, which will iterate over the values.

That's a waste of resources: we can iterate on the index.Postings
directly.

If there's any downstream implementation that has a slice of series,
they can always do an index.ListPostings from them: doing that is
cheaper than expanding an abstract index.Postings.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2024-06-11 15:36:46 +02:00
Oleg Zaytsev d0d361da53
headIndexReader.LabelNamesFor: skip not found series
It's quite common during the compaction cycle to hold series IDs for
series that aren't in the TSDB head anymore.

We shouldn't fail if that happens, as the caller has no way to figure
out which one of the IDs doesn't exist.

Fixes https://github.com/prometheus/prometheus/issues/14278

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2024-06-07 16:09:53 +02:00
Arve Knudsen 5c4310aa37
[ENHANCEMENT] TSDB: Optimize querying with regexp matchers
Add method `PostingsForLabelMatching` to `tsdb.IndexReader`, to obtain postings for labels with a certain name and values accepted by a provided callback, and use it from `tsdb.PostingsForMatchers`.
The intention is to optimize regexp matcher paths, especially not having to load all label values before matching on them.

Plus tests, and refactor some `tsdb/index.Reader` methods.

Benchmarking shows memory reduction up to ~100%, and speedup of up to ~50%.

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2024-05-09 10:55:30 +01:00
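
A toy sketch of the callback shape this adds (illustrative names, not the real `PostingsForLabelMatching` implementation): postings are gathered for every label value the matcher accepts, without first materialising the full value list for the caller:

```go
package main

import (
	"fmt"
	"regexp"
	"slices"
)

// idx maps label name -> value -> series references. A toy stand-in for an
// inverted index; the real IndexReader works on sorted postings lists.
type idx map[string]map[string][]uint64

// postingsForLabelMatching gathers postings for every value of `name` accepted
// by `match`, without handing the caller the full value list first.
func (ix idx) postingsForLabelMatching(name string, match func(value string) bool) []uint64 {
	var refs []uint64
	for value, p := range ix[name] {
		if match(value) {
			refs = append(refs, p...)
		}
	}
	slices.Sort(refs) // keep output deterministic for the demo
	return refs
}

func main() {
	ix := idx{"job": {"api-1": {1, 2}, "api-2": {3}, "web": {4}}}
	re := regexp.MustCompile("^api-.*$")
	fmt.Println(ix.postingsForLabelMatching("job", re.MatchString)) // [1 2 3]
}
```
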
Nicolas Takashi 8125634086
[refactor] moving mergedOOOChunks Iterator (#13881)
Signed-off-by: Nicolas Takashi <nicolas.tcs@hotmail.com>
2024-04-03 10:14:34 +02:00
Nicolas Takashi 0b762db154
[refactor] moving mergedOOOChunks to ooo_head_read
Signed-off-by: Nicolas Takashi <nicolas.tcs@hotmail.com>
2024-03-29 23:33:15 +00:00
machine424 f477e0539a
Move from golang.org/x/exp/slices into slices now that we only support Go >= 1.21
Prevent adding back golang.org/x/exp/slices.

Signed-off-by: machine424 <ayoubmrini424@gmail.com>
2024-02-28 14:54:53 +01:00
Marco Pracucci 501bc6419e
Add ShardedPostings() support to TSDB (#10421)
This PR is a reference implementation of the proposal described in #10420.

In addition to what is described in #10420, in this PR I've introduced labels.StableHash(). The idea is to offer a hashing function which doesn't change over time, and that's used by query sharding in order to get stable behaviour over time. The implementation of labels.StableHash() is the hashing function used by Prometheus before stringlabels, and what's used by Grafana Mimir for query sharding (because it was built before stringlabels was a thing).

Follow up work
As mentioned in #10420, if this PR is accepted I'm also open to uploading another fundamental piece used by Grafana Mimir query sharding to accelerate query execution: an optional, configurable and fast in-memory cache for the series hashes.

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-01-29 11:57:27 +00:00
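
A hedged sketch of how a stable hash drives query sharding: any hash works as long as it never changes between versions, since the shard computed by the caller must match the one TSDB computes. `stableHash` below is FNV-1a for illustration only, not the actual labels.StableHash():

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// stableHash hashes a label set in a fixed key order. The important property
// for query sharding is stability across releases, not the specific function.
func stableHash(labelSet map[string]string, keys []string) uint64 {
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(labelSet[k]))
		h.Write([]byte{0})
	}
	return h.Sum64()
}

// inShard reports whether a series belongs to shard shardIndex out of
// shardCount — the test a ShardedPostings()-style filter would apply.
func inShard(hash, shardIndex, shardCount uint64) bool {
	return hash%shardCount == shardIndex
}

func main() {
	series := map[string]string{"__name__": "up", "job": "api"}
	h := stableHash(series, []string{"__name__", "job"})
	fmt.Println(inShard(h, 0, 4), inShard(h, h%4, 4)) // the second check is always true
}
```
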
Bryan Boreham 3f30ad3cc2
Merge pull request #13015 from bboreham/smaller-txring
tsdb: make transaction isolation data structures smaller
2024-01-25 10:48:15 +00:00
Matthieu MOREL 8f6cf3aabb tsdb: use Go standard errors
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-12-11 12:18:54 +00:00
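
With standard errors, wrapping moves from pkg/errors helpers to `fmt.Errorf` with `%w`, and checks use `errors.Is`/`errors.As`. A minimal example (the sentinel error is invented):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound is a sentinel error; with standard errors it is wrapped via
// fmt.Errorf("...: %w", err) instead of pkg/errors.Wrap.
var errNotFound = errors.New("series not found")

func lookup(ref uint64) error {
	return fmt.Errorf("lookup ref %d: %w", ref, errNotFound)
}

func main() {
	err := lookup(42)
	fmt.Println(errors.Is(err, errNotFound)) // true: unwrapping walks the %w chain
	fmt.Println(err)
}
```
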
Fiona Liao 5bee0cfce2
Change ChunkReader.Chunk() to ChunkOrIterable()
The ChunkReader interface's Chunk() has been changed to ChunkOrIterable(). 

This is a precursor to OOO native histogram support - with OOO native histograms, the chunks.Meta passed to Chunk() can result in multiple chunks being returned rather than just a single chunk (e.g. if oooMergedChunk has a counter reset in the middle). 

To support this, ChunkOrIterable() requires either a single chunk or an iterable to be returned. If an iterable is returned, the caller has the responsibility of converting the samples from the iterable into possibly multiple chunks. The OOOHeadChunkReader now returns an iterable rather than a chunk to prepare for the native histograms case. Also, as a beneficial side effect, oooMergedChunk and boundedChunk have been simplified as they only need to implement the Iterable interface now, not the full Chunk interface.

---------

Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
2023-11-28 11:14:29 +01:00
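
The either-or return can be sketched as follows; the `chunk`/`iterable` types are reduced stand-ins for chunkenc.Chunk and chunkenc.Iterable, and the method shape is illustrative rather than the real ChunkReader signature:

```go
package main

import "fmt"

// chunk and iterable are stand-ins; exactly one of the two return values of
// chunkOrIterable below is non-nil, mirroring the ChunkOrIterable() idea.
type chunk interface{ NumSamples() int }
type iterable interface{ Iterator() []float64 }

type singleChunk struct{ n int }

func (c singleChunk) NumSamples() int { return c.n }

type mergedSamples struct{ samples []float64 }

func (m mergedSamples) Iterator() []float64 { return m.samples }

// chunkOrIterable returns a plain chunk when the meta maps to one physical
// chunk, or an iterable when overlapping chunks must be merged and the caller
// has to re-chunk the samples itself.
func chunkOrIterable(overlapping bool) (chunk, iterable, error) {
	if overlapping {
		return nil, mergedSamples{samples: []float64{1, 2, 3}}, nil
	}
	return singleChunk{n: 120}, nil, nil
}

func main() {
	c, it, _ := chunkOrIterable(false)
	fmt.Println(c.NumSamples(), it == nil) // 120 true
	c, it, _ = chunkOrIterable(true)
	fmt.Println(c == nil, len(it.Iterator())) // true 3
}
```
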
Oleksandr Redko fa90ca46e5 ci(lint): enable godot; append dot at the end of comments
Signed-off-by: Oleksandr Redko <Oleksandr_Redko@epam.com>
2023-10-31 19:53:38 +02:00
Bryan Boreham 6fe8217ce4 tsdb: shrink txRing with smaller integers
4 billion active transactions ought to be enough for anyone.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-10-21 12:44:34 +00:00
Goutham Veeramachaneni 86729d4d7b
Update exp package (#12650) 2023-09-21 22:53:51 +02:00
Arve Knudsen 156222cc50
Add context argument to LabelQuerier.LabelValues (#12665)
Add context argument to LabelQuerier.LabelValues and
LabelQuerier.SortedLabelValues.

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-14 16:02:04 +02:00
Arve Knudsen a964349e97
Add context argument to LabelQuerier.LabelNames (#12666)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-14 10:39:51 +02:00
Arve Knudsen 4451ba10b4
Add context argument to IndexReader.Postings (#12667)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-13 17:45:06 +02:00
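
The three commits above thread a context through the querier interfaces so long index scans can be cancelled. A reduced sketch (the interface below is a stand-in, not the real storage.LabelQuerier):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// labelQuerier shows the shape of the change: LabelValues (and friends) now
// take a context so long-running index scans can be cancelled.
type labelQuerier interface {
	LabelValues(ctx context.Context, name string) ([]string, error)
}

type demoQuerier struct{ values map[string][]string }

func (q demoQuerier) LabelValues(ctx context.Context, name string) ([]string, error) {
	var out []string
	for _, v := range q.values[name] {
		// Honour cancellation between units of work.
		if err := ctx.Err(); err != nil {
			return nil, err
		}
		out = append(out, v)
	}
	return out, nil
}

func main() {
	q := demoQuerier{values: map[string][]string{"job": {"api", "web"}}}

	vals, err := q.LabelValues(context.Background(), "job")
	fmt.Println(vals, err)

	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	_, err = q.LabelValues(ctx, "job")
	fmt.Println(errors.Is(err, context.Canceled)) // true
}
```
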
Łukasz Mierzwa 3c80963e81
Use a linked list for memSeries.headChunk (#11818)
Currently memSeries holds a single head chunk in-memory and a slice of mmapped chunks.
When append() is called on memSeries it might decide that a new headChunk is needed for the given append() call.
If that happens it will first mmap the existing head chunk, and only once that is done will it create a new empty headChunk and continue appending
our sample to it.

Since appending samples takes a write lock on memSeries, no other read or write can happen until the append is completed.
When an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself uses a lock as it needs to be serialised, which means that the more chunks we have to mmap, the longer each chunk might wait
to be mmapped.
If there are enough chunks that require mmapping, some memSeries will be locked long enough that it starts affecting
queries and scrapes.
Queries might time out, since by default they have a 2 minute timeout set.
Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
or calls using rate() and such, since the time range requested in the query might have too few samples to calculate anything.

To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change makes memSeries.headChunk a linked list; memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, yet-to-be-mmapped chunks are linked to it.
Mmapping is done on a schedule by iterating all memSeries one by one. Thanks to this we control when mmapping is done, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-07-31 11:10:24 +02:00
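
A compact sketch of the linked-list layout described above, with invented names and a tiny chunk size so the demo stays short: append() only cuts and links chunks, while a separate pass later "mmaps" everything behind the open head:

```go
package main

import "fmt"

// headChunk is a sketch of the linked-list layout: the newest ("open") chunk
// is the list head and receives appends; older chunks that still await
// mmapping hang off prev. Names are illustrative, not the real memSeries code.
type headChunk struct {
	samples []float64
	prev    *headChunk // older chunk not yet mmapped, nil if none
}

type memSeries struct{ head *headChunk }

// cutNewHeadChunk links the current head behind a fresh chunk instead of
// mmapping it inline, so append() never blocks on mmap.
func (s *memSeries) cutNewHeadChunk() {
	s.head = &headChunk{prev: s.head}
}

func (s *memSeries) append(v float64) {
	if s.head == nil || len(s.head.samples) >= 3 { // tiny chunk size for the demo
		s.cutNewHeadChunk()
	}
	s.head.samples = append(s.head.samples, v)
}

// mmapOldChunks is the scheduled pass: it "mmaps" (here: just counts) every
// chunk behind the open head and detaches them from the list.
func (s *memSeries) mmapOldChunks() int {
	n := 0
	for c := s.head.prev; c != nil; c = c.prev {
		n++
	}
	s.head.prev = nil
	return n
}

func main() {
	s := &memSeries{}
	for i := 0; i < 10; i++ {
		s.append(float64(i))
	}
	fmt.Println("chunks mmapped by the background pass:", s.mmapOldChunks()) // 3
}
```
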
Julien Pivotto bf5bf1a4b3 TSDB: Remove unused import of sort
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-07-11 14:29:31 +02:00
Julien Pivotto 0f85e4f41d
Merge pull request #12539 from bboreham/slices-sorts
Replace sort.Slice with faster slices.SortFunc
2023-07-11 13:09:02 +02:00
Bryan Boreham ce153e3fff Replace sort.Sort with faster slices.SortFunc
The generic version is more efficient.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-07-10 09:43:45 +00:00
Bryan Boreham 5255bf06ad Replace sort.Slice with faster slices.SortFunc
The generic version is more efficient.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-07-02 22:17:08 +00:00
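
The two commits above swap reflection-based sorting for the generic version. A minimal before/after, using the stdlib `slices` and `cmp` packages (Go 1.21+; the original commits used golang.org/x/exp/slices):

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
	"sort"
)

type series struct {
	name string
	ref  uint64
}

func main() {
	a := []series{{"up", 3}, {"go_goroutines", 1}, {"http_requests_total", 2}}

	// Reflection-based version being replaced:
	sort.Slice(a, func(i, j int) bool { return a[i].name < a[j].name })

	// Generic version: no interface boxing, no reflection, so it sorts faster.
	slices.SortFunc(a, func(x, y series) int { return cmp.Compare(x.name, y.name) })

	fmt.Println(a)
}
```
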
George Krajcsovits 92d6980360
Fix populateWithDelChunkSeriesIterator and gauge histograms (#12330)
Use AppendableGauge to detect a corrupt chunk with gauge histograms.
Detect if the first sample is a gauge but the chunk is not set up to contain
gauge histograms.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
2023-05-19 10:24:06 +02:00
beorn7 5b53aa1108 style: Replace else if cascades with switch
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.

The exceptions that I have found in our codebase are just these two:

* The `if else` is followed by an additional statement before the next
  condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require tagging the `for`
  loop, which probably tips the balance.

Why are `switch` statements more readable?

For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.

I'm sure the aforementioned wise coders can list even more reasons.

In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:22:31 +02:00
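
For illustration, the same cascade written both ways — the tagless `switch` keeps every condition at the same indentation:

```go
package main

import "fmt"

// classify written first as an else-if cascade, then as the equivalent
// switch with no tag — the form the commit above standardises on.
func classifyIfElse(d float64) string {
	if d < 0 {
		return "negative"
	} else if d == 0 {
		return "zero"
	} else if d < 1 {
		return "fraction"
	} else {
		return "large"
	}
}

func classifySwitch(d float64) string {
	switch {
	case d < 0:
		return "negative"
	case d == 0:
		return "zero"
	case d < 1:
		return "fraction"
	default:
		return "large"
	}
}

func main() {
	for _, d := range []float64{-1, 0, 0.5, 3} {
		fmt.Println(classifyIfElse(d), classifySwitch(d))
	}
}
```
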
Ganesh Vernekar 1b7d973f14
tsdb: Fix a comment in tsdb/head_read.go
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-03-21 15:15:36 +05:30
Ganesh Vernekar 0c0c2af7f5
Do not re-encode head chunk in ChunkQuerier
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-03-15 17:58:01 +05:30
Ganesh Vernekar d504c950a2
Remove unnecessary chunk fetch in Head queries
`safeChunk` is only obtained from the `headChunkReader.Chunk` call where
the chunk is already fetched and stored with the `safeChunk`. So, when
getting the iterator for the `safeChunk`, we don't need to get the chunk again.

Also removed a couple of unnecessary fields from `safeChunk` as a part of this.

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-02-22 12:21:12 +05:30
Ganesh Vernekar cb2be6e62f
Merge pull request #11779 from codesome/memseries-ooo
tsdb: Only initialise out-of-order fields when required
2023-01-16 10:58:05 +05:30
Ganesh Vernekar 38fa151a7c
tsdb: Only initialise out-of-order fields when required
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-01-12 20:29:16 +05:30
Oleg Zaytsev de93a279a0
Shortcut postings for matchers when empty postings are selected (#11813)
* Add more benchmark cases
* Add shortcuts for empty postings

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2023-01-10 15:21:49 +05:30
Bryan Boreham 10b27dfb84 Simplify IndexReader.Series interface
Instead of passing in a `ScratchBuilder` and `Labels`, just pass the
builder and the caller can extract labels from it. In many cases the
caller didn't use the Labels value anyway.

Now in `Labels.ScratchBuilder` we need a slightly different API: one
to assign what will be the result, instead of overwriting some other
`Labels`. This is safer and easier to reason about.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00
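
A hedged sketch of the builder-only shape this commit describes: Series() fills the caller's builder, and the caller extracts labels only if it needs them. The `scratchBuilder` type below is a stand-in, not the real labels.ScratchBuilder:

```go
package main

import "fmt"

// scratchBuilder is a stand-in for labels.ScratchBuilder: it accumulates
// name/value pairs and hands out the finished label set on request.
type scratchBuilder struct{ pairs [][2]string }

func (b *scratchBuilder) Reset()                 { b.pairs = b.pairs[:0] }
func (b *scratchBuilder) Add(name, value string) { b.pairs = append(b.pairs, [2]string{name, value}) }
func (b *scratchBuilder) Labels() [][2]string    { return append([][2]string(nil), b.pairs...) }

// series fills the caller's builder instead of also returning a Labels value,
// mirroring the simplified IndexReader.Series shape described above: callers
// that only need the builder's contents never materialise Labels at all.
func series(ref uint64, builder *scratchBuilder) error {
	builder.Reset()
	builder.Add("__name__", "up")
	builder.Add("ref", fmt.Sprint(ref))
	return nil
}

func main() {
	var b scratchBuilder
	if err := series(7, &b); err != nil {
		panic(err)
	}
	fmt.Println(b.Labels()) // caller extracts labels from the builder when needed
}
```
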
Bryan Boreham 543c318ec2 Update package tsdb for new labels.Labels type
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00
Bryan Boreham d3d96ec887 tsdb/index: use ScratchBuilder to create Labels
This necessitates a change to the `tsdb.IndexReader` interface:
`index.Reader` is used from multiple goroutines concurrently, so we
can't have state in it.

We do retain a `ScratchBuilder` in `blockBaseSeriesSet` which is
iterator-like.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00
Bryan Boreham 463f5cafdd storage: re-use iterators to save garbage
Re-use previous memory if it is already of the correct type.

In `NewListSeries` we hoist the conversion to an interface value out
so it only allocates once.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-15 18:32:45 +00:00
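
The reuse pattern can be sketched like this: the iterator constructor accepts the previous iterator and reinitialises it in place when the concrete type matches, so iterating many chunks allocates only once. Names are illustrative:

```go
package main

import "fmt"

// floatIterator is a toy sample iterator; the real chunk iterators carry
// timestamps, value types, and errors.
type floatIterator struct {
	samples []float64
	i       int
}

func (it *floatIterator) Next() bool  { it.i++; return it.i <= len(it.samples) }
func (it *floatIterator) At() float64 { return it.samples[it.i-1] }

type iterator interface {
	Next() bool
	At() float64
}

func newIterator(samples []float64, reuse iterator) iterator {
	// Re-use previous memory if it is already of the correct type.
	if it, ok := reuse.(*floatIterator); ok {
		it.samples, it.i = samples, 0
		return it
	}
	return &floatIterator{samples: samples}
}

func main() {
	var it iterator
	for _, chunk := range [][]float64{{1, 2}, {3, 4}} {
		it = newIterator(chunk, it) // the second call reuses the first iterator
		for it.Next() {
			fmt.Println(it.At())
		}
	}
}
```
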
Jesus Vazquez 3362bf6d79
Fix merge conflicts
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-10-11 22:53:37 +05:30