Commit graph

177 commits

Author SHA1 Message Date
Jeanette Tan 6341ba7374 Merge remote-tracking branch 'upstream/main' into sync-upstream-20231026 2023-10-26 22:18:24 +08:00
Márcio Carôso dff1c395f6
Expose --storage.tsdb.retention.time in metric prometheus_tsdb_retention_limit_seconds (#12986)
* Expose --storage.tsdb.retention.time in a metric

Signed-off-by: Marcio Caroso <msscaroso@gmail.com>

---------

Signed-off-by: Marcio Caroso <msscaroso@gmail.com>
2023-10-24 13:34:42 +02:00
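Editor's note: a minimal sketch of how a flag value such as --storage.tsdb.retention.time could be surfaced as the prometheus_tsdb_retention_limit_seconds gauge with client_golang. The registry wiring, help text, and flag plumbing here are assumptions for illustration, not the actual change in #12986.

```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Hypothetical retention value as it would arrive from the
	// --storage.tsdb.retention.time flag; flag parsing is omitted.
	retention := 15 * 24 * time.Hour

	retentionLimitSeconds := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "prometheus_tsdb_retention_limit_seconds",
		Help: "Configured maximum retention time of samples, in seconds (illustrative help text).",
	})
	prometheus.MustRegister(retentionLimitSeconds)

	// Export the configured duration in seconds, mirroring the commit above.
	retentionLimitSeconds.Set(retention.Seconds())

	fmt.Println("retention limit (s):", retention.Seconds())
}
```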
Arve Knudsen a889bf6ad2 DB.UnorderedChunkQuerier: Remove unused ctx argument
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-10-18 18:23:32 +02:00
Jeanette Tan f898005c69 Merge remote-tracking branch 'upstream/main' into sync-upstream-20231018 2023-10-18 11:43:51 +08:00
George Krajcsovits 7d7b9eacff
Fix int32 overflow issues (#12978)
On a 32-bit architecture the size of int is 32 bits, so converting from
int64 or uint64 can overflow it and flip the sign.

Try for yourself in playground:
package main

import "fmt"

func main() {
	x := int64(0x1F0000001)
	y := int64(1)
	z := int32(x - y) // numerically this is 0x1F0000000
	fmt.Printf("%v\n", z)
}

This prints -268435456, as if x were smaller than y.

Followup to #12650

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-10-16 16:23:26 +02:00
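Editor's note: a hedged sketch of the kind of fix such an overflow calls for — keep the arithmetic in int64 and bounds-check before narrowing, instead of casting the difference straight to int32. The helper name is illustrative, not the function used in #12978.

```go
package main

import (
	"fmt"
	"math"
)

// toInt32 narrows an int64 to int32 only when it fits; otherwise it
// reports failure instead of silently flipping the sign.
func toInt32(v int64) (int32, bool) {
	if v < math.MinInt32 || v > math.MaxInt32 {
		return 0, false
	}
	return int32(v), true
}

func main() {
	x := int64(0x1F0000001)
	y := int64(1)

	if z, ok := toInt32(x - y); ok {
		fmt.Println("fits in int32:", z)
	} else {
		fmt.Println("difference overflows int32, keep it as int64:", x-y)
	}
}
```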
Ganesh Vernekar f5913266a1
Additionally wrap WBL replay error (#12406)
* Additionally wrap WBL replay error

Although WBL replay is already wrapped with errLoadWbl,
there are other errors that can happen during a WBL replay.
We should not try to repair the WAL in those cases.

This commit additionally wraps the final error in Head.Init again
with errLoadWbl so that WBL replay errors can be identified properly.

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvzpg@gmail.com>
2023-10-13 14:21:35 +02:00
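Editor's note: a small sketch of the wrapping pattern the commit above describes — every error coming out of the replay path is wrapped with a sentinel so the caller can distinguish "WBL replay failed" from "WAL is corrupt, try repair" via errors.Is. Apart from the errLoadWbl name, the functions here are placeholders, not the actual Head.Init code.

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel that marks any failure originating from WBL replay.
var errLoadWbl = errors.New("loading WBL")

// replayWBL stands in for the real replay; here it simply fails.
func replayWBL() error {
	return errors.New("corrupt record at segment 3")
}

// initHead wraps *any* replay error with errLoadWbl, so callers never
// mistake a WBL problem for WAL corruption they should try to repair.
// (Multiple %w verbs require Go 1.20+.)
func initHead() error {
	if err := replayWBL(); err != nil {
		return fmt.Errorf("%w: %w", errLoadWbl, err)
	}
	return nil
}

func main() {
	err := initHead()
	if errors.Is(err, errLoadWbl) {
		fmt.Println("WBL replay failed, not attempting WAL repair:", err)
		return
	}
	fmt.Println("ok")
}
```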
Arve Knudsen 26d07ee8d3 tsdb: Avoid potential overflow in SortFunc
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-10-10 09:49:10 +02:00
Arve Knudsen 35ab75918a Merge remote-tracking branch 'prometheus/main' into arve/upgrade-exp
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-10-06 16:11:40 +02:00
Marco Pracucci 3c68ce252e
Add PostingsForMatchers cache size by bytes support
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2023-09-27 15:25:39 +02:00
Goutham Veeramachaneni 86729d4d7b
Update exp package (#12650) 2023-09-21 22:53:51 +02:00
Arve Knudsen e48d4e5835 Merge remote-tracking branch 'prometheus/main' into chore/sync-prometheus
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-18 09:29:42 +02:00
Arve Knudsen 4451ba10b4
Add context argument to IndexReader.Postings (#12667)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-13 17:45:06 +02:00
Arve Knudsen 6ef9ed0bc3
Add context argument to DB.Delete (#12834)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-13 15:43:06 +02:00
Arve Knudsen 6daee89e5f
Add context argument to Querier.Select (#12660)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-12 12:37:38 +02:00
Dimitar Dimitrov 77ac7ad40a
Merge remote-tracking branch 'upstream/main' into dimitar/pull-upstream 2023-09-05 16:19:00 +02:00
Bryan Boreham 0d283effa8 promql: force mmap of head chunks in BenchmarkRangeQuery
Otherwise we have a highly unusual situation of over 100 chunks
in the headChunks list of each series, which heavily skews
performance.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-26 09:40:59 +00:00
Julien Pivotto e3fabd5fdf
Merge pull request #12664 from prometheus/superq/cleanup_chunk_snapshots
Cleanup temporary chunk snapshot dirs
2023-08-08 13:02:39 +02:00
SuperQ 8d38d59fc5
Cleanup temporary chunk snapshot dirs
Similar to the cleanup of WAL files on startup, clean up temporary
chunk_snapshot dirs. This prevents storage space leaks from snapshots
that were terminated during shutdown.

Signed-off-by: SuperQ <superq@gmail.com>
2023-08-08 09:43:48 +02:00
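Editor's note: a rough sketch, under assumed naming, of the startup cleanup described above — scan the data directory and delete leftover chunk snapshot directories whose names mark them as temporary. The "chunk_snapshot.*.tmp" naming convention is an assumption for illustration.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// deleteTmpChunkSnapshots removes directories that look like aborted
// chunk snapshots, e.g. "chunk_snapshot.000123.000456.tmp" (assumed name).
func deleteTmpChunkSnapshots(dataDir string) error {
	entries, err := os.ReadDir(dataDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		name := e.Name()
		if strings.HasPrefix(name, "chunk_snapshot.") && strings.HasSuffix(name, ".tmp") {
			fmt.Println("removing leftover snapshot dir:", name)
			if err := os.RemoveAll(filepath.Join(dataDir, name)); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	if err := deleteTmpChunkSnapshots("./data"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
```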
Łukasz Mierzwa 3c80963e81
Use a linked list for memSeries.headChunk (#11818)
Currently memSeries holds a single head chunk in memory and a slice of mmapped chunks.
When append() is called on memSeries it might decide that a new headChunk is needed for the given append() call.
If that happens it will first mmap the existing head chunk, and only once that is done will it create a new empty headChunk and continue appending
our sample to it.

Since appending samples takes a write lock on memSeries, no other read or write can happen until the append completes.
When an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself uses a lock, as it needs to be serialised, which means that the more chunks there are to mmap, the longer each chunk might wait
to be mmapped.
If there are enough chunks requiring mmapping, some memSeries will be locked long enough to start affecting
queries and scrapes.
Queries might time out, since by default they have a two-minute timeout.
Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
and calls using rate() and the like, since the time range requested in the query might have too few samples to calculate anything.

To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, yet-to-be-mmapped chunks are linked to it.
Mmapping is done on a schedule by iterating all memSeries one by one. Thanks to this we control when mmapping happens, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-07-31 11:10:24 +02:00
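Editor's note: a simplified sketch of the data structure the commit above describes — the newest ("open") chunk is the list head, each element links to the next-older, not-yet-mmapped chunk, so cutting a new chunk is just a prepend and mmapping can happen later by walking the list. Types and field names are illustrative, not the memSeries layout itself.

```go
package main

import "fmt"

// headChunk is one element of the in-memory chunk list. next points at
// the next-older chunk still awaiting mmapping (nil when there is none).
type headChunk struct {
	minTime, maxTime int64
	samples          []float64
	next             *headChunk
}

type series struct {
	head *headChunk // newest, "open" chunk receiving appends
}

// cutNewHeadChunk prepends a fresh open chunk without mmapping anything,
// keeping the append path free of blocking work.
func (s *series) cutNewHeadChunk(minTime int64) {
	s.head = &headChunk{minTime: minTime, next: s.head}
}

// oldestPending walks to the tail, i.e. the chunk that should be
// mmapped first when the scheduled background pass runs.
func (s *series) oldestPending() *headChunk {
	c := s.head
	for c != nil && c.next != nil {
		c = c.next
	}
	return c
}

func main() {
	s := &series{}
	s.cutNewHeadChunk(0)
	s.head.samples = append(s.head.samples, 1, 2, 3)
	s.cutNewHeadChunk(120) // old chunk stays linked, to be mmapped later
	fmt.Println("oldest pending chunk starts at:", s.oldestPending().minTime)
}
```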
Jeanette Tan 8035c04624 Merge remote-tracking branch 'upstream/main'
Minor conflicts:
rules/manager.go
tsdb/compact.go
tsdb/db.go
go.mod
2023-07-19 21:40:27 +08:00
Justin Lei 32d87282ad
Add Zstandard compression option for wlog (#11666)
Snappy remains the default compression, but there is now a flag to switch
the compression algorithm.

Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-07-11 14:57:57 +02:00
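Editor's note: a sketch of switchable record compression using the snappy and klauspost zstd libraries, which is the general idea the commit above introduces for the wlog; the function and flag handling here are illustrative and not the wlog encoder itself.

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
	"github.com/klauspost/compress/zstd"
)

// compress applies the selected algorithm to one record. This is only a
// sketch of "switchable compression", not the actual wlog code path.
func compress(algo string, record []byte) ([]byte, error) {
	switch algo {
	case "zstd":
		enc, err := zstd.NewWriter(nil)
		if err != nil {
			return nil, err
		}
		defer enc.Close()
		return enc.EncodeAll(record, nil), nil
	default: // snappy stays the default, as in the commit above
		return snappy.Encode(nil, record), nil
	}
}

func main() {
	rec := []byte("some WAL record payload, repeated repeated repeated")
	for _, algo := range []string{"snappy", "zstd"} {
		out, err := compress(algo, rec)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d -> %d bytes\n", algo, len(rec), len(out))
	}
}
```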
Nidhey Nitin Indurkar e970f70ced Feat: Get block by id directly on promtool analyze & get latest block if ID not provided (#12031)
* feat: analyze latest block or block by ID in CLI (promtool)

Signed-off-by: nidhey27 <nidhey.indurkar@infracloud.io>

* address remarks

Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>

* address latest review comments

Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>

---------

Signed-off-by: nidhey27 <nidhey.indurkar@infracloud.io>
Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>
2023-07-04 13:39:01 +00:00
Bryan Boreham 5255bf06ad Replace sort.Slice with faster slices.SortFunc
The generic version is more efficient.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-07-02 22:17:08 +00:00
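Editor's note: a before/after sketch of the swap this commit makes. The comparison returns an explicit three-way result via cmp.Compare rather than a subtraction, which also sidesteps the overflow addressed by the "tsdb: Avoid potential overflow in SortFunc" commit above. Shown here with the Go 1.21 standard slices package; the repository used golang.org/x/exp/slices at the time.

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
	"sort"
)

type chunkMeta struct{ minTime int64 }

func main() {
	chunks := []chunkMeta{{30}, {10}, {20}}

	// Before: reflection-based sort.Slice with a less() closure.
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].minTime < chunks[j].minTime })

	// After: generic slices.SortFunc with an explicit three-way compare.
	// cmp.Compare avoids the "a - b" subtraction that can overflow.
	slices.SortFunc(chunks, func(a, b chunkMeta) int {
		return cmp.Compare(a.minTime, b.minTime)
	})

	fmt.Println(chunks)
}
```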
Nidhey Nitin Indurkar a8772a4178
Feat: Get block by id directly on promtool analyze & get latest block if ID not provided (#12031)
* feat: analyze latest block or block by ID in CLI (promtool)

Signed-off-by: nidhey27 <nidhey.indurkar@infracloud.io>

* address remarks

Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>

* address latest review comments

Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>

---------

Signed-off-by: nidhey27 <nidhey.indurkar@infracloud.io>
Signed-off-by: nidhey60@gmail.com <nidhey.indurkar@infracloud.io>
2023-06-01 17:13:09 +05:30
Jeanette Tan dd172440e5 Merge remote-tracking branch 'mine/fix-default-samples-per-chunk' into zenador/sync-upstream-22-may-2023 2023-05-24 19:32:04 +08:00
zenador 37e5249e33
Use DefaultSamplesPerChunk in tsdb (#12387)
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-05-24 13:00:21 +02:00
Jeanette Tan 02e113d03f Use DefaultSamplesPerChunk in tsdb
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-05-24 16:44:53 +08:00
Jeanette Tan 1be0816b46 Merge remote-tracking branch 'upstream/main' 2023-05-23 00:20:36 +08:00
Callum Styan 0d2108ad79
[tsdb] re-implement WAL watcher to read via a "notification" channel (#11949)
* WIP implement WAL watcher reading via notifications over a channel from
the TSDB code

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Notify via head appenders Commit (finished all WAL logging) rather than
on each WAL Log call

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix misspelled Notify plus add a metric for dropped Write notifications

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Update tests to handle new notification pattern

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* this test maybe needs more time on windows?

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* does this test need more time on windows as well?

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* read timeout is already a time.Duration

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* remove mistakenly committed benchmark data files

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* address some review feedback

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* fix missed changes from previous commit

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix issues from wrapper function

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* try fixing race condition in test by allowing tests to overwrite the
read ticker timeout instead of calling the Notify function

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* fix linting

Signed-off-by: Callum Styan <callumstyan@gmail.com>

---------

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2023-05-15 12:31:49 -07:00
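Editor's note: a minimal sketch of the notification pattern described in the commit above — Commit() does a non-blocking send on a one-slot channel and counts the notifications it had to drop, while the watcher wakes on either a notification or its read-timeout. The names and the plain int counter are illustrative, not the actual watcher API.

```go
package main

import (
	"fmt"
	"time"
)

type writeNotifier struct {
	ch      chan struct{}
	dropped int // would be a Prometheus counter in the real code
}

func newWriteNotifier() *writeNotifier {
	return &writeNotifier{ch: make(chan struct{}, 1)}
}

// notify is called once per head appender Commit (not per WAL Log call).
// The send never blocks: if a notification is already pending, this one
// is dropped and counted.
func (n *writeNotifier) notify() {
	select {
	case n.ch <- struct{}{}:
	default:
		n.dropped++
	}
}

func main() {
	n := newWriteNotifier()

	// Appender side: three commits in a row; only one notification fits
	// in the channel, the other two are dropped and counted.
	for i := 0; i < 3; i++ {
		n.notify()
	}

	// Watcher side: wake on a notification or fall back to a read timeout.
	select {
	case <-n.ch:
		fmt.Println("woken by commit notification; dropped:", n.dropped)
	case <-time.After(50 * time.Millisecond):
		fmt.Println("read timeout, polling WAL anyway")
	}
}
```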
Björn Rabenstein 37fe9b89dc
Merge pull request #12055 from leizor/leizor/prometheus/issues/12009
Adjust samplesPerChunk from 120 to 220
2023-05-10 14:45:12 +02:00
György Krajcsovits 65b8edbed4 Merge remote-tracking branch 'upstream/main' into sync-upstream-28-apr-2023 2023-04-28 18:04:02 +02:00
Jeanette Tan 0fccba0db9 Merge remote-tracking branch 'upstream/main' 2023-04-26 21:25:21 +08:00
cui fliter 276ca6a883 fix some comments
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-04-25 14:19:16 +08:00
Matthieu MOREL bae9a21200
Merge branch 'main' into linter/nilerr
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-19 19:56:39 +02:00
beorn7 5b53aa1108 style: Replace else if cascades with switch
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.

The exceptions that I have found in our codebase are just these two:

* The `if else` is followed by an additional statement before the next
  condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require tagging the `for`
  loop, which probably tips the balance.

Why are `switch` statements more readable?

For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.

I'm sure the aforementioned wise coders can list even more reasons.

In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:22:31 +02:00
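Editor's note: a small before/after illustration of the rewrite the commit above argues for; the example itself is made up, not taken from the codebase.

```go
package main

import "fmt"

func classifyElseIf(n int) string {
	if n < 0 {
		return "negative"
	} else if n == 0 {
		return "zero"
	} else if n < 10 {
		return "small"
	}
	return "large"
}

// Same logic as a switch: every condition sits at the same indentation,
// so the list of cases reads top to bottom without the "} else if" noise.
func classifySwitch(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n == 0:
		return "zero"
	case n < 10:
		return "small"
	default:
		return "large"
	}
}

func main() {
	for _, n := range []int{-1, 0, 5, 42} {
		fmt.Println(n, classifyElseIf(n), classifySwitch(n))
	}
}
```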
Justin Lei 052993414a Add storage.tsdb.samples-per-chunk flag
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-04-13 15:59:49 -07:00
Matthieu MOREL fb3eb21230 enable gocritic, unconvert and unused linters
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-13 19:20:22 +00:00
Marco Pracucci 74916ab06e
Merge pull request #457 from grafana/codesome/sync-prom
Sync with upstream Prometheus
2023-03-22 10:25:40 +01:00
Marco Pracucci 330e9b69af
Allow to configure compacted blocks postings for matchers cache
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2023-03-22 06:40:11 +01:00
Ganesh Vernekar 41649ceb1b
Merge remote-tracking branch 'upstream/main' into codesome/sync-prom
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-03-22 08:35:08 +05:30
Vernon Miller ca0abf26c5
Adds an affirmative log message for successful WAL repair (#12135)
* Adds an affirmative log message for successful WAL repair

Signed-off-by: Vernon Miller <vernon.miller@grafana.com>
Signed-off-by: Vernon Miller <96601789+aldernero@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-03-21 19:33:43 +05:30
Yuri Nikolic c7d730f549 Fixing conflicts with commit c9b85afd93 2023-03-08 17:27:44 +01:00
Đurica Yuri Nikolić c9b85afd93
Making the number of CPUs used for WAL replay configurable (#12066)
Adds `WALReplayConcurrency` as an option on tsdb `Options` and `HeadOptions`.
If it is unset or set to a value <= 0, then `GOMAXPROCS` is used, which matches the previous behaviour.

Signed-off-by: Yuri Nikolic <durica.nikolic@grafana.com>
2023-03-07 16:41:33 +00:00
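Editor's note: a sketch of the defaulting behaviour the commit above describes, with a placeholder options struct; only the `WALReplayConcurrency` field name and the `GOMAXPROCS` fallback come from the commit itself.

```go
package main

import (
	"fmt"
	"runtime"
)

type headOptions struct {
	// WALReplayConcurrency caps the number of goroutines used to replay
	// the WAL; <= 0 means "use GOMAXPROCS", preserving the old behaviour.
	WALReplayConcurrency int
}

func walReplayConcurrency(opts headOptions) int {
	if opts.WALReplayConcurrency > 0 {
		return opts.WALReplayConcurrency
	}
	return runtime.GOMAXPROCS(0)
}

func main() {
	fmt.Println("default:", walReplayConcurrency(headOptions{}))
	fmt.Println("explicit:", walReplayConcurrency(headOptions{WALReplayConcurrency: 4}))
}
```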
Marco Pracucci 950c177c72
Hardcode the labels stable hash function instead of taking it as an option
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2023-01-30 14:21:18 +01:00
Bryan Boreham 0bc8438f38 Rename WithCache functions as WithOptions
Where they now have 2 or more extra parameters.
2023-01-12 11:41:22 +00:00
Bryan Boreham 1aaabfee2d tsdb: make sharding function a parameter
Instead of relying on `labels.Hash()`, which may change, have the
caller pass in a shard function if required.

For most purposes `tsdb.Options.ShardFunc` is used, but the compactor
may be created independently so `NewLeveledCompactorWithChunkSize` also
takes a shard function parameter.

Regular Prometheus, which does not use block sharding, will have this
parameter as nil.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-01-12 11:41:22 +00:00
György Krajcsovits 103c4fd289 Merge remote-tracking branch 'upstream/main' into main
# Conflicts:
#	.github/workflows/ci.yml
#	tsdb/block.go
#	tsdb/compact.go
#	tsdb/compact_test.go
#	tsdb/head_read.go
#	tsdb/index/index.go
#	tsdb/ooo_head_read.go
#	tsdb/querier_test.go
2023-01-08 14:55:44 +01:00
Arve Knudsen 41dc53ea16 tsdb: Fix typo in comment
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2022-12-29 15:41:20 +01:00
Oleg Zaytsev d23859dee2
Allow forcing usage of PostingsForMatchersCache
When out-of-order is enabled, queries go through both Head and OOOHead,
and they both execute the same PostingsForMatchers call, as memSeries
are shared for both.

In some cases these calls can be heavy, and also frequent. We can
deduplicate those calls by using the PostingsForMatchers cache that we
already use for query sharding.

Using this cache can cause a newly appended series to be missing from the
results for the duration of the TTL.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2022-12-28 13:44:10 +01:00
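Editor's note: a toy sketch of the deduplication idea in the commit above — a cache keyed by the matchers' string form whose entries expire after a TTL, so Head and OOOHead issuing the same PostingsForMatchers call within the window share one result. The structure and names are illustrative; the real PostingsForMatchersCache also bounds its size and handles concurrency.

```go
package main

import (
	"fmt"
	"time"
)

type cacheEntry struct {
	postings []uint64 // stand-in for the real postings result
	created  time.Time
}

type postingsCache struct {
	ttl     time.Duration
	entries map[string]cacheEntry
	compute func(key string) []uint64
}

// get returns a cached result if it is younger than the TTL, otherwise it
// recomputes. A series appended after the entry was cached stays invisible
// for this key until the entry expires — the trade-off noted above.
func (c *postingsCache) get(key string) []uint64 {
	if e, ok := c.entries[key]; ok && time.Since(e.created) < c.ttl {
		return e.postings
	}
	p := c.compute(key)
	c.entries[key] = cacheEntry{postings: p, created: time.Now()}
	return p
}

func main() {
	calls := 0
	c := &postingsCache{
		ttl:     10 * time.Second,
		entries: map[string]cacheEntry{},
		compute: func(key string) []uint64 {
			calls++
			return []uint64{1, 2, 3}
		},
	}

	// Head and OOOHead asking for the same matchers: one computation.
	c.get(`{job="api"}`)
	c.get(`{job="api"}`)
	fmt.Println("expensive computations:", calls) // 1
}
```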
Bryan Boreham 543c318ec2 Update package tsdb for new labels.Labels type
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00