Commit graph

501 commits

Author SHA1 Message Date
Bryan Boreham 9b31adc4e8
tsdb: fix up sort call with faster slices.Sort (#11380)
This call was added by PR #11075, which was merged before #11318 changed all
similar `sort.Sort` calls to a faster one.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-10-01 12:55:40 -04:00
Bryan Boreham 3330d85ba8
Replace sort.Strings and sort.Ints with faster slices.Sort (#11318)
Use new experimental package `golang.org/x/exp/slices`.

slices.Sort works on values that are directly comparable, like ints,
so it avoids the overhead of an interface call to `.Less()`.

Tests are left unchanged, because they don't need the speed and they act as
a cross-check that slices.Sort gives the same answer.
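
For illustration (not the exact diff), the substitution looks like this, using the standard `golang.org/x/exp/slices` package:

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	values := []string{"job", "instance", "env"}
	// sort.Strings(values) would go through the sort.Interface machinery;
	// the generic slices.Sort compares the strings directly.
	slices.Sort(values)
	fmt.Println(values) // [env instance job]
}
```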

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-30 20:03:56 +05:30
Bryan Boreham 7f2374b703
tsdb: faster postings sort with generic slices.Sort (#11054)
Use new experimental package `golang.org/x/exp/slices`.

Some of the speedup comes from comparing SeriesRef (which is an int64)
directly rather than through an interface `.Less()` call; some comes
from exp/slices using "pattern-defeating quicksort (pdqsort)".
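
A hedged sketch of the idea (the SeriesRef type here stands in for the integer series reference stored in postings, not the actual tsdb code):

```go
package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

// SeriesRef stands in for the integer series reference stored in postings.
type SeriesRef uint64

func main() {
	refs := []SeriesRef{42, 7, 19}
	// slices.Sort compares the integer values directly (no interface Less call)
	// and uses pattern-defeating quicksort internally.
	slices.Sort(refs)
	fmt.Println(refs) // [7 19 42]
}
```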

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-30 20:01:32 +05:30
Ganesh Vernekar 83d738e263
Fix 'invalid magic number 0' bug (#11338)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-28 21:43:58 +05:30
Ganesh Vernekar f34aeefe6e
Allow overlapping blocks by default (#11331)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-28 19:17:54 +05:30
Robert Fratto 448cfda6c1
tsdb/agent: fix validation of default options (#9876)
* tsdb/agent: fix application of defaults

MaxTS was being incorrectly constrained to the truncation interval

* add more tests to check validation

* force MaxWALTime = MinWALTime if min > max

Signed-off-by: Robert Fratto <robertfratto@gmail.com>
2022-09-27 19:41:43 +05:30
Bryan Boreham d166da7b59
tsdb: stop saving a copy of last 4 samples in memSeries (#11296)
* TSDB chunks: remove race between writing and reading

Because the data is stored as a bit-stream, the last byte in the stream
could change if the stream is appended to after an Iterator is obtained.
Copy the last byte when the Iterator is created, so we don't have to
read it later.

Clarify in comments that concurrent Iterator and Appender are allowed,
but the chunk must not be modified while an Iterator is created.
(This was already the case, in order to copy the bstream slice header.)
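
A minimal sketch of the underlying idea, copying the shared tail byte at iterator creation so later appends can't race with the reader (the type names are illustrative, not the actual chunkenc types):

```go
package main

import "fmt"

// stream is an append-only bit-stream; the writer may still change its last byte.
type stream struct{ data []byte }

type iterator struct {
	data     []byte // all bytes except the last one, shared with the writer
	lastByte byte   // private copy of the byte that may still change
}

// newIterator copies the last byte so concurrent appends to the stream
// cannot change what this iterator will read.
func newIterator(s *stream) *iterator {
	n := len(s.data)
	return &iterator{data: s.data[:n-1], lastByte: s.data[n-1]}
}

func main() {
	s := &stream{data: []byte{0xAB, 0xCD}}
	it := newIterator(s)
	s.data[1] = 0xFF // the writer keeps appending bits into the last byte
	fmt.Printf("%x %x\n", it.data, it.lastByte) // ab cd: the iterator is unaffected
}
```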

* TSDB: stop saving last 4 samples in memSeries

This extra copy of the last 4 samples was introduced to avoid a race
condition between reading the last byte of the chunk and writing to it.

But now we have fixed that by having `bstreamReader` copy the last byte,
we don't need to copy the last 4 samples.

This change saves 56 bytes per series, which is very worthwhile when
you have millions or tens of millions of series.

* TSDB: tidy up stopIterator re-use

Previous changes have left this code duplicating some lines; pull
them out to a separate function and tidy up.

* TSDB head_test: stop checking when iterators are wrapped

The behaviour has changed so chunk iterators are only wrapped when
transaction isolation requires them to stop short of the end.
This makes tests fail which are checking the type.

Tests should check the observable behaviour, not the type.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-27 19:32:05 +05:30
Bryan Boreham ff00dee262
tsdb: turn off transaction isolation for head compaction (#11317)
* tsdb: add a basic test for read/write isolation

* tsdb: store the min time with isolationAppender
So that we can see when appending has moved past a certain point in time.

* tsdb: allow RangeHead to have isolation disabled
This will be used for head compaction.

* tsdb: do head compaction with isolation disabled
This saves a lot of work tracking appends done while compaction is ongoing.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-27 19:31:23 +05:30
Bryan Boreham d0607435a2
tsdb: remove chunkRange and oooCapMax from memSeries (#11288)
* tsdb: remove chunkRange from memSeries

chunkRange is the (oddly-named) configured duration for the head block.

We don't need a copy of this value per series. Pass it down where
required, and remove the copy.

The value in `Head` is only updated in `resetInMemoryState()`, which
also discards all `memSeries`.

* tsdb: remove oooCapMax from memSeries

oooCapMax is the configured maximum capacity for an out-of-order chunk.

Storing it per-series uses extra memory, and has surprising behaviour
if users change the value in config - series created before the change
will keep their old value.

Instead, pass it down where required, and remove the per-series value.
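
A hedged sketch of the pattern (field and function names are illustrative): the setting lives on the head, and is passed down when needed, instead of every series holding its own copy.

```go
package main

import "fmt"

type memSeries struct {
	// chunkRange and oooCapMax removed: they are head-level settings,
	// not per-series state.
}

type head struct {
	chunkRange int64 // configured duration of the head block
	oooCapMax  int64 // configured max capacity of an out-of-order chunk
}

// append passes the head-level settings down instead of reading them from a
// per-series copy, so a config change applies to all series immediately.
func (h *head) append(s *memSeries, t int64, v float64) {
	appendToSeries(s, t, v, h.chunkRange, h.oooCapMax)
}

func appendToSeries(s *memSeries, t int64, v float64, chunkRange, oooCapMax int64) {
	fmt.Println("append with", chunkRange, oooCapMax)
}

func main() {
	h := &head{chunkRange: 2 * 60 * 60 * 1000, oooCapMax: 32}
	h.append(&memSeries{}, 0, 1.0)
}
```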

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-27 13:52:22 +05:30
Jesus Vazquez c1b669bf9b
Add out-of-order sample support to the TSDB (#11075)
* Introduce out-of-order TSDB support

This implementation is based on this design doc:
https://docs.google.com/document/d/1Kppm7qL9C-BJB1j6yb6-9ObG3AbdZnFUBYPNNWwDBYM/edit?usp=sharing

This commit adds support for accepting out-of-order ("OOO") samples into the TSDB
up to a configurable time allowance. If OOO is enabled, overlapping querying
is automatically enabled.
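
A minimal sketch of that acceptance rule (the names and the constant are illustrative, not the actual tsdb API): a sample older than the newest accepted timestamp is still ingested as long as it falls within the configured out-of-order window.

```go
package main

import "fmt"

// outOfOrderTimeWindow is the configured allowance in milliseconds (illustrative).
const outOfOrderTimeWindow = 30 * 60 * 1000

// acceptable reports whether a sample at timestamp t can be ingested,
// given the newest timestamp maxT already accepted for the series.
func acceptable(t, maxT int64) bool {
	if t >= maxT {
		return true // in-order sample
	}
	return maxT-t <= outOfOrderTimeWindow // out of order, but within the allowance
}

func main() {
	fmt.Println(acceptable(1_000_000, 1_500_000)) // true: 500s old, within 30m
	fmt.Println(acceptable(0, 10_000_000))        // false: too far in the past
}
```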

Most of the additions have been borrowed from
https://github.com/grafana/mimir-prometheus/
Here is the list of the original commits cherry-picked
from mimir-prometheus into this branch:
- 4b2198d7ec
- 2836e5513f
- 00b379c3a5
- ff0dc75758
- a632c73352
- c6f3d4ab33
- 5e8406a1d4
- abde1e0ba1
- e70e769889
- df59320886

Co-authored-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Dieter Plaetinck <dieter@grafana.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* gofumpt files

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Add license header to missing files

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix OOO tests due to existing chunk disk mapper implementation

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix truncate int overflow

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Add Sync method to the WAL and update tests

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* remove useless sync

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Update minOOOTime after truncating Head

* Update minOOOTime after truncating Head

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix lint

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Add a unit test

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Load OutOfOrderTimeWindow only once per appender

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix OOO Head LabelValues and PostingsForMatchers

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix replay of OOO mmap chunks

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Remove unnecessary err check

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Prevent panic with ApplyConfig

Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Run OOO compaction after restart if there is OOO data from WBL

Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Apply Bartek's suggestions

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Refactor OOO compaction

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Address comments and TODOs

- Added a comment explaining why we need the allow overlapping
  compaction toggle
- Clarified TSDBConfig OutOfOrderTimeWindow doc
- Added an owner to all the TODOs in the code

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Run go format

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix remaining review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Change wbl reference when truncating ooo in TestHeadMinOOOTimeUpdate

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix TestWBLAndMmapReplay test failure on windows

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Address most of the feedback

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Refactor the block meta for out of order

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix windows error

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2022-09-20 22:35:50 +05:30
Bryan Boreham af6167df58
WAL loading: don't send empty buffers over chan (#11319)
If some shards did not get any samples mapped, the buffer will be empty,
so sending it over the chan to `processWALSamples()` is a waste of time.

This is especially likely now that we are checking `minValidTime` before
sending.
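
A hedged sketch of the shortcut (the channel and buffer names are illustrative):

```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// flushShards sends each shard's buffer to its worker, skipping shards that
// got no samples so workers aren't woken up for nothing.
func flushShards(buffers [][]sample, shards []chan []sample) {
	for i, buf := range buffers {
		if len(buf) == 0 {
			continue // nothing mapped to this shard; don't send an empty slice
		}
		shards[i] <- buf
	}
}

func main() {
	ch := make(chan []sample, 1)
	flushShards([][]sample{{}, {{t: 1, v: 2}}}, []chan []sample{ch, ch})
	fmt.Println(len(<-ch)) // 1
}
```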

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-20 19:43:30 +05:30
Bryan Boreham d2701be53a
tsdb: remove chunk pool from memSeries (#11280)
The chunk pool belongs to the head not to the series. Pass it down where
required, and remove the copy of the pointer that `memSeries` was
holding.

`safeChunk` also needs to hold it, because in scenarios where it is used
we don't have a reference to the head. However, it was already holding
`chunkDiskMapper` for the same reason, so this is no big change.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-15 13:22:09 +05:30
Bryan Boreham e49d596fb1
WAL loading: check sample time is valid earlier (#11307)
There's no point splitting a sample onto the right shard and checking
if the series needs to be re-mapped, if we're only going to discard it
once it arrives at `ProcessWALSamples()`. Simply discard it earlier.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-15 12:36:57 +05:30
Ganesh Vernekar 83e11014dd
Remove unnecessary tsdb/tsdbutil/buffer.go (#11302)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-13 19:36:32 +05:30
Bryan Boreham 136f8b0ebb
tsdb: comment reason for isolation tracking reads (#11301)
I find it useful to know why a restriction exists, to check whether that
reason still applies, or in which other places it might apply.

This is based on the note here:
https://github.com/prometheus/prometheus/pull/9270#pullrequestreview-743820956
on the PR where the original comment was added.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-13 11:11:03 +02:00
Bryan Boreham 176fa38e76 tsdb: in tests use labels.FromStrings
Replacing code which assumes the internal structure of `Labels`.
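
For illustration, the helper builds a correctly sorted Labels value from name/value pairs, so tests no longer need to know how Labels is laid out internally:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	// labels.FromStrings takes name/value pairs and returns a sorted Labels,
	// instead of the test constructing the underlying slice of Label structs.
	lset := labels.FromStrings("job", "node", "instance", "localhost:9100")
	fmt.Println(lset) // {instance="localhost:9100", job="node"}
}
```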

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-09 13:34:49 +02:00
Bryan Boreham 0437dd7cee tsdb/wal: in tests use labels.FromStrings
Replacing code which assumes the internal structure of `Labels`.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-09 13:34:49 +02:00
Julien Pivotto ec6c1f17d1
Update dependencies (#11287)
Update dependencies following the CI changes and the move to Go 1.19.

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2022-09-09 13:28:55 +02:00
Julien Pivotto 96d5a32659
Update go to 1.19, set min version to 1.18 (#11279)
* Update go to 1.19, set min version to 1.18

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>

* Update golangci-lint

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2022-09-07 11:30:48 +02:00
Abirdcfly 314aa45c2c
chore: remove duplicate word in comments (#11225)
Signed-off-by: Abirdcfly <fp544037857@gmail.com>

Signed-off-by: Abirdcfly <fp544037857@gmail.com>
2022-08-27 22:21:41 +02:00
Xiaochao Dong 09187fb0cc
Replay WAL concurrently without blocking (#10973)
* Replay WAL concurrently without blocking

Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>

* Resolve review comments

Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>

Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
2022-08-17 19:23:57 +05:30
Łukasz Mierzwa 3196c98bc2
Reduce memSeries memory usage by decoupling metadata (#11152)
Metadata was added recently but doesn't seem to be used much, at least as far as I could identify.
Yet it's part of the memSeries struct, so even when empty it takes 48 bytes,
which is a lot given that without it memSeries requires 224 bytes.
This change turns it into a pointer on the struct, which gets set only when metadata is actually set for a given series.
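
A hedged sketch of the change in shape (field names and the setter are illustrative): the inline struct becomes a pointer that is only allocated when metadata is actually set.

```go
package main

import "fmt"

type metadata struct {
	typ  string
	unit string
	help string
}

// Before: the metadata struct was part of memSeries by value, costing its
// full size in every series even when unused.
// After: a nil pointer costs one word until metadata is actually set.
type memSeries struct {
	meta *metadata
}

func (s *memSeries) setMetadata(typ, unit, help string) {
	if s.meta == nil {
		s.meta = &metadata{}
	}
	s.meta.typ, s.meta.unit, s.meta.help = typ, unit, help
}

func main() {
	s := &memSeries{}
	fmt.Println(s.meta == nil) // true: no allocation for series without metadata
	s.setMetadata("counter", "", "Total requests.")
	fmt.Println(s.meta.typ) // counter
}
```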

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-08-17 15:32:28 +05:30
Xiaochao Dong 1078081aec
Fix race condition when updating lastSeriesID during loading chunk snapshot (#11099)
Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
2022-08-04 13:39:14 +05:30
Bryan Boreham 00ec720c29
tsdb: extract functions to encode and decode labels (#11045)
* tsdb/record: Extract functions to encode and decode labels

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* tsdb: make use of Encode/Decode Labels

Simplify the code by re-using routines from tsdb/record.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-07-26 20:12:00 +05:30
Paschalis Tsilias a0f7c31c26
Fix type byte of WAL metadata records in docs (#11035)
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
2022-07-19 18:22:02 +05:30
Paschalis Tsilias d1122e0743
Introduce TSDB changes for appending metadata to the WAL (#10972)
* Append metadata to the WAL

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Remove extra whitespace; Reword some docstrings and comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use RLock() for hasNewMetadata check

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use single byte for metric type in RefMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Update proposed WAL format for single-byte type metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Implement MetadataAppender interface for the Agent

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Address first round of review comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Amend description of metadata in wal.md

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Correct key used to retrieve metadata from cache

When we're setting metadata entries in the scrapeCache, we're using the
p.Help(), p.Unit(), p.Type() helpers, which retrieve the series name and
use it as the cache key. When checking for cache entries though, we used
p.Series() as the key, which included the metric name _with_ its labels.
That meant that we were never actually hitting the cache. We're fixing
this by using the __name__ internal label to correctly get the
cache entries after they've been set by setHelp(), setType() or
setUnit().

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Put feature behind a feature flag

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix AppendMetadata docstring

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reorder WAL format document

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Change error message of AppendMetadata; Fix access of s.meta in AppendMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reuse temporary buffer in Metadata encoder

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Only keep latest metadata for each refID during checkpointing

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix test that's referencing decoding metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Avoid creating metadata block if no new metadata are present

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add tests for corrupt metadata block and relevant record type

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix CR comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Extract logic about changing metadata in an anonymous function

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Implement new proposed WAL format and amend relevant tests

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use 'const' for metadata field names

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Apply metadata to head memSeries in Commit, not in AppendMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add docstring and rename extracted helper in scrape.go

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add tests for tsdb-related cases

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linter issues vol1

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linter issues vol2

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix Windows test by closing WAL reader files

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use switch instead of two if statements in metadata decoding

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix review comments around TestMetadata* tests

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add code for replaying WAL; test correctness of in-memory data after a replay

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Remove scrape-loop related code from PR

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Address first round of comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Simplify tests by sorting slices before comparison

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix test to use separate transactions

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Empty out buffer and record slices after encoding latest metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linting issue

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Update calculation for DroppedMetadata metric

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Rename MetadataAppender interface and AppendMetadata method to MetadataUpdater/UpdateMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reuse buffer when encoding latest metadata for each series

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix review comments; Check all returned error values using two helpers

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Simplify use of helpers

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Satisfy linter

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
2022-07-19 10:58:52 +02:00
ZhangJian He 95b7d058ac
Fix markdown syntax in tsdb index.md (#11032)
* Fix markdown syntax in tsdb index.md

Signed-off-by: ZhangJian He <shoothzj@gmail.com>

* Update tsdb/docs/format/index.md

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

Signed-off-by: ZhangJian He <shoothzj@gmail.com>
2022-07-18 04:28:52 -07:00
Levi Harrison 3d538351f6
Recognize exemplar record type in WAL watcher metrics (#11008)
* Add exemplar record case

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* recordType() -> Type.String()

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-18 15:54:11 +05:30
Jesus Vazquez 6cfe44d7fd
WaitUntilIdle optimize idling time (#10878)
Relates to @bboreham's optimization in https://github.com/prometheus/prometheus/pull/10859

Bryan reduced the sleep time, improving the benchmark deltas by
quite a lot. However, I've been working on a similar implementation for
out-of-order support and I noticed that we actually get into this method
thousands of times.

@ywwg had the brilliant idea of not always sleeping before the select,
but instead making the sleep a case in the select, so we only sleep if we
need to.
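
A hedged sketch of that pattern (the channel and helper names are illustrative): the unconditional sleep becomes a timeout case, so the loop only waits when nothing else is ready.

```go
package main

import (
	"fmt"
	"time"
)

func waitUntilIdle(done <-chan struct{}, pending func() int) {
	for {
		select {
		case <-done:
			return
		case <-time.After(10 * time.Microsecond):
			// Only reached if nothing else fired first; previously the
			// code slept unconditionally before checking the channels.
			if pending() == 0 {
				return
			}
		}
	}
}

func main() {
	done := make(chan struct{})
	close(done)
	waitUntilIdle(done, func() int { return 0 })
	fmt.Println("idle")
}
```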

The benchmark deltas are amazing

```
❯ benchstat old_implementation.txt new_implementation_using_time_after.txt
name                                                                                                     old time/op  new time/op  delta
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=0,mmappedChunkT=0-8        521ms ±25%   253ms ± 6%  -51.47%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=36,mmappedChunkT=0-8       773ms ± 3%   369ms ±31%  -52.23%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=72,mmappedChunkT=0-8       592ms ±28%   297ms ±28%  -49.80%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=360,mmappedChunkT=0-8      547ms ± 2%  999ms ±187%     ~     (p=0.690 n=5+5)
LoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50,exemplarsPerSeries=0,mmappedChunkT=0-8        11.3s ± 4%    1.3s ±44%  -88.48%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50,exemplarsPerSeries=2,mmappedChunkT=0-8        11.1s ± 1%    1.2s ±20%  -89.08%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=0,mmappedChunkT=0-8        1.24s ± 3%   0.18s ± 7%  -85.76%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=2,mmappedChunkT=0-8        1.24s ± 2%   0.18s ± 5%  -85.24%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=5,mmappedChunkT=0-8        1.23s ± 5%   0.27s ±33%  -77.73%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=24,mmappedChunkT=0-8       1.28s ± 1%   0.36s ± 7%  -71.51%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=0,mmappedChunkT=3800-8    12.1s ± 1%    3.1s ± 6%  -74.33%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=2,mmappedChunkT=3800-8    12.1s ± 1%    3.4s ± 4%  -71.94%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=5,mmappedChunkT=3800-8    12.1s ± 1%    3.8s ±17%  -68.35%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=24,mmappedChunkT=3800-8   12.4s ± 1%    4.0s ±18%  -67.71%  (p=0.008 n=5+5)
```

Benchmarked on Linux
```
goos: linux
goarch: amd64
pkg: github.com/prometheus/prometheus/tsdb
cpu: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
```

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
2022-06-30 15:00:04 +02:00
Julien Pivotto bacd776356
Merge pull request #10907 from damnever/fix/panic
Fix panic if series is not found when deleting series
2022-06-30 11:23:08 +02:00
Peter Štibraný ffc60d8397
Reduce chunk write queue memory usage 2 (#10874)
* Job queue

This PR reimplements the chan chunkWriteJob with a custom buffered queue that should use less memory, because it doesn't preallocate the entire buffer for the maximum queue size at once. Instead it allocates individual "segments" of a smaller size.

As elements are added to the queue, they fill individual segments. When elements are removed from the queue (and segments), empty segments can be thrown away. This doesn't change the memory usage of the queue when it's full, but it should decrease its memory footprint when it's empty (the queue keeps at most one segment in that case).
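
A minimal sketch of the segmented-queue idea (not the actual writeJobQueue implementation): capacity is built from small fixed-size segments that are allocated on demand and dropped once drained.

```go
package main

import "fmt"

const segmentSize = 4 // the real queue uses a much larger segment size

type segment struct {
	elems []int
	next  *segment
}

type queue struct {
	head, tail *segment
}

// push allocates a new segment only when the tail one is full,
// so an empty queue holds at most one small segment.
func (q *queue) push(v int) {
	if q.tail == nil || len(q.tail.elems) == segmentSize {
		s := &segment{elems: make([]int, 0, segmentSize)}
		if q.tail == nil {
			q.head = s
		} else {
			q.tail.next = s
		}
		q.tail = s
	}
	q.tail.elems = append(q.tail.elems, v)
}

// pop drops a segment entirely once it has been drained, returning its memory.
func (q *queue) pop() (int, bool) {
	if q.head == nil || len(q.head.elems) == 0 {
		return 0, false
	}
	v := q.head.elems[0]
	q.head.elems = q.head.elems[1:]
	if len(q.head.elems) == 0 {
		q.head = q.head.next
		if q.head == nil {
			q.tail = nil
		}
	}
	return v, true
}

func main() {
	q := &queue{}
	for i := 0; i < 10; i++ {
		q.push(i)
	}
	v, _ := q.pop()
	fmt.Println(v) // 0
}
```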

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>

* Modify test to work with low resolution timer.

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>

* Improve comments.

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>
2022-06-29 17:51:27 +05:30
Xiaochao Dong (@damnever) 6b042da2d8 Fix panic if series is not found when deleting series
Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
2022-06-24 15:55:32 +08:00
Steve Azzopardi 04fe2c9522
fix(tsdb): inc mmap corruption counter on mmap out of sequence error (#10406)
What
---
When we see out-of-sequence chunks, increase the chunk corruption counter
to indicate that one of the chunks was corrupted.

Reference: https://github.com/prometheus/prometheus/pull/10406#issuecomment-1142595527
Signed-off-by: Steve Azzopardi <steveazz@outlook.com>
2022-06-22 14:03:12 +05:30
Peter Štibraný 03a2313f7a
Reduce chunk write queue memory usage (#10873)
* dont waste space on the chunkRefMap
* add time factor
* add comments
* better readability
* add instrumentation and more comments
* formatting
* uppercase comments
* Address review feedback. Renamed "free" to "shrink" everywhere, updated comments and threshold to 1000.
* double space

Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
Co-authored-by: Peter Štibraný <pstibrany@gmail.com>

Co-authored-by: Mauro Stettler <mauro.stettler@gmail.com>
2022-06-17 13:11:39 +05:30
Bryan Boreham 9f77d23889
tsdb: commit data periodically in CreateBlock (#10788)
To avoid building up data in memory, commit and make a new appender
periodically.

The number `commitAfter = 10000` was chosen arbitrarily; testing with
10x more or less gives slightly worse results.
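
A hedged sketch of the pattern (the constant mirrors the description above; the appender interface and function names are illustrative):

```go
package main

import "fmt"

const commitAfter = 10_000 // chosen arbitrarily, per the commit message

type appender interface {
	Append(t int64, v float64) error
	Commit() error
}

// writeSamples commits and starts a fresh appender every commitAfter samples,
// so the data doesn't all accumulate in memory before one giant commit.
func writeSamples(newAppender func() appender, samples [][2]float64) error {
	app := newAppender()
	count := 0
	for _, s := range samples {
		if err := app.Append(int64(s[0]), s[1]); err != nil {
			return err
		}
		count++
		if count%commitAfter == 0 {
			if err := app.Commit(); err != nil {
				return err
			}
			app = newAppender()
		}
	}
	return app.Commit()
}

type noopAppender struct{}

func (noopAppender) Append(int64, float64) error { return nil }
func (noopAppender) Commit() error               { return nil }

func main() {
	err := writeSamples(func() appender { return noopAppender{} }, [][2]float64{{1, 1}, {2, 2}})
	fmt.Println(err) // <nil>
}
```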

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-17 11:26:19 +05:30
Łukasz Mierzwa d65f037def
Don't increment prometheus_tsdb_compactions_failed_total when context is canceled (#10772)
When restarting Prometheus I sometimes see:

caller=db.go:832 level=error component=tsdb msg="compaction failed" err="compact head: persist head block: 2 errors: populate block: context canceled; context canceled"

And prometheus_tsdb_compactions_failed_total metric gets incremented.
This makes it more difficult to write alerts based on
prometheus_tsdb_compactions_failed_total metric since any restart can
trigger it.
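
A hedged sketch of the check (the counter and function names are illustrative):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

var compactionsFailed int // stands in for prometheus_tsdb_compactions_failed_total

// recordCompactionResult only counts genuine failures; a compaction aborted
// by shutdown (context canceled) should not trip failure alerts.
func recordCompactionResult(err error) {
	if err != nil && !errors.Is(err, context.Canceled) {
		compactionsFailed++
	}
}

func main() {
	recordCompactionResult(context.Canceled)
	recordCompactionResult(errors.New("populate block: disk full"))
	fmt.Println(compactionsFailed) // 1
}
```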

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-06-17 11:21:43 +05:30
Bryan Boreham 542b9ecdbd
tsdb: reduce sleep time when reading WAL (#10859)
The code sleeps for a short time to allow goroutines to finish; however,
it seems the duration can be reduced a lot, speeding up the reading
process.

I checked using some WAL data from production, and the queue is almost
always empty at the time we enter `waitForIdle()` so there is no danger
of spinning in the tight loop.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-12 11:54:11 +05:30
Bryan Boreham 9f79a6f4b5
tsdb: faster CRC check by avoiding allocations (#10789)
Instead of creating a new hashing object every time, call `crc32.Checksum`,
which computes the answer without allocations.
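
For illustration, the allocation-free form versus the hashing-object form, both from the standard-library `hash/crc32` package:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

var castagnoliTable = crc32.MakeTable(crc32.Castagnoli)

func main() {
	data := []byte("some record payload")

	// Allocating form: a new hash.Hash32 per call.
	h := crc32.New(castagnoliTable)
	h.Write(data)
	a := h.Sum32()

	// Allocation-free form: compute the checksum directly.
	b := crc32.Checksum(data, castagnoliTable)

	fmt.Println(a == b) // true
}
```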

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-08 08:00:59 +05:30
Matej Gera 1dd247f68b
Remote Write: Rename confusing walDir parameter to dir (#10464)
* Rename walDir parameter to dir

Signed-off-by: Matej Gera <matejgera@gmail.com>

* Improve NewQueueManager comment

Signed-off-by: Matej Gera <matejgera@gmail.com>
2022-05-30 21:45:30 -07:00
David Leadbeater 57f4aab27d
Update godoc links and remove note about TSDB versioning (#10754)
Signed-off-by: David Leadbeater <dgl@dgl.cx>
2022-05-26 18:34:43 +10:00
maizige 10b677b826
fix typo (#10696)
Update doc comment

Signed-off-by: gemaizi <864321211@qq.com>
2022-05-25 18:01:45 +02:00
Filip Petkovski d3cb39044e
Fix typo in symbol table size exceeded error message (#10746)
This commit fixes a typo when reporting an error that the symbols
table size has been exceeded.

Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
2022-05-25 10:40:36 +02:00
Julien Pivotto 6e3a0efe40
Make necessary change to compile promql parser to wasm (#10683)
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2022-05-12 09:12:05 +02:00
Matthias Rampke 78f2645787
test(tsdb): break up repeated test to avoid timeout (#10671)
On macOS, the TestTombstoneCleanRetentionLimitsRace performs very
poorly. It takes more than a second to write out one block, and as it
writes 400 of them, we run into the 10-minute test timeout frequently.

While this doesn't fix the actual performance issue, breaking each
iteration into a subtest makes the test pass reliably (because each
iteration comfortably finishes in under a minute).

Related report: https://groups.google.com/g/prometheus-developers/c/jxQ6Ayg6VJ4/m/03H_DS9PDAAJ

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-09 00:39:26 +02:00
Łukasz Mierzwa 88f9b248b4
Correctly format error message (#10669)
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-05-06 00:42:31 +02:00
Bryan Boreham 4b9f248e85
unit tests: make all Labels sorted alphabetically (#10532)
"Labels is a sorted set of labels. Order has to be guaranteed upon
instantiation." says the comment, so fix all the tests that break this
rule.

For `BenchmarkLabelValuesWithMatchers()` and
`BenchmarkHeadLabelValuesWithMatchers()` the amount of work done changes
significantly if you put the labels in order, because all series refs
get neatly partitioned by the `tens` label, so I renamed the labels
to maintain the previous behaviour.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-05-04 23:41:36 +02:00
Matthieu MOREL e2ede285a2
refactor: move from io/ioutil to io and os packages (#10528)
* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir
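
A couple of representative substitutions from that migration (standard-library calls, shown for illustration):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// io/ioutil.ReadFile -> os.ReadFile
	data, err := os.ReadFile("go.mod")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(len(data))

	// io/ioutil.ReadDir -> os.ReadDir, which returns []fs.DirEntry
	// instead of []os.FileInfo.
	entries, err := os.ReadDir(".")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name(), e.IsDir())
	}
}
```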

Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
2022-04-27 11:24:36 +02:00
Oleg Zaytsev af0f6da5cb
Fix chunk overflow appending samples at a variable rate (#10607)
* Add a test with variable samples rate append

This test overflows the chunk created in memSeries, and the total number
of samples in the (only) mmapped chunk is 29, instead of the 65565
appended ones.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Cut new chunk when rate prediction was wrong

When appending samples at a slow rate, and then appending at a higher
rate, the prediction we made to cut a new chunk is no longer valid.
Sometimes this can even cause an overflow in the chunk, if more samples
than uint16 can hold are appended.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Improve comment on 2*samplesPerChunk

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Assert that all chunks have less than 240 samples

Also, trigger new chunk at 240, not at more than 240

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2022-04-20 14:54:20 +02:00
Paschalis Tsilias 40c1efe8bc
tsdb/agent: Ignore duplicate exemplars (#10595)
* tsdb/agent: Ignore duplicate exemplars

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Make each exemplar unique in TestCommit

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Re-Trigger CI for Windows and UI-related steps

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Change test comment to properly re-trigger pipeline

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Defer Close() calls for test agent and segment reader

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
2022-04-18 11:41:04 -04:00
Julien Pivotto 685ce9964d
Merge pull request #10599 from prometheus/release-2.35
Merge back release 2.35
2022-04-15 00:10:06 +02:00