The purpose of GetRef() is to allow Append() to be called without
the caller needing to copy the labels. To avoid a race where a series
is removed from TSDB between the calls to GetRef() and Append(), we
return TSDB's copy of the labels.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add method to get reference number for TSDB Appender
In situations where we need to copy labels before calling Add(),
GetRef() allows us to check first, then call AddFast() in the case that the series
is already known.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add explicit interface for GetRef() method
Suggested in code review by @bwplotka
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Rename OptionalGetRef to GetRef
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Simplify return value of GetRef()
0 can be relied on to mean 'no reference'
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
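A minimal sketch of the calling pattern these commits describe (the types and signatures below are simplified assumptions, not the exact Prometheus API):

```go
package sketch

// Labels is a stand-in for the real labels type.
type Labels []struct{ Name, Value string }

// appender captures just the part of the Appender surface discussed above.
type appender interface {
	// GetRef returns 0 when the series is unknown; otherwise it returns the
	// reference plus TSDB's own copy of the labels, which sidesteps the race
	// with series removal between GetRef() and Append().
	GetRef(lset Labels) (uint64, Labels)
	Append(ref uint64, lset Labels, t int64, v float64) (uint64, error)
}

// appendSample appends without the caller having to copy its label buffer
// when the series is already known to TSDB.
func appendSample(app appender, scraped Labels, t int64, v float64) (uint64, error) {
	if ref, lset := app.GetRef(scraped); ref != 0 {
		// Known series: append by reference, using TSDB's copy of the labels.
		return app.Append(ref, lset, t, v)
	}
	// Unknown series: Append copies the labels internally and returns a
	// reference that can be cached for subsequent appends.
	return app.Append(0, scraped, t, v)
}
```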
The main branch tests are not passing because #8489 was not
rebased on top of #8007.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it as it needs access to label values.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
Right now a new segment might be created unnecessarily if the
uncompressed record would not fit, but after compression (typically
reducing record size in half) it would.
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
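A rough sketch of the fixed ordering, assuming snappy compression as used for WAL records (function and parameter names are made up for illustration):

```go
package sketch

import "github.com/golang/snappy"

// needNewSegment decides whether the record forces a new WAL segment.
// Compressing first means a record that only fits once compressed no longer
// triggers an unnecessary segment rollover.
func needNewSegment(rec []byte, freeBytes int, compress bool) bool {
	if compress {
		rec = snappy.Encode(nil, rec) // typically shrinks the record considerably
	}
	return len(rec) > freeBytes
}
```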
* CleanupTombstones refactored, now reloading blocks after every compaction.
The goal is to remove deletable blocks after every compaction and, thus, decrease disk space used when cleaning tombstones.
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Protect DB against parallel reloads
Signed-off-by: ArthurSens <arthursens2005@gmail.com>
* Fix typos
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
Co-authored-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
In the previous version, 1.18.0, the "megacheck" linter paid attention
to the '//lint:ignore' comment, but that is no longer there.
Newer versions pay attention to '//nolint:<linter>,<linter>,...'
comments, optionally followed by a "second" comment introduced by '//'.
Update the directives to use this style.
This is related to prometheus/blackbox_exporter#738 and
prometheus/blackbox_exporter#745.
Signed-off-by: Marcelo E. Magallon <marcelo.magallon@grafana.com>
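For illustration, the directive change looks roughly like this (the linter name and reason are placeholders):

```go
// Old style, honoured by the megacheck linter in golangci-lint 1.18.0:
//lint:ignore SA1019 we intentionally keep using the deprecated call

// New style, honoured by newer golangci-lint versions; an optional
// explanation can follow after a second "//":
//nolint:staticcheck // we intentionally keep using the deprecated call
```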
We're seeing compactions in Cortex that take hours, which the current
histogram buckets miss. While it is not common in Prometheus, I am pretty sure
there are setups where compaction takes longer than 512s. On our own
Prometheus the average compaction duration is 566s.
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
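A hedged sketch of what wider buckets could look like with the Prometheus client library (the metric name and exact bucket layout here are assumptions, not the actual change):

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// Exponential buckets from 1s doubling up to 8192s, so multi-hour compactions
// still land in a real bucket instead of only the +Inf catch-all.
var compactionDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "tsdb_compaction_duration_seconds", // placeholder name
	Help:    "Duration of compaction runs.",
	Buckets: prometheus.ExponentialBuckets(1, 2, 14), // 1s, 2s, ..., 8192s
})
```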
* Fix TSDB head struct dump on querier error
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added mint/maxt to RangeHead.String()
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* test: cleanup tempdir for TestBlockWriter
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
* test: cleanup tempdir for TestLogPartialWrite
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
* fix: remove pre-2.21 tmp blocks on start
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: commenting
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* tsdb: Expose total number of label pairs in head
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: add comment for NumLabelPairs
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: remove comment
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* Logging added for when compaction takes more than the block time range
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Log only if no errors were already logged
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Log duration as human readable string
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Move logging from compactHead() to Compact()
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Compute duration of all head compactions plus wal truncation
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Remove named return added in first commits
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Address nits
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Change milliseconds to seconds to make fuzzit tests happy
Signed-off-by: ArthurSens <arthursens2005@gmail.com>
* Set the min time of Head properly after truncation
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Fix lint
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Enhance compaction plan logic for completely deleted small block
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Fix review comments
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Testify: move to require
Moving testify to require to fail tests early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* MultiError: Refactored MultiError for more concise and safe usage.
* Less lines
* GoLand IDE was marking every usage of the old MultiError as a "potential nil" error
* It was easy to forget to use Err() when an error was returned; now it's safely assured at compile time.
NOTE: Potentially I would rename package to merrors. (: In different PR.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed review comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix after rebase.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Don't use returned DB to close resources on TSDB startup error
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Add unit test and fix another panic
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Fix review comment
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Refactor test assertions
This pull request gets rid of assert.True where possible to use
fine-grained assertions.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Close resources after failing to startup TSDB
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Return close error instead of logging
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Change named return's name
Signed-off-by: arthursens <arthursens2005@gmail.com>
This is how much memory we use to load in the on-disk
symbol tables, not the size of the tables themselves.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
As we're looking to expand what's in the WAL,
having old Prometheus servers ignore the new record types
rather than treating them as corruption allows for better
upgrade/downgrade paths.
Adjust some tests accordingly, so they're still testing what they're
meant to test.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Direct syscalls using syscall.Syscall(SYS_*, ...) should no longer be
used on darwin, see [1]. Instead, use the FcntlFstore libSystem wrapper
provided by the golang.org/x/sys/unix package to implement
preallocFixed.
[1] https://golang.org/doc/go1.12#darwin
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
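A simplified sketch of the wrapper-based approach (error handling trimmed; based on my reading of the x/sys/unix darwin API, so treat the details as assumptions):

```go
//go:build darwin

package fileutil

import (
	"os"

	"golang.org/x/sys/unix"
)

// preallocFixed reserves space through the libSystem-backed FcntlFstore
// wrapper instead of a raw syscall.Syscall(SYS_FCNTL, ...).
func preallocFixed(f *os.File, sizeInBytes int64) error {
	fstore := &unix.Fstore_t{
		Flags:   unix.F_ALLOCATEALL, // all-or-nothing allocation
		Posmode: unix.F_PEOFPOSMODE, // allocate from the physical end of file
		Length:  sizeInBytes,
	}
	return unix.FcntlFstore(f.Fd(), unix.F_PREALLOCATE, fstore)
}
```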
* Chore: Log segment number when segment read failed
To manually fix the WAL files, it is good to know where the corruption
happened, so we should log the segment number when the read failed.
Related Issue #7506
Signed-off-by: gaston.qiu <gaston.qiu@umbocv.com>
* tsdb: Bug fix for deletions continued after a crash; added more tests.
Additionally: Added log line for block removal.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comment.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
## Changes:
* Rename dir when deleting
* Ignoring blocks with broken meta.json on start (we do that on reload)
* Compactor writes <ulid>.tmp-for-creation blocks instead of just .tmp
* Delete tmp-for-creation and tmp-for-deletion blocks during DB open.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
I did not want to move those in the previous PR to make it easier to review. Now it's time for a small cleanup for readability. (:
## Changes
* Merge series goes to `storage/merge.go` leaving `fanout.go` for just fanout code.
* Moved `fanout test` code from weird separate package to storage.
* Unskipped one test: TestFanout_SelectSorted/chunk_querier
* Moved block series set codes responsible for querying blocks to `querier.go` from `compact.go`
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* tsdb: Added ChunkQueryable implementations to db; unified compactor, querier and fanout block iterating.
Chained to https://github.com/prometheus/prometheus/pull/7059
* NewMerge(Chunk)Querier now takes multiple primaries, allowing tsdb DB code to use it.
* Added single SeriesEntry / ChunkEntry for all series implementations.
* Unified all vertical and non-vertical merging for compaction and querying into single
merge series / chunk sets by reusing VerticalSeriesMergeFunc for the overlapping algorithm (same logic as before)
* Added block (Base/Chunk/)Querier for block querying. We then use populateAndTomb(Base/Chunk/) to iterate over chunks or samples.
* Refactored endpoint tests and querier tests to include subtests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comments from Brian and Beorn.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed snapshot test and added chunk iterator support for DBReadOnly.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed race when iterating over Ats first.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed populate block tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed endpoints test.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed test.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added test & fixed case of head open chunk.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed DBReadOnly tests and bug producing 1 sample chunks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added cases for partial block overlap for multiple full chunks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added extra tests for chunk meta after compaction.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed small vertical merge bug and added more tests for that.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* storage: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* tsdb: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* cmd: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* scripts: Verify that we are not using restricted packages
It checks that we are not directly importing 'sync/atomic'.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Reorganise imports in blocks
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier/test: Apply PR suggestions
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* storage/remote: avoid storing references on newEntry
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Revert "scripts: Verify that we are not using restricted packages"
This reverts commit 278d32748e.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Group imports accordingly
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* tsdb/chunks: Replace sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* tsdb/head: Replace sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* vendor: Make go.uber.org/atomic a direct dependency
There are no modifications to go.sum and vendor/ because
it was already vendored.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* tsdb: Remove comments referring to the sync/atomic alignment bug
Related: https://golang.org/pkg/sync/atomic/#pkg-note-BUG
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Don't panic when the head memSeries has no chunks in it
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
* fix a panic when querying after a wal corruption.
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
* review nits
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
* Add test for reading the data after a wal corruption.
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
Update tsdb/db_test.go
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Update tsdb/db_test.go
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
* spellings
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Fix race during head compaction
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Comment out the test
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Skip test instead of commenting it out
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Optimized label regex matcher with literal prefix and/or suffix
Signed-off-by: Marco Pracucci <marco@pracucci.com>
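A minimal sketch of the idea (extraction of the literal prefix/suffix from the parsed pattern is omitted; names are illustrative):

```go
package sketch

import (
	"regexp"
	"strings"
)

// fastRegexMatcher runs cheap literal prefix/suffix checks before falling back
// to the full regular expression, which filters out most non-matching values.
type fastRegexMatcher struct {
	re             *regexp.Regexp
	prefix, suffix string // literal prefix/suffix of the pattern, if any
}

func (m *fastRegexMatcher) MatchString(s string) bool {
	if m.prefix != "" && !strings.HasPrefix(s, m.prefix) {
		return false
	}
	if m.suffix != "" && !strings.HasSuffix(s, m.suffix) {
		return false
	}
	return m.re.MatchString(s)
}
```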
* Added license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added more test cases with newlines
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Restored deleted test
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger, removed the tiny interface by reusing VerticalSeriesMergeFunc for the overlapping algorithm for
both chunks and series, for both querying and compacting (!), and made sure duplicates are merged.
* Added ErrChunkSeriesSet
* Added Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecating the non-chunk SeriesSet-based StreamChunkedReadResponses; added a chunk-based one.
* Improved tests.
* Split remote client into Write (old storage) and read.
* Queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config-based options to sampleAndChunkQueryableClient to avoid boilerplate.
In next commit: Changes for TSDB.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed bstream test license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Simplified bstreamReader.loadNextBuffer()
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Fixed date in license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Add errors and Warnings to SeriesSet
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Change Querier interface and refactor accordingly
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor promql/engine to propagate warnings at eval stage
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Make sure all the series from all Selects are pre-advanced
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Separate merge series sets
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Clean
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor merge querier failure handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.
* Secondary logic is hidden, instead of the weird failed-series-set logic we had.
* Fanout is well commented
* Fanout closing records all errors
* MergeQuerier improved API (clearer)
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (next = false).
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix formatting
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix CI issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Added final tests for error handling.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Select after first Next is done will panic.
NOTE: in lazySeriesSet, in theory we could just panic; however, I think we can
just return an error, as it will panic in expand anyway.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Utilize errWithWarnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix recently introduced expansion issue
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add tests for secondary querier error handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Implement lazy merge
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add name to test cases
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Reorganize
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Remove redundant warnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix rebase mistake
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Moves the 64bit atomically accessed field to the top of the struct.
Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
* Moves the 64bit atomically accessed field to the top of the struct.
Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
* Fixing up go fmt formatting issues.
Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
Co-authored-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
* Track open appenders in doubly-linked list to make lowWatermark O(1).
* Use RW locks.
* Added BenchmarkIsolationWithState.
Signed-off-by: Peter Štibraný <peter.stibrany@grafana.com>
* add time range params to labelNames api
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* evaluate min/max time range when reading labels from the head
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* add time range params to labelValues api
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test, add docs
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* add a test for head min max range
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test to match comment
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* address CR comments
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* combine vars only used once
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* restart ci
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* use range expectedLabelNames instead of range actualLabelNames in test
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* Callbacks for lifecycle of series in TSDB
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Add more comments
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.
Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map of series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look for its m-mapped chunks in the map created before and add them to that series.
If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that the chunk written for m-mapping now also includes the series reference, because there is no index for mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed from the index, which includes the offsets for the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching on the series id present in the chunk header and the offset of that chunk in that file.
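A sketch of stage one of the startup sequence described above (the types and the iteration callback are stand-ins, not the real head chunk reader):

```go
package sketch

// mmappedChunk records where a flushed head chunk lives on disk.
type mmappedChunk struct {
	ref              uint64 // encodes chunk file number and offset
	minTime, maxTime int64
}

// buildChunkIndex groups on-disk head chunks by series reference, so that WAL
// replay can attach them when it recreates each series.
func buildChunkIndex(iterateDiskChunks func(yield func(seriesRef uint64, c mmappedChunk))) map[uint64][]mmappedChunk {
	bySeries := map[uint64][]mmappedChunk{}
	iterateDiskChunks(func(seriesRef uint64, c mmappedChunk) {
		bySeries[seriesRef] = append(bySeries[seriesRef], c)
	})
	return bySeries
}
```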
**Prombench results**
_WAL Replay_
1h WAL replay time
30% less WAL replay time - 4m31 vs 3m36
2h WAL replay time
20% less WAL replay time - 8m16 vs 7m
_Memory During WAL Replay_
High Churn:
10-15% less RAM - 32gb vs 28gb
20% less RAM after compaction 34gb vs 27gb
No Churn:
20-30% less RAM - 23gb vs 18gb
40% less RAM after compaction 32.5gb vs 20gb
Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
Prior to this commit we could have situations where we are creating an
appenderId but never creating an appender to go with it, therefore
blocking the low watermark.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Previously we were keeping up to around 6 hours of WAL around by
removing 1/3 every hour. This was excessive, so switch to removing 2/3,
which will keep up to around 3 hours of WAL around.
This will roughly halve the size of the WAL and halve startup time for
those who are I/O bound. This may increase the checkpoint size for
those with certain churn patterns, but by much less than we're saving
from the segments.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
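Purely illustrative arithmetic for the change in the checkpoint cut-off (segment numbering and the exact formula are assumptions):

```go
package sketch

// checkpointCutoff returns the last WAL segment to include in the checkpoint.
// Taking the first two thirds instead of one third keeps roughly 3h of WAL
// around instead of 6h.
func checkpointCutoff(first, last int) int {
	return first + (last-first)*2/3 // previously: first + (last-first)/3
}
```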
time.Unix attaches the local timezone, which can then
leak out (e.g. in the alert json). While this is harmless,
we should be consistent.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
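A tiny runnable illustration of the difference:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ts := int64(1600000000)
	fmt.Println(time.Unix(ts, 0))       // carries the local time zone
	fmt.Println(time.Unix(ts, 0).UTC()) // normalised, so serialised output is consistent
}
```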
* storage: Added Chunks{Queryable/Querier/SeriesSet/Series/Iterable}. Added generic Merge{SeriesSet/Querier} implementation.
## Rationales:
In many places (e.g. chunk Remote read, Thanos Receive fetching chunk from TSDB), we operate on encoded chunks not samples.
This means that we unnecessarily decode/encode, wasting CPU, time and memory.
This PR adds chunk iterator interfaces and makes the merge code reusable between both series sets.
I will make use of it in a following PR inside tsdb itself. For now fanout and the mergers implement it.
All merges now also allow passing series mergers. This opens the door for custom deduplication other than the TSDB vertical one (e.g. the offline one we have in Thanos).
## Changes
* Added Chunk versions of all iterating methods. It all starts in Querier/ChunkQuerier. The plan is that
Storage will implement both chunked and sample-based querying.
* Added Seek to chunks.Iterator interface for iterating over chunks.
* NewMergeChunkQuerier was added; both this and NewMergeQuerier now use genericMergeQuerier to share the code. Generic code was added.
* Improved tests.
* Added some TODO for further simplifications in next PRs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Moved s/Labeled/SeriesLabels as per Krasi suggestion.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Krasi's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Second iteration of Krasi comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Another round of comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that we rely in
Prometheus on remote read being sorted. This is because we use the selected data from remote reads in MergeSeriesSet,
which relies on sorting.
I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.
Also I don't know what TestSelectSorted was testing, but now it's testing sorting.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix bug with WAL watcher and Live Reader metrics usage.
Calling NewXMetrics when creating a Watcher or LiveReader results in a
registration error, which we're ignoring, and as a result other than the
first Watcher/Reader created, we had no metrics for either. So we would
only have metrics like Watcher Records Read for the first remote write
config in a users config file.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
This fixes #6992, which was introduced by #6777. There was an
intermediate component which translated TSDB errors into storage errors,
but that component was deleted and this bug went unnoticed until we
were looking at the Prombench results. Without this, scrape will fail
instead of dropping samples or using "Add" when the series have been
garbage collected.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
With defer having less of a performance penalty, there is no reason
not to do those crucial operations via defer.
Context: With isolation in place, if we forget to Commit/Rollback, the
low watermark will get stuck forever.
The current code should not have any bugs, but moving to defer helps
to avoid future bugs.
This is also moving the `closeAppend` in the `Commit` implementation
itself to defer. If logging to the WAL fails, we would have missed the
`closeAppend`.
Signed-off-by: beorn7 <beorn@grafana.com>
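A condensed sketch of the pattern (the method names are stand-ins for the real appender internals):

```go
package sketch

type headAppender struct{}

func (a *headAppender) closeAppend()    { /* release the isolation appendID */ }
func (a *headAppender) logToWAL() error { return nil }
func (a *headAppender) applySamples()   {}

// Commit releases the appendID via defer, so the isolation low watermark can
// never get stuck even if writing to the WAL fails.
func (a *headAppender) Commit() error {
	defer a.closeAppend()

	if err := a.logToWAL(); err != nil {
		return err
	}
	a.applySamples()
	return nil
}
```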
I think the previous behavior is problematic as it will leave
`memSeries` around that still have `pendingCommit` set to `true`.
The only case where this can happen in this code path is a failure to
write to the WAL, in which case we are probably in trouble anyway. I
believe, however, we should still try to do the right thing and do the
full rollback. This will implicitly try to write to the WAL again, but
this time without samples, which may even succeed. (But we propagate
the previous error in any case.)
This also adds `a.head.putSeriesBuffer(a.sampleSeries)` to Rollback,
which was previously missing.
Signed-off-by: beorn7 <beorn@grafana.com>
This is taken from #6918. Since we probably won't merge #6918 before
the release, we have to do this bit of it as it fixes an actual bug
(iso.closeAppend is not called if the append fails because of an error
logging to the WAL).
Signed-off-by: beorn7 <beorn@grafana.com>
* tsdb: don't allow ingesting empty labelsets
When we ingest an empty labelset into the head, further blocks cannot be
compacted, with the error:
```
level=error ts=2020-02-27T21:26:58.379Z caller=db.go:659 component=tsdb
msg="compaction failed" err="persist head block: write compaction:
add series: out-of-order series added with label set \"{}\" / prev:
\"{}\""
```
We should therefore reject those invalid empty labelsets upfront.
This can be reproduced with the following:
```
cat << END > prometheus.yml
scrape_configs:
- job_name: 'prometheus'
  scrape_interval: 1s
  basic_auth:
    username: test
    password: test
  metric_relabel_configs:
  - regex: ".*"
    action: labeldrop
  static_configs:
  - targets:
    - 127.0.1.1:9090
END
./prometheus --storage.tsdb.min-block-duration=1m
```
And wait a few minutes.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
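A minimal sketch of the upfront rejection (the label representation and error value are illustrative):

```go
package sketch

import "errors"

var errEmptyLabelSet = errors.New("empty label set")

// validateLabels rejects an all-dropped (empty) label set before it reaches
// the head, so later compactions cannot fail as shown above.
func validateLabels(lset map[string]string) error {
	if len(lset) == 0 {
		return errEmptyLabelSet
	}
	return nil
}
```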
Series() will fetch all the metadata for a series,
even if it's going to be filtered later due to time ranges.
For 1M series we save ~1.1s if you only need some of the data, but take an
extra ~0.2s if you want everything.
benchmark old ns/op new ns/op delta
BenchmarkHeadSeries/1of1000000-4 1443715987 131553480 -90.89%
BenchmarkHeadSeries/10of1000000-4 1433394040 130730596 -90.88%
BenchmarkHeadSeries/100of1000000-4 1437444672 131360813 -90.86%
BenchmarkHeadSeries/1000of1000000-4 1438958659 132573137 -90.79%
BenchmarkHeadSeries/10000of1000000-4 1438061766 145742377 -89.87%
BenchmarkHeadSeries/100000of1000000-4 1455060948 281659416 -80.64%
BenchmarkHeadSeries/1000000of1000000-4 1633524504 1803550153 +10.41%
benchmark old allocs new allocs delta
BenchmarkHeadSeries/1of1000000-4 4000055 28 -100.00%
BenchmarkHeadSeries/10of1000000-4 4000073 87 -100.00%
BenchmarkHeadSeries/100of1000000-4 4000253 630 -99.98%
BenchmarkHeadSeries/1000of1000000-4 4002053 6036 -99.85%
BenchmarkHeadSeries/10000of1000000-4 4020053 60054 -98.51%
BenchmarkHeadSeries/100000of1000000-4 4200053 600074 -85.71%
BenchmarkHeadSeries/1000000of1000000-4 6000053 6000094 +0.00%
benchmark old bytes new bytes delta
BenchmarkHeadSeries/1of1000000-4 229192184 2488 -100.00%
BenchmarkHeadSeries/10of1000000-4 229193336 5568 -100.00%
BenchmarkHeadSeries/100of1000000-4 229204856 35536 -99.98%
BenchmarkHeadSeries/1000of1000000-4 229320056 345104 -99.85%
BenchmarkHeadSeries/10000of1000000-4 230472056 3894673 -98.31%
BenchmarkHeadSeries/100000of1000000-4 241992056 40511632 -83.26%
BenchmarkHeadSeries/1000000of1000000-4 357192056 402380440 +12.65%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow up PRs.
* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports adjusted accordingly.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods I adjusted min max
based on that. This should help in terms of performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.
No logic was changed, only different source of abstractions, so no need for benchmarks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Exports metric for WAL write errors
Signed-off-by: John McBride <jpmmcbride@gmail.com>
* Correct name for counter
Signed-off-by: John McBride <jpmmcbride@gmail.com>
* Move WAL write failure to wal.go
Signed-off-by: John McBride <jpmmcbride@gmail.com>
* WAL write fail metric moved to Log for external consumers
Signed-off-by: John McBride <jpmmcbride@gmail.com>
* tsdb: register compactions_skipped_total
That metric was not registered.
I also reordered the metrics in the list.
* tsdb: display correct error when WAL can't be read
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Add back Windows CI; we lost it when tsdb was merged into the prometheus
repo. There are many tests failing outside tsdb, so only test tsdb for
now.
Fixes #6513
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Rather than buffer up symbols in RAM, do it one by one
during compaction. Then use the reader's symbol handling
for symbol lookups during the rest of the index write.
There is some slowdown in compaction, due to having to look through a file
rather than doing a hash lookup. This is noise compared to the overall cost of
compacting series with thousands of samples though.
benchmark old ns/op new ns/op delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 539917175 675341565 +25.08%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 2441815993 2477453524 +1.46%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 3978543559 3922909687 -1.40%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 8430219716 8586610007 +1.86%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 1786424591 1909552782 +6.89%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 5328998202 6020839950 +12.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 10085059958 11085278690 +9.92%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 25497010155 27018079806 +5.97%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 2427391406 2817217987 +16.06%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 2592965497 2538805050 -2.09%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 2437388343 2668012858 +9.46%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 2317095324 2787423966 +20.30%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 2600239857 2096973860 -19.35%
benchmark old allocs new allocs delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 500851 470794 -6.00%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 821527 791451 -3.66%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 1141562 1111508 -2.63%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 2141576 2111504 -1.40%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 871466 841424 -3.45%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 1941428 1911415 -1.55%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 3071573 3041510 -0.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 6771648 6741509 -0.45%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 731493 824888 +12.77%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 793918 887311 +11.76%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 811842 905204 +11.50%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 832244 925081 +11.16%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 921553 1019162 +10.59%
benchmark old bytes new bytes delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 40532648 35698276 -11.93%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 60340216 53409568 -11.49%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 81087336 72065552 -11.13%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 142485576 120878544 -15.16%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4 208661368 203831136 -2.31%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4 347345904 340484696 -1.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4 585185856 576244648 -1.53%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4 1357641792 1358966528 +0.10%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 126486664 119666744 -5.39%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 122323192 115117224 -5.89%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 126404504 119469864 -5.49%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 119047832 112230408 -5.73%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 136576016 116634800 -14.60%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Rather than keeping the entire symbol table in memory, keep every nth
offset and walk from there to the entry we need. This ends up slightly
slower, ~360ms per 1M series returned from PostingsForMatchers which is
not much considering the rest of the CPU such a query would go on to
use.
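A rough sketch of that lookup scheme (the on-disk reader is abstracted away as a callback, and the sampled table is assumed to start at symbol 0):

```go
package sketch

import "sort"

type sampledOffset struct {
	idx int // symbol number of this sampled entry
	off int // byte offset of that symbol in the on-disk table
}

// lookupSymbol resolves symbol number i by jumping to the nearest sampled
// entry at or before i and walking forward through the on-disk table.
// readSymbolAt returns the symbol at the given offset and the offset of the
// next symbol.
func lookupSymbol(i int, sampled []sampledOffset, readSymbolAt func(off int) (sym string, next int)) string {
	j := sort.Search(len(sampled), func(k int) bool { return sampled[k].idx > i }) - 1
	sym, next := readSymbolAt(sampled[j].off)
	for n := sampled[j].idx; n < i; n++ {
		sym, next = readSymbolAt(next)
	}
	return sym
}
```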
Make LabelValues use the postings tables, rather than having
to do symbol lookups. Use yoloString, as PostingsForMatchers
doesn't need the strings to stick around and adjust the API
call to keep the Querier open until it's all marshalled.
Remove allocatedSymbols memory optimisation, we no longer keep all the
symbol strings in heap memory. Remove LabelValuesFor and LabelIndices,
they're dead code. Ensure we still have tests for label indices,
and add a missing test that we can work with old V1 format index files.
PostingsForMatchers performance is slightly better, with a big drop in
allocation counts due to using yoloString for LabelValues:
benchmark old ns/op new ns/op delta
BenchmarkPostingsForMatchers/Block/n="1"-4 36698 36681 -0.05%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 522786 560887 +7.29%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 511652 537680 +5.09%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 522102 564239 +8.07%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 113689911 111795919 -1.67%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 135825572 132871085 -2.18%
BenchmarkPostingsForMatchers/Block/i=~""-4 40782628 38038181 -6.73%
BenchmarkPostingsForMatchers/Block/i!=""-4 31267869 29194327 -6.63%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 112733329 111568823 -1.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 112868153 111232029 -1.45%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 31338257 29349446 -6.35%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 32054482 29972436 -6.50%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 136504654 133968442 -1.86%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 27960350 27264997 -2.49%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 136765564 133860724 -2.12%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 163714583 159453668 -2.60%
benchmark old allocs new allocs delta
BenchmarkPostingsForMatchers/Block/n="1"-4 6 6 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 11 11 +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 11 11 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 17 15 -11.76%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 100012 12 -99.99%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 200040 100040 -49.99%
BenchmarkPostingsForMatchers/Block/i=~""-4 200045 100045 -49.99%
BenchmarkPostingsForMatchers/Block/i!=""-4 200041 100041 -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 100017 17 -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 100023 23 -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 200046 100046 -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 200050 100050 -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 200049 100049 -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 111150 11150 -89.97%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 200055 100055 -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 311238 111238 -64.26%
benchmark old bytes new bytes delta
BenchmarkPostingsForMatchers/Block/n="1"-4 296 296 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 424 424 +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 424 424 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 552 1544 +179.71%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 1600482 1606125 +0.35%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 17259065 17264709 +0.03%
BenchmarkPostingsForMatchers/Block/i=~""-4 17259150 17264780 +0.03%
BenchmarkPostingsForMatchers/Block/i!=""-4 17259048 17264680 +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 1600610 1606242 +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 1600813 1606434 +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 17259176 17264808 +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 17259304 17264936 +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 17259333 17264965 +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 3142628 3148262 +0.18%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 17259509 17265141 +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 20405680 20416944 +0.06%
However overall Select performance is down and involves more allocs, due to
having to do more than a simple map lookup to resolve a symbol, and because all
the strings returned are allocated:
benchmark old ns/op new ns/op delta
BenchmarkQuerierSelect/Block/1of1000000-4 506092636 862678244 +70.46%
BenchmarkQuerierSelect/Block/10of1000000-4 505638968 860917636 +70.26%
BenchmarkQuerierSelect/Block/100of1000000-4 505229450 882150048 +74.60%
BenchmarkQuerierSelect/Block/1000of1000000-4 515905414 862241115 +67.13%
BenchmarkQuerierSelect/Block/10000of1000000-4 516785354 874841110 +69.29%
BenchmarkQuerierSelect/Block/100000of1000000-4 540742808 907030187 +67.74%
BenchmarkQuerierSelect/Block/1000000of1000000-4 815224288 1181236903 +44.90%
benchmark old allocs new allocs delta
BenchmarkQuerierSelect/Block/1of1000000-4 4000020 6000020 +50.00%
BenchmarkQuerierSelect/Block/10of1000000-4 4000038 6000038 +50.00%
BenchmarkQuerierSelect/Block/100of1000000-4 4000218 6000218 +50.00%
BenchmarkQuerierSelect/Block/1000of1000000-4 4002018 6002018 +49.97%
BenchmarkQuerierSelect/Block/10000of1000000-4 4020018 6020018 +49.75%
BenchmarkQuerierSelect/Block/100000of1000000-4 4200018 6200018 +47.62%
BenchmarkQuerierSelect/Block/1000000of1000000-4 6000018 8000019 +33.33%
benchmark old bytes new bytes delta
BenchmarkQuerierSelect/Block/1of1000000-4 176001468 227201476 +29.09%
BenchmarkQuerierSelect/Block/10of1000000-4 176002620 227202628 +29.09%
BenchmarkQuerierSelect/Block/100of1000000-4 176014140 227214148 +29.09%
BenchmarkQuerierSelect/Block/1000of1000000-4 176129340 227329348 +29.07%
BenchmarkQuerierSelect/Block/10000of1000000-4 177281340 228481348 +28.88%
BenchmarkQuerierSelect/Block/100000of1000000-4 188801340 240001348 +27.12%
BenchmarkQuerierSelect/Block/1000000of1000000-4 304001340 355201616 +16.84%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
With recent speed improvements to populate block,
the cancellation test now fails regularly on CI.
Use contexts to get the index writer to shut down
much faster, and that allows us to make the cancellation
test faster too.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Flushing buffers and doing a pwrite per posting is expensive
time-wise, so go back to the old way for those. This doubles
our memory usage, but that's still small as it's only
~8 bytes per time series in the index. This is 30-40% faster.
benchmark old ns/op new ns/op delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 1101429174 724362123 -34.23%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 1074466374 720977022 -32.90%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 1166510282 677702636 -41.90%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 1075013071 696855960 -35.18%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 1231673790 829328610 -32.67%
benchmark old allocs new allocs delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 832571 731435 -12.15%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 894875 793823 -11.29%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 912931 811804 -11.08%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 933511 832366 -10.83%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 1022791 921554 -9.90%
benchmark old bytes new bytes delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4 129063496 126472364 -2.01%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4 124154888 122300764 -1.49%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4 128790648 126394856 -1.86%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4 120570696 118946548 -1.35%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4 138754288 136317432 -1.76%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Rather than building up a 2nd copy of all the posting
tables, construct it from the data we've already written
to disk. This takes more time, but saves memory.
Current benchmark numbers have this as slightly faster, but that's
likely due to the synthetic data not having many label names.
Memory usage is roughly halved for the relevant bits.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Rather than keeping the offset of each postings list, instead
keep every nth offset of the postings lists. As postings
list offsets have always been sorted, we can then get to the closest
entry before the one we want and iterate forwards.
I haven't done much tuning on the 32 number, it was chosen to try
not to read through more than a 4k page of data.
Switch to a bulk interface for fetching postings. Use it to avoid having
to re-read parts of the posting offset table when querying lots of it.
For an index like the one BenchmarkHeadPostingForMatchers uses, RAM
for r.postings drops from 3.79MB to 80.19kB, or about 48x.
Bytes allocated go down by 30%, and surprisingly CPU usage drops by
4-6% for typical queries too.
benchmark old ns/op new ns/op delta
BenchmarkPostingsForMatchers/Block/n="1"-4 35231 36673 +4.09%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 563380 540627 -4.04%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 536782 534186 -0.48%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 533990 541550 +1.42%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 113374598 117969608 +4.05%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 146329884 139651442 -4.56%
BenchmarkPostingsForMatchers/Block/i=~""-4 50346510 44961127 -10.70%
BenchmarkPostingsForMatchers/Block/i!=""-4 41261550 35356165 -14.31%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 112544418 116904010 +3.87%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 112487086 116864918 +3.89%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 41094758 35457904 -13.72%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 41906372 36151473 -13.73%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 147262414 140424800 -4.64%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 28615629 27872072 -2.60%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 147117177 140462403 -4.52%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 175096826 167902298 -4.11%
benchmark old allocs new allocs delta
BenchmarkPostingsForMatchers/Block/n="1"-4 4 6 +50.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 7 11 +57.14%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 7 11 +57.14%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 15 17 +13.33%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 100010 100012 +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 200069 200040 -0.01%
BenchmarkPostingsForMatchers/Block/i=~""-4 200072 200045 -0.01%
BenchmarkPostingsForMatchers/Block/i!=""-4 200070 200041 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 100013 100017 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 100017 100023 +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 200073 200046 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 200075 200050 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 200074 200049 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 111165 111150 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 200078 200055 -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 311282 311238 -0.01%
benchmark old bytes new bytes delta
BenchmarkPostingsForMatchers/Block/n="1"-4 264 296 +12.12%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4 360 424 +17.78%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4 360 424 +17.78%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4 520 552 +6.15%
BenchmarkPostingsForMatchers/Block/i=~".*"-4 1600461 1600482 +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4 24900801 17259077 -30.69%
BenchmarkPostingsForMatchers/Block/i=~""-4 24900836 17259151 -30.69%
BenchmarkPostingsForMatchers/Block/i!=""-4 24900760 17259048 -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4 1600557 1600621 +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4 1600717 1600813 +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4 24900856 17259176 -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4 24900952 17259304 -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4 24900993 17259333 -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4 3788311 3142630 -17.04%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4 24901137 17259509 -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4 28693086 20405680 -28.88%
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
We can instead write it as we go, and then go back and write in the
length at the end.
Also fix the compaction benchmark, which indicates no changes.
For the benchmark, this brings maximum memory usage of the buffers
from ~200kB down to 128B.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
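A sketch of the write-then-backfill trick (illustrative only; uses a plain os.File and a 4-byte big-endian length):

```go
package sketch

import (
	"encoding/binary"
	"io"
	"os"
)

// writeWithLengthPrefix reserves space for the length, streams the body
// straight to the file, then seeks back and fills in the real length, instead
// of buffering the whole body in memory first.
func writeWithLengthPrefix(f *os.File, writeBody func(w io.Writer) (int64, error)) error {
	lenPos, err := f.Seek(0, io.SeekCurrent)
	if err != nil {
		return err
	}
	if _, err := f.Write(make([]byte, 4)); err != nil { // placeholder for the length
		return err
	}
	n, err := writeBody(f)
	if err != nil {
		return err
	}
	if _, err := f.Seek(lenPos, io.SeekStart); err != nil {
		return err
	}
	var lenBuf [4]byte
	binary.BigEndian.PutUint32(lenBuf[:], uint32(n))
	if _, err := f.Write(lenBuf[:]); err != nil {
		return err
	}
	// Return to the end so the caller can keep appending.
	_, err = f.Seek(0, io.SeekEnd)
	return err
}
```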
* Fix tsdb panic when querying corrupted chunks.
check that the chunk segment has enough data to read all chunk pieces.
* refactor, simplify and add tests.
* simplify WriteChunks implementation
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
* Added CreateBlock and CreateHead functions to a new file to make them reusable across packages.
Signed-off-by: Dipack P Panjabi <dipack.panjabi@gmail.com>
* Make WAL replay benchmark more representative
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
* Move decoding records from the WAL into goroutine
Decoding the WAL records accounts for a significant amount of time on
startup, and can be done in parallel with creating series/samples to
speed up startup. However, records still must be handled in order, so
only a single goroutine can do the decoding.
benchmark                                                                old ns/op    new ns/op    delta
BenchmarkLoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200-8   481607033    391971490    -18.61%
BenchmarkLoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50-8   836394378    629067006    -24.79%
BenchmarkLoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480-8   348238658    234218667    -32.74%
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
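A simplified sketch of the pipeline (the reader, decoder, and apply functions are stand-ins for the real WAL reader and record handling):

```go
package sketch

// replayWAL decodes records in a single separate goroutine (preserving WAL
// order) while the main loop creates series and appends samples.
func replayWAL(readRecord func() ([]byte, bool), decode func([]byte) interface{}, apply func(interface{})) {
	decoded := make(chan interface{}, 10) // small buffer keeps both sides busy

	go func() {
		defer close(decoded)
		for {
			rec, ok := readRecord()
			if !ok {
				return
			}
			decoded <- decode(rec) // decoding happens off the apply path
		}
	}()

	for d := range decoded {
		apply(d) // series creation and sample appends, still strictly in order
	}
}
```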
* Adding TSDB Head Stats like cardinality to Status Page
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Moving mutex to Head
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Renaming variables
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Renaming variables and html
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Removing unwanted whitespaces
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Adding Tests, Benchmarks and Max Heap for Postings Stats
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Adding more tests for postingstats and web handler
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Adding more tests for postingstats and web handler
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Remove generated asset file that is no longer used
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
* Changing comment and variable name for more readability
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
* Using time.Duration in postings status function and removing refresh button from web page
Signed-off-by: Sharad Gaur <sgaur@splunk.com>
The WAL Watcher replays a checkpoint after it is created in order to
garbage collect series that no longer exist in the WAL. Currently the
garbage collection process is done serially with reading from the tip of
the WAL which can cause large delays in writing samples to remote
storage just after compaction occurs.
This also fixes a memory leak where dropped series are not cleaned up as
part of the SeriesReset process.
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
This PR gives the readonly DB the ability to create blocks from the WAL.
In order to implement this, we modify DBReadOnly.Blocks() to return an
empty slice and no error if no blocks are found.
xref: https://github.com/prometheus/tsdb/issues/346#issuecomment-520786524
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>