* write chunks via queue, predicting chunkrefs
* fixes
* linter
* cleanup
* better test coverage
* deduplicate operations when cutting file
* better comments
* fix race
* improvements based on PR feedback
* comments
* concurrent correctness test of write queue
* linter
* use isStarted flag in chunk write queue
* separate mutex for isStarted
* fixing tests
* fix race in test
* PR feedback
* add license headers
* PR feedback
* pr feedback
* don't recreate workerCtrl channel
* address PR feedback in high concurrency read and write test
* refactor the high concurrency write queue test
* pr feedback
* use require.Eventually
* typo
* Fixing comment
Co-authored-by: Peter Štibraný <peter.stibrany@grafana.com>
* use buffered chan for chunk write queue
* fix races
* address feedback on PR
* better comment
* less type casts
* don't reimplement varint
* move mtx above property it protects
* improve tests by using wg instead of inspecting queue
* add comment
* add comment
* writeChunkF comment
* rename isStarted -> isRunning
* prevent panic when double stopping
* fix variable name in comment
* delete outdated comment
* naming and comment fixes in TestChunkWriteQueue_WrappingAroundSizeLimit
* use require.False instead of require.Equal(t, false
* always enable chunk write queue
* don't call callback with lock held
* undo rename of curFileSequence to curFileSeq
* fix accidental change
* fix comment
* Better syntax
Co-authored-by: Peter Štibraný <peter.stibrany@grafana.com>
* Better wording in comment
Co-authored-by: Peter Štibraný <peter.stibrany@grafana.com>
* better comments to explain the use of coordinator chans in test
* fix test which broke due to the async writing of chunks
* undo previous renaming of shadowing variable
* rename var threadID to workerID
* rename variables based on PR feedback
* rename pos -> ts
* better implementation of whileNotCanceled
* update querySeriesRef before using it
* rename labels -> lbls
* initialize the chunk write queue operations metric with all possible labels
* don't return error from CutNewFile
* recreate chunk ref map regularly to free memory
* limit how often we recreate the chunk ref map
* fix typo in comment
* better comment
Co-authored-by: Peter Štibraný <peter.stibrany@grafana.com>
* TSDB: demystify seriesRefs and ChunkRefs
The TSDB package contains many types of series and chunk references,
all shrouded in uint types. Often the same uint value may
actually mean one of several different things, in non-obvious ways.
This PR aims to clarify the code and make it much quicker to navigate
to the relevant docs, usages, etc.
Concretely:
* Use appropriately named types and document their semantics and
relations.
* Make multiplexing and demuxing of types explicit
(on the boundaries between concrete implementations and generic
interfaces).
* Casting between different types should be free. None of the changes
should have any impact on how the code runs.
TODO: Implement BlockSeriesRef where appropriate (for a future PR)
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
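To illustrate the idea (a minimal sketch; the type names and the 40-bit split below are assumptions, not necessarily the exact types introduced by this change), dedicated reference types with an explicit pack/unpack boundary could look like this:
```go
package chunks

// HeadSeriesRef and HeadChunkID are dedicated named types instead of bare
// uint64s: the values are the same, but the meaning is explicit and every
// usage site can be found (and linked back to these docs) trivially.
type HeadSeriesRef uint64 // in-memory reference of a series in the head block
type HeadChunkID uint64   // sequence number of a chunk within one head series

// HeadChunkRef multiplexes a series reference and a chunk ID into a single
// value; packing and unpacking are explicit functions rather than ad-hoc
// bit shifting scattered across call sites.
type HeadChunkRef uint64

// NewHeadChunkRef packs the two components. The 40-bit split here is an
// illustrative assumption.
func NewHeadChunkRef(s HeadSeriesRef, id HeadChunkID) HeadChunkRef {
	return HeadChunkRef(uint64(s)<<40 | uint64(id))
}

// Unpack is the explicit demuxing counterpart.
func (r HeadChunkRef) Unpack() (HeadSeriesRef, HeadChunkID) {
	return HeadSeriesRef(r >> 40), HeadChunkID(r & (1<<40 - 1))
}
```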
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* agent: demystify seriesRefs and ChunkRefs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Use dedicated Ref type
Throughout the code base, there are reference types masked as
regular integers. Let's use dedicated types. They are
equivalent, but clearer semantically.
This also makes it trivial to find where they are used,
and from uses, find the centralized docs.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* postpone some work until after possible return
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* clarify
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* rename feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* skip header is up to caller
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Testify: move to require
Moving testify to require to fail tests early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Refactor test assertions
This pull request gets rid of assert.True where possible to use
fine-grained assertions.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
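As a brief illustration of the two testify refactors above (the snippet is illustrative, not code from this change): `require` aborts the test on the first failure, and fine-grained assertions such as `require.Equal` report the compared values instead of a bare boolean mismatch.
```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestRefactoredAssertions(t *testing.T) {
	got, err := 42, error(nil)

	// Before: assert continues after a failure, and assert.True only
	// reports "should be true" without showing the compared values.
	assert.True(t, err == nil)
	assert.True(t, got == 42)

	// After: require fails the test immediately, and the fine-grained
	// assertions print both expected and actual values on failure.
	require.NoError(t, err)
	require.Equal(t, 42, got)
}
```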
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.
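A minimal, self-contained sketch of that flush-and-m-map step (all names and the 120-samples-per-chunk cut-off are illustrative assumptions, not the real head code):
```go
package main

import "fmt"

// headChunk holds the samples of the currently open in-memory chunk.
type headChunk struct {
	minTime, maxTime int64
	samples          []float64
}

func (c *headChunk) full() bool { return len(c.samples) >= 120 }

// mmappedChunk is all that stays in memory after flushing: a reference into
// the on-disk head chunk files plus the chunk's time range.
type mmappedChunk struct {
	ref              uint64
	minTime, maxTime int64
}

type memSeries struct {
	head    *headChunk
	mmapped []mmappedChunk
}

// writeAndMmap stands in for the chunk disk mapper: it would append the
// chunk to the current head chunk file, m-map it, and return a reference
// (file sequence number + offset within the file).
func writeAndMmap(c *headChunk) uint64 { return 0 }

func (s *memSeries) append(t int64, v float64) {
	if s.head != nil && s.head.full() {
		ref := writeAndMmap(s.head)
		s.mmapped = append(s.mmapped, mmappedChunk{ref: ref, minTime: s.head.minTime, maxTime: s.head.maxTime})
		s.head = nil // the flushed samples no longer occupy heap memory
	}
	if s.head == nil {
		s.head = &headChunk{minTime: t}
	}
	s.head.samples = append(s.head.samples, v)
	s.head.maxTime = t
}

func main() {
	s := &memSeries{}
	for ts := int64(0); ts < 300; ts++ {
		s.append(ts, float64(ts))
	}
	fmt.Println("m-mapped chunks:", len(s.mmapped)) // 2 full chunks flushed
}
```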
Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look for its m-mapped chunks in the map created before and add them to that series (see the sketch after this list).
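A hedged sketch of those two stages, assuming a hypothetical iterator over the on-disk head chunks (the real chunk disk mapper and WAL record handling look different):
```go
// Illustrative sketch only; the iterator callback and the record handling
// are assumptions standing in for the real chunk-disk-mapper / WAL code.
package replay

type seriesRef uint64

// mmappedChunk is the in-memory handle of an on-disk chunk: a reference
// (file sequence + offset) and the time range, but not the samples.
type mmappedChunk struct {
	ref              uint64
	minTime, maxTime int64
}

// Stage 1: iterate the head chunk files once and group chunk references by
// the series reference stored in each chunk's header.
func buildMmappedChunkMap(iterate func(yield func(sr seriesRef, chk mmappedChunk))) map[seriesRef][]mmappedChunk {
	byRef := map[seriesRef][]mmappedChunk{}
	iterate(func(sr seriesRef, chk mmappedChunk) {
		byRef[sr] = append(byRef[sr], chk)
	})
	return byRef
}

// memSeries is the in-memory series; after replay it holds the m-mapped
// chunk references instead of re-ingested samples.
type memSeries struct {
	ref     seriesRef
	mmapped []mmappedChunk
}

// Stage 2: during WAL replay, when a series record creates a new series,
// attach the chunks collected in stage 1 to it.
func newSeriesFromWAL(ref seriesRef, byRef map[seriesRef][]mmappedChunk) *memSeries {
	return &memSeries{ref: ref, mmapped: byRef[ref]}
}
```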
If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index for mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which includes the offsets of the chunks in the chunks file - for example, the chunks of a series have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in the chunk header with the offset of that chunk in that file.
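A hedged sketch of decoding one such record from an m-mapped head chunk file, assuming the layout described in head_chunks.md linked above (series ref, mint, maxt, encoding, uvarint length, chunk data, CRC32); the helper names and error handling are illustrative:
```go
package headchunks

import (
	"encoding/binary"
	"errors"
)

type chunkMeta struct {
	seriesRef        uint64 // which series this chunk belongs to
	minTime, maxTime int64
	encoding         byte
	data             []byte
	fileOffset       int // offset of this record; kept in memory as the chunk "ref"
}

// readChunk decodes the record starting at off and returns the offset of the
// next record, so a caller can iterate a whole file and rebuild the
// in-memory series-ref -> offsets mapping.
func readChunk(b []byte, off int) (chunkMeta, int, error) {
	if off+25 > len(b) {
		return chunkMeta{}, 0, errors.New("truncated chunk header")
	}
	m := chunkMeta{fileOffset: off}
	m.seriesRef = binary.BigEndian.Uint64(b[off:])
	m.minTime = int64(binary.BigEndian.Uint64(b[off+8:]))
	m.maxTime = int64(binary.BigEndian.Uint64(b[off+16:]))
	m.encoding = b[off+24]
	dataLen, n := binary.Uvarint(b[off+25:])
	if n <= 0 {
		return chunkMeta{}, 0, errors.New("bad chunk length")
	}
	start := off + 25 + n
	end := start + int(dataLen) + 4 // chunk data followed by 4-byte CRC32
	if end > len(b) {
		return chunkMeta{}, 0, errors.New("truncated chunk data")
	}
	m.data = b[start : end-4]
	return m, end, nil
}
```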
**Prombench results**
_WAL Replay_
- 1h WAL replay time: 30% less - 4m31 vs 3m36
- 2h WAL replay time: 20% less - 8m16 vs 7m
_Memory During WAL Replay_
- High churn: 10-15% less RAM - 32gb vs 28gb; 20% less RAM after compaction - 34gb vs 27gb
- No churn: 20-30% less RAM - 23gb vs 18gb; 40% less RAM after compaction - 32.5gb vs 20gb
Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>