* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir
Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
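A minimal sketch of the refactor pattern (the listing function is hypothetical, but `os.ReadDir` and `fs.DirEntry` are the standard-library APIs involved; `os.ReadDir` replaces `ioutil.ReadDir` and returns the cheaper `[]fs.DirEntry`):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func listChunkFiles(dir string) error {
	entries, err := os.ReadDir(dir) // replaces ioutil.ReadDir
	if err != nil {
		return err
	}
	for _, e := range entries {
		// fs.DirEntry defers the stat call; fetch the FileInfo only when needed.
		info, err := e.Info()
		if err != nil {
			return err
		}
		fmt.Println(e.Name(), info.Size())
	}
	return nil
}

func main() {
	if err := listChunkFiles("."); err != nil {
		log.Fatal(err)
	}
}
```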
* Write chunks via queue, predicting the refs
Our load tests have shown that there is a latency spike in the
remote write handler whenever the head chunks need to be written,
because chunkDiskMapper.WriteChunk() blocks until the chunks are written
to disk.
This adds a queue to the chunk disk mapper, which makes the WriteChunk()
method non-blocking unless the queue is full. Reads can still be served
from the queue.
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
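A minimal sketch of the idea, with hypothetical names rather than the actual chunkDiskMapper code: a write enqueues a job and returns immediately unless the queue is full, and reads can still be answered from the not-yet-persisted jobs:

```go
package chunkqueue

import "sync"

type chunkRef uint64

type writeJob struct {
	ref chunkRef
	chk []byte
}

type writeQueue struct {
	mtx      sync.RWMutex
	inFlight map[chunkRef][]byte // chunks queued but not yet written
	jobs     chan writeJob
}

func newWriteQueue(size int) *writeQueue {
	q := &writeQueue{
		inFlight: make(map[chunkRef][]byte),
		jobs:     make(chan writeJob, size),
	}
	go q.run()
	return q
}

// addJob returns immediately; it only blocks when the queue is full.
func (q *writeQueue) addJob(ref chunkRef, chk []byte) {
	q.mtx.Lock()
	q.inFlight[ref] = chk
	q.mtx.Unlock()
	q.jobs <- writeJob{ref: ref, chk: chk}
}

// get serves reads for chunks that are still waiting in the queue.
func (q *writeQueue) get(ref chunkRef) ([]byte, bool) {
	q.mtx.RLock()
	defer q.mtx.RUnlock()
	chk, ok := q.inFlight[ref]
	return chk, ok
}

func (q *writeQueue) run() {
	for job := range q.jobs {
		writeToDisk(job) // the slow, blocking part
		q.mtx.Lock()
		delete(q.inFlight, job.ref)
		q.mtx.Unlock()
	}
}

func writeToDisk(writeJob) { /* os.File write + fsync in the real mapper */ }
```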
* address PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* initialize metrics without .Add(0)
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* change isRunningMtx to normal lock
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* do not re-initialize chunkrefmap
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* update metric outside of lock scope
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* add benchmark for adding job to chunk write queue
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* remove unnecessary "success" var
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* gofumpt -extra
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* avoid WithLabelValues call in addJob
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* format comments
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* addressing PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* rename cutExpectRef to cutAndExpectRef
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* use head.Init() instead of .initTime()
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* address PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* PR feedback
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* update test according to PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* replace callbackWg -> awaitCb
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* better test of truncation with empty files
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* replace callbackWg -> awaitCb
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* TSDB: demystify seriesRefs and ChunkRefs
The TSDB package contains many types of series and chunk references,
all shrouded in uint types. Often the same uint value may
actually represent any of several reference types, in non-obvious ways.
This PR aims to clarify the code and make it much quicker to navigate
to the relevant docs, usage, etc.
Concretely:
* Use appropriately named types and document their semantics and
relations.
* Make multiplexing and demuxing of types explicit
(on the boundaries between concrete implementations and generic
interfaces).
* Casting between different types should be free. None of the changes
should have any impact on how the code runs.
TODO: Implement BlockSeriesRef where appropriate (for a future PR)
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
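An illustrative sketch of the approach (names and bit layout simplified; see the TSDB source for the real definitions): distinct named uint64 types document each kind of reference, and multiplexing/demuxing becomes explicit at the boundary:

```go
package refs

// HeadSeriesRef refers to a series in the head block.
type HeadSeriesRef uint64

// HeadChunkID identifies one chunk within a single series in the head.
type HeadChunkID uint64

// HeadChunkRef multiplexes a series and a chunk within it into one value.
type HeadChunkRef uint64

// NewHeadChunkRef makes the muxing explicit at the boundary between the
// head implementation and the generic chunk-reader interface.
func NewHeadChunkRef(s HeadSeriesRef, id HeadChunkID) HeadChunkRef {
	return HeadChunkRef(uint64(s)<<24 | uint64(id))
}

// Unpack is the explicit demux; the casts themselves cost nothing at runtime.
func (r HeadChunkRef) Unpack() (HeadSeriesRef, HeadChunkID) {
	return HeadSeriesRef(r >> 24), HeadChunkID(r & ((1 << 24) - 1))
}
```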
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* agent: demystify seriesRefs and ChunkRefs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Use dedicated Ref type
Throughout the code base, there are reference types masked as
regular integers. Let's use dedicated types. They are
equivalent, but clearer semantically.
This also makes it trivial to find where they are used,
and from uses, find the centralized docs.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
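A hypothetical before/after showing the pattern (not actual Prometheus code):

```go
package refs

// ChunkRef is equivalent to a bare uint64 but names the semantics.
type ChunkRef uint64

type Reader struct{}

// Before: func (r *Reader) Chunk(ref uint64) ([]byte, error)
// After: the dedicated type makes every use of a chunk reference
// trivially searchable, and links readers back to the type's docs.
func (r *Reader) Chunk(ref ChunkRef) ([]byte, error) {
	_ = ref // resolve the chunk behind the reference here
	return nil, nil
}
```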
* postpone some work until after possible return
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* clarify
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* rename feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* skip header is up to caller
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Testify: move to require
Move from testify's assert to require so that tests fail early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
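A sketch of the difference (the `openBlock` helper is hypothetical): assert records a failure and keeps running, so a later statement may dereference bad state, while require stops the test on the spot:

```go
package tsdbtest

import (
	"testing"

	"github.com/stretchr/testify/require"
)

type block struct{ name string }

// openBlock is a hypothetical helper standing in for any fallible setup.
func openBlock(dir string) (*block, error) { return &block{name: "block"}, nil }

func TestOpenBlock(t *testing.T) {
	b, err := openBlock("testdata/block")
	// assert.NoError would record the failure and continue, letting the
	// next line dereference a nil *block; require fails the test here.
	require.NoError(t, err)
	require.Equal(t, "block", b.name)
}
```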
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Refactor test assertions
This pull request gets rid of assert.True where possible, in favor of
fine-grained assertions.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
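A sketch of why the fine-grained form is preferable (not a diff from the actual PR): on failure, the coarse assertion can only say the condition was false, while the fine-grained one prints both values:

```go
package tsdbtest

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestSampleCount(t *testing.T) {
	got, want := 3, 3

	// Coarse: on failure testify can only report "Should be true".
	require.True(t, got == want)

	// Fine-grained: on failure testify reports expected vs actual values.
	require.Equal(t, want, got)
}
```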
When appending to the head, a full chunk is flushed to disk and m-mapped (memory-mapped) to free up memory.
Prometheus startup now happens in these stages (see the sketch after the list):
- Iterate the m-mapped chunks from disk and build a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look up its m-mapped chunks in the map built before and attach them to that series.
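A sketch of the two stages with hypothetical types (the real logic lives in the tsdb package):

```go
package replay

// HeadSeriesRef identifies a series in the head (hypothetical definition).
type HeadSeriesRef uint64

// mmappedChunk records where a flushed head chunk lives on disk.
type mmappedChunk struct {
	fileIndex        int
	offset           uint32
	minTime, maxTime int64
}

// diskChunk pairs a chunk with the series ref read from its header.
type diskChunk struct {
	seriesRef HeadSeriesRef
	chunk     mmappedChunk
}

// Stage 1: group all on-disk head chunks by series reference. No index is
// needed because each chunk header carries its series ref.
func indexMmappedChunks(all []diskChunk) map[HeadSeriesRef][]mmappedChunk {
	bySeries := make(map[HeadSeriesRef][]mmappedChunk, len(all))
	for _, c := range all {
		bySeries[c.seriesRef] = append(bySeries[c.seriesRef], c.chunk)
	}
	return bySeries
}

// Stage 2: during WAL replay, a newly created series picks up its m-mapped
// chunks here; only samples newer than maxTime need to be re-appended.
func attachOnSeriesCreate(ref HeadSeriesRef, bySeries map[HeadSeriesRef][]mmappedChunk) []mmappedChunk {
	return bySeries[ref]
}
```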
If a head chunk is corrupted, the corrupted chunk and all chunks after it are deleted, and the data after the corruption point is recovered from the existing WAL, which means that a corruption in the m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index mapping series to chunks.
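A sketch of decoding one record per that layout (illustrative, assuming the big-endian fields described in the linked doc; CRC verification omitted):

```go
package headchunks

import (
	"encoding/binary"
	"fmt"
)

// readChunk decodes one chunk record starting at b[0]: series ref (8 bytes),
// mint (8), maxt (8), encoding (1), data length (uvarint), data, CRC32 (4).
// It returns the number of bytes consumed; the CRC is not verified here.
func readChunk(b []byte) (seriesRef uint64, mint, maxt int64, data []byte, n int, err error) {
	if len(b) < 26 {
		return 0, 0, 0, nil, 0, fmt.Errorf("truncated chunk header")
	}
	seriesRef = binary.BigEndian.Uint64(b[0:8])
	mint = int64(binary.BigEndian.Uint64(b[8:16]))
	maxt = int64(binary.BigEndian.Uint64(b[16:24]))
	// b[24] is the chunk encoding byte.
	dataLen, m := binary.Uvarint(b[25:])
	if m <= 0 {
		return 0, 0, 0, nil, 0, fmt.Errorf("bad chunk data length")
	}
	start := 25 + m
	end := start + int(dataLen) + 4 // trailing 4 bytes are the CRC32
	if end > len(b) {
		return 0, 0, 0, nil, 0, fmt.Errorf("truncated chunk data")
	}
	return seriesRef, mint, maxt, b[start : end-4], end, nil
}
```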
[Block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which stores the offsets of the chunks in the chunk files; for example, the chunks of a given series ID have offsets 200, 500, etc. in the chunk files.
For m-mapped chunks, the offsets are kept in memory instead. During WAL replay, these offsets are restored by iterating over all m-mapped chunks as described above, using the series ID present in each chunk's header and the chunk's offset within its file.
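A sketch of how such an in-memory reference can pack the file index and byte offset into one value (the packing shown is illustrative; see the Prometheus source for the exact layout):

```go
package headchunks

// ChunkDiskMapperRef encodes where an m-mapped chunk lives: which chunk
// file it is in, and at what byte offset within that file.
type ChunkDiskMapperRef uint64

func newRef(fileIndex, offset uint64) ChunkDiskMapperRef {
	return ChunkDiskMapperRef(fileIndex<<32 | offset)
}

// unpack is the inverse: recover file index and offset from the reference.
func (r ChunkDiskMapperRef) unpack() (fileIndex, offset int) {
	return int(r >> 32), int(r & 0xFFFFFFFF)
}
```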
**Prombench results**

_WAL replay time_
- 1h of WAL: 30% less replay time (4m31s vs 3m36s)
- 2h of WAL: 20% less replay time (8m16s vs 7m0s)

_Memory during WAL replay_
- High churn: 10-15% less RAM (32GB vs 28GB); 20% less RAM after compaction (34GB vs 27GB)
- No churn: 20-30% less RAM (23GB vs 18GB); 40% less RAM after compaction (32.5GB vs 20GB)

Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932).
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>