Commit 3c80963e81
Currently memSeries holds a single in-memory head chunk and a slice of mmapped chunks. When append() is called on memSeries it may decide that a new headChunk is needed for the given append() call. If that happens it first mmaps the existing head chunk, and only then creates a new, empty headChunk and continues appending the sample to it. Since appending samples takes a write lock on memSeries, no other read or write can happen until the append completes. So when an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes. Mmapping itself is serialised behind a lock, which means that the more chunks there are to mmap, the longer each chunk may wait to be mmapped.

If enough chunks require mmapping, some memSeries will be locked long enough to start affecting queries and scrapes. Queries might time out, since by default they have a two-minute timeout. Scrapes will block inside the append() call, leaving a gap between samples. This first affects range queries and calls using rate() and similar functions, since the time range requested in the query might contain too few samples to calculate anything.

To avoid this we need to remove mmapping from the append path, since mmapping is blocking. But that means that when we cut a new head chunk we must keep the old one around, so we can mmap it later. This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples, while older, yet-to-be-mmapped chunks are linked behind it. Mmapping is done on a schedule by iterating over all memSeries one by one. This way we control when mmapping happens, since we trigger it manually, which reduces the risk that it has to compete for mmap locks with other chunks.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
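A minimal sketch of the linked-list approach described above, not the actual Prometheus code: names such as memChunk, cutNewHeadChunk and mmapHeadChunks are illustrative only. The key property is that cutting a new head chunk only links the old one behind it, while the blocking disk work happens later, outside the append path.

```go
package main

import (
	"fmt"
	"sync"
)

// memChunk stands in for one head chunk; real chunks hold encoded samples.
type memChunk struct {
	minTime, maxTime int64
	prev             *memChunk // older chunk that has not been mmapped yet
}

// memSeries holds the open head chunk; closed chunks hang off headChunk.prev.
type memSeries struct {
	mu        sync.Mutex
	headChunk *memChunk // newest, "open" chunk receiving appends
}

// cutNewHeadChunk links the current head chunk behind a fresh one.
// No mmapping happens here, so the append path never blocks on disk work.
func (s *memSeries) cutNewHeadChunk(mint int64) *memChunk {
	s.mu.Lock()
	defer s.mu.Unlock()
	c := &memChunk{minTime: mint, prev: s.headChunk}
	s.headChunk = c
	return c
}

// mmapHeadChunks runs on a schedule, outside the append path. It hands all
// closed chunks to the writer (oldest first) and then unlinks them.
func (s *memSeries) mmapHeadChunks(write func(*memChunk)) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.headChunk == nil {
		return
	}
	var closed []*memChunk
	for c := s.headChunk.prev; c != nil; c = c.prev {
		closed = append(closed, c)
	}
	for i := len(closed) - 1; i >= 0; i-- { // oldest first
		write(closed[i])
	}
	s.headChunk.prev = nil // closed chunks are now on disk; drop the list
}

func main() {
	s := &memSeries{}
	s.cutNewHeadChunk(0)
	s.cutNewHeadChunk(100) // the first chunk is now closed but kept in memory
	s.cutNewHeadChunk(200)
	s.mmapHeadChunks(func(c *memChunk) {
		fmt.Printf("mmapping chunk starting at t=%d\n", c.minTime)
	})
}
```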
TSDB
This directory contains the Prometheus TSDB (Time Series DataBase) library, which handles storage and querying of all Prometheus v2 data.
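The library can also be embedded in other Go programs. Below is a minimal sketch of opening a database, assuming a recent version of the module; the exact tsdb.Open signature has varied across releases, and passing nil for the logger, registerer and stats arguments is assumed to fall back to defaults.

```go
package main

import (
	"log"

	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	// Open (or create) a TSDB under ./data with default options.
	db, err := tsdb.Open("data", nil, nil, tsdb.DefaultOptions(), nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```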
Documentation
External resources
- A writeup of the original design can be found here.
- Video: Storing 16 Bytes at Scale from PromCon 2017.
- Compression is based on the Gorilla TSDB white paper.
A series of blog posts explaining different components of TSDB: