Let older head blocks be compacted once the newest one has samples covering
50% of its total range. This allows the memory of the compacted blocks
to be released and garbage collected before a new head block gets
created. Thereby the number of head blocks is 1 or 2 instead of 2 or 3,
and memory spikes are reduced.
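A minimal sketch of the trigger condition; headBlock and its fields here are illustrative stand-ins, not the real types:

```go
package main

import "fmt"

// headBlock is an illustrative stand-in for the real head block.
type headBlock struct {
	minTime, maxTime int64 // sample range observed so far
	rangeMillis      int64 // total time range the block covers
}

// olderCompactable reports whether the older head blocks may be
// compacted: the newest block must already hold samples spanning
// at least 50% of its total range.
func olderCompactable(newest headBlock) bool {
	return newest.maxTime-newest.minTime >= newest.rangeMillis/2
}

func main() {
	h := headBlock{minTime: 0, maxTime: 1800000, rangeMillis: 3600000}
	fmt.Println(olderCompactable(h)) // true: samples span half the range
}
```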
Race:
Suppose we have 100 existing series inside a HeadBlock.
Now we open two appenders in two goroutines, A1 and A2, and append 30 and
60 new series respectively, with some series in common.
Both try to commit at the same time and the following happens in the given order:
A2 executes createSeries()
A1 executes createSeries() (with its common series referencing the ids from A2)
A1 persists its new labels and samples
A2 persists its new labels and samples
Now when reading it back, we encounter A1's samples first; they reference
ids created by A2 whose series records have not been read yet, so replay fails.
Ref: prometheus/prometheus#2795
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
* Expose Stone as it is used in an exported method.
* Move from tombstoneReader to []Stone for the same reason as above.
* Make WAL reading a little cleaner.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
* Make sure no reads happen on the block when delete is in progress.
* Fix bugs in compaction.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Previously, the sorted ref list had to be recalculated every time
Tombstones() was called. This avoids that.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
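A sketch of the caching idea described above, with illustrative names (memTombstones and sortedRefs are assumptions, not the real types):

```go
package main

import (
	"fmt"
	"sort"
)

// memTombstones caches the sorted ref list so that Tombstones()
// does not recompute it on every call.
type memTombstones struct {
	intvs      map[uint64][]int64 // ref -> deleted interval bounds
	sortedRefs []uint64           // kept sorted incrementally
}

// add records a tombstone and keeps the cached ref list sorted,
// paying the cost once at write time instead of on every read.
func (t *memTombstones) add(ref uint64, mint, maxt int64) {
	if _, ok := t.intvs[ref]; !ok {
		i := sort.Search(len(t.sortedRefs), func(i int) bool {
			return t.sortedRefs[i] >= ref
		})
		t.sortedRefs = append(t.sortedRefs, 0)
		copy(t.sortedRefs[i+1:], t.sortedRefs[i:])
		t.sortedRefs[i] = ref
	}
	t.intvs[ref] = append(t.intvs[ref], mint, maxt)
}

func main() {
	t := &memTombstones{intvs: map[uint64][]int64{}}
	t.add(9, 0, 10)
	t.add(3, 5, 15)
	fmt.Println(t.sortedRefs) // [3 9], already sorted for readers
}
```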
* Test for a previous implementation of Intersect
Before, we advanced the postings list every time we created a new
chained `intersectPostings`, which caused some postings to be
skipped. This test fails on the older version.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
* Advance on Seek only when valid.
Issue:
Previously, in mergedPostings and others, we advanced on every `Seek`,
which causes issues with `Intersect`.
Take the case where we have a mergedPostings m merging a: {10, 20, 30} and
b: {15, 25, 35}. Every time we `Seek` on m, we call a.Seek and b.Seek.
Now if we Intersect m with {21, 22, 23, 30}, we would call Seek(21), Seek(22)
and Seek(23), each advancing a and b unconditionally and moving them beyond 30,
so the common value 30 is missed.
Fix:
Now we advance only when the value being sought is greater than the current
value, as the definition specifies.
Also, posting 0 is no longer a valid posting and is used to signal a
finished or uninitialized PostingsList.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
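A sketch of the guarded Seek described above, over a simplified Postings interface; listPostings here is illustrative:

```go
package main

import "fmt"

// Postings is a simplified iterator over sorted series references.
type Postings interface {
	Next() bool
	Seek(v uint64) bool
	At() uint64
}

type listPostings struct {
	list []uint64
	cur  uint64 // zero value doubles as the sentinel: posting 0 is never valid
}

func (p *listPostings) At() uint64 { return p.cur }

func (p *listPostings) Next() bool {
	if len(p.list) == 0 {
		return false
	}
	p.cur = p.list[0]
	p.list = p.list[1:]
	return true
}

// Seek advances only if the current value is smaller than v.
// Repeated Seeks to smaller-or-equal values must not move the
// iterator, otherwise Intersect skips matching postings.
func (p *listPostings) Seek(v uint64) bool {
	if p.cur >= v {
		return true
	}
	for p.Next() {
		if p.cur >= v {
			return true
		}
	}
	return false
}

func main() {
	p := &listPostings{list: []uint64{10, 20, 30}}
	p.Next()
	p.Seek(15)
	p.Seek(12)          // must not advance past 20
	fmt.Println(p.At()) // 20
}
```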
* Add test for Merge+Intersect edge case.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
* Add comments to trivial tests.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
NaN != NaN, so the previous code would incorrectly report
it as changed.
There are also plans to take advantage of the NaN payload,
so we look at the entire bit pattern of the value.
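A sketch of comparing full bit patterns, so a NaN compares equal to itself and differing payloads are noticed:

```go
package main

import (
	"fmt"
	"math"
)

// sameValue compares samples by their full bit pattern, so a NaN
// compares equal to itself and differing NaN payloads are noticed.
func sameValue(a, b float64) bool {
	return math.Float64bits(a) == math.Float64bits(b)
}

func main() {
	nan := math.NaN()
	fmt.Println(nan == nan)          // false: misreported as a change
	fmt.Println(sameValue(nan, nan)) // true: identical bit pattern
}
```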
When calling AddFast, we check the details of the head chunk of the
referenced memorySeries. But it could happen that there are no chunks in
the series at all.
Currently, we defer chunk creation until we actually append samples,
but we can be sure that samples will follow whenever a series is
created. We consume no extra memory by cutting a chunk when we create
the series.
Ref: #28 comment 2
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
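A sketch of cutting the first chunk at series creation, with simplified stand-in types (memSeries and chunk here are illustrative):

```go
package main

import "fmt"

// chunk stands in for a real encoded chunk.
type chunk struct{}

type memSeries struct {
	ref    uint64
	chunks []*chunk
}

// newMemSeries cuts the first chunk right away: a series is only
// created because a sample is about to be appended, so the chunk
// never sits empty and AddFast can rely on a head chunk existing.
func newMemSeries(ref uint64) *memSeries {
	return &memSeries{ref: ref, chunks: []*chunk{{}}}
}

func (s *memSeries) head() *chunk { return s.chunks[len(s.chunks)-1] }

func main() {
	s := newMemSeries(1)
	fmt.Println(s.head() != nil) // true: no nil check needed in AddFast
}
```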
This adds a workaround to avoid deadlocks for inconsistent write lock
order across headBlocks.
Things keep working if transactions only append data for the same
timestamp, which is generally the case for Prometheus.
Full behavior should be restored in a subsequent change.
Compilation and tests were broken because head.go requires the sample
type, which was moved to another package along with BufferedSeriesIterator.
Duplicating it seemed better than exposing sample from tsdbutil.
This fixes different race conditions encountered when running Prometheus.
It reduces the overall performance in the synthetic benchmark a fair bit
but shows no indication of notably impacting a real-world setup.
This adds the Queryable interface to the Block interface. Head and
persisted blocks now implement their own Querier() method and thus
isolate customization (e.g. remapPostings) more cleanly.
This adds more lower-level interfaces that are composed into the
different Block interfaces.
The DB only uses interfaces instead of explicit persistedBlock and
headBlock. The headBlock generation property is dropped as the use-case
can be implemented using block sequence numbers.
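An illustrative sketch of the composition; the method sets below are assumptions based on the description, not the actual interfaces:

```go
package main

// Querier is elided; each block kind provides its own implementation
// (the head block's can keep customizations like remapPostings local).
type Querier interface{}

// Queryable is part of the Block interface so the DB can ask any
// block for a Querier without knowing its concrete type.
type Queryable interface {
	Querier(mint, maxt int64) Querier
}

// Block is all the DB sees; headBlock and persistedBlock both
// implement it, and sequence numbers replace the old head
// generation property.
type Block interface {
	Queryable
	Dir() string
	Close() error
}

func main() {}
```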
The position mapper was intended to pre-compute the "expensive" ordering
of label sets. It was expensive to update and caused a lot of trouble.
Skipping this optimization entirely revealed that it was pointless
and even harmful from the e2e perspective.
This adds handling for various corruption scenarios of the WAL.
If corruption is encountered, we truncate the WAL after the last valid
entry transparently and continue appending after the offset.
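A sketch of the recovery approach; readEntry is a hypothetical decoder standing in for the real WAL entry reader:

```go
package main

import (
	"errors"
	"io"
	"os"
)

// readEntry is a hypothetical decoder for one WAL entry; it returns
// how many bytes it consumed, or an error on corruption.
func readEntry(r io.Reader) (int64, error) {
	// elided: read length and payload, verify checksum
	return 0, io.EOF
}

// repairWAL truncates the segment after the last valid entry so
// appending can transparently continue at that offset.
func repairWAL(path string) (int64, error) {
	f, err := os.OpenFile(path, os.O_RDWR, 0666)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var offset int64
	for {
		n, err := readEntry(f)
		if errors.Is(err, io.EOF) {
			break // clean end, nothing to repair
		}
		if err != nil {
			// Corruption: drop everything after the last valid entry.
			return offset, f.Truncate(offset)
		}
		offset += n
	}
	return offset, nil
}

func main() { _, _ = repairWAL("wal-segment") }
```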
This has been a common source of hard-to-debug issues. It's a premature
and unbenchmarked optimization, and semantically we want ChunkMetas to
be references in all changed cases.
This fixes a bug where the last WAL file was closed after consuming it
instead of being left open for further writes.
Reloading of blocks on startup now includes loading head blocks.
Introduce a separate mutex for the head blocks to avoid a race where
a post-compaction reload may run between switching the DB's base mutex
to create a new head block in an appender.
This adds write path support for segmented chunk data files.
Files of 512MB are pre-allocated and written to. If the file size
is exceeded, the next file is started. On completion, files
are truncated to their final size.
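A sketch of the write path with segment cutting; file names and the use of Truncate as a stand-in for real pre-allocation are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

const segmentSize = 512 * 1024 * 1024 // pre-allocated per file

type chunkWriter struct {
	seq int
	f   *os.File
	n   int64 // bytes written to the current segment
}

// cut finalizes the current segment and starts the next one.
func (w *chunkWriter) cut() error {
	if w.f != nil {
		// Truncate the pre-allocated file to its final size.
		if err := w.f.Truncate(w.n); err != nil {
			return err
		}
		if err := w.f.Close(); err != nil {
			return err
		}
	}
	w.seq++
	f, err := os.Create(fmt.Sprintf("chunks-%06d", w.seq))
	if err != nil {
		return err
	}
	// Stand-in for pre-allocation (a real implementation might fallocate).
	if err := f.Truncate(segmentSize); err != nil {
		return err
	}
	w.f, w.n = f, 0
	return nil
}

// write starts a new segment whenever the next chunk would
// exceed the current file's pre-allocated size.
func (w *chunkWriter) write(b []byte) error {
	if w.f == nil || w.n+int64(len(b)) > segmentSize {
		if err := w.cut(); err != nil {
			return err
		}
	}
	n, err := w.f.Write(b)
	w.n += int64(n)
	return err
}

func main() {
	w := &chunkWriter{}
	_ = w.write([]byte("chunk-data"))
}
```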
This is an initial (and hacky) first pass on allowing
appending to multiple blocks simultaneously to avoid
dropping samples right after cutting a new head block.
It's also required for cases like the PGW, where a scrape may
contain varying timestamps.
The former approach created unordered postings lists, either because
map iteration over new series is unsorted (fixable) or because
concurrent writers created new series interleaved.
We switch back to generating ephemeral references for a single batch.
Newly created series have to be re-set upon the next insert.
This exposes a reference number of a series represented by a label set
to clients.
Subsequent samples can be directly added via the reference rather than
repeatedly passing in the full labels. This drastically speeds up the
append process.
The appender chain uses different sections of the reference number for
assignment to child appenders and invalidating reference numbers as
necessary.
Clients can either pass around reference numbers themselves or have their
own optimized lookup, e.g. by directly associating unparsed metric
descriptor strings with reference numbers.
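A sketch of the reference-based flow from the client side; the Appender shape and the ErrNotFound handling are assumptions based on the description:

```go
package main

import "errors"

var ErrNotFound = errors.New("unknown series reference")

// Appender hands out a reference number on the first Add so that
// subsequent samples can skip the label-set lookup entirely.
type Appender interface {
	Add(lset map[string]string, t int64, v float64) (uint64, error)
	AddFast(ref uint64, t int64, v float64) error
	Commit() error
}

// appendSample shows the client-side pattern: try a cached ref first
// and fall back to the full label set if the ref was invalidated.
func appendSample(app Appender, cache map[string]uint64, key string,
	lset map[string]string, t int64, v float64) error {

	if ref, ok := cache[key]; ok {
		err := app.AddFast(ref, t, v)
		if !errors.Is(err, ErrNotFound) {
			return err // nil on success, or a real error
		}
	}
	ref, err := app.Add(lset, t, v)
	if err != nil {
		return err
	}
	cache[key] = ref // e.g. keyed by the unparsed metric descriptor
	return nil
}

func main() {}
```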
This adds a memory series holding several chunks, replacing the
single head chunk per series used so far.
This is necessary for uniform maximum chunk sizes in cases
where some series have higher frequency samples than others.
This adds a 4-sample buffer to every head chunk. The XOR
compression scheme may edit bytes in place. The minimum size
of a sample is 2 bits. So keeping the last 4 samples in an in-memory
buffer makes it safe to query the preceding ones while samples
are added.
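A sketch of the buffer idea with illustrative names; the real chunk also encodes each sample into its byte stream, which is elided here:

```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// memChunk keeps the last 4 appended samples in a ring buffer.
// The XOR encoding may rewrite trailing bytes in place; with a
// minimum sample size of 2 bits, 4 buffered samples guarantee the
// preceding encoded bytes are stable and safe to read concurrently.
type memChunk struct {
	buf [4]sample
	n   int // total samples appended
}

func (c *memChunk) append(t int64, v float64) {
	c.buf[c.n%4] = sample{t, v}
	c.n++
	// elided: also encode the sample into the chunk's byte stream
}

// buffered returns the samples currently held only in the buffer.
func (c *memChunk) buffered() []sample {
	k := c.n
	if k > 4 {
		k = 4
	}
	out := make([]sample, 0, k)
	for i := c.n - k; i < c.n; i++ {
		out = append(out, c.buf[i%4])
	}
	return out
}

func main() {
	c := &memChunk{}
	for i := int64(0); i < 6; i++ {
		c.append(i, float64(i))
	}
	fmt.Println(c.buffered()) // last 4 samples: t=2..5
}
```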
This adds a position mapper that takes series from a head block
in the order they were appended and creates a mapping representing
them in order of their label sets.
Write-repair of the postings list would cause very expensive writing.
Hence, we keep them as they are and only apply the position mapping
at the very end, after a postings list has been sufficiently reduced
through intersections etc.
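A sketch of such a mapping; the types and the use of plain ints for postings are illustrative:

```go
package main

import (
	"fmt"
	"sort"
)

// positionMapper maps a series' position in append order to its
// rank in label-set order, computed once instead of keeping the
// postings lists themselves sorted by labels (write-repair of
// postings would be very expensive).
type positionMapper struct {
	fwd []int // fwd[appendPos] = rank in label-set order
}

func newPositionMapper(labelKeys []string) *positionMapper {
	idx := make([]int, len(labelKeys))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool {
		return labelKeys[idx[a]] < labelKeys[idx[b]]
	})

	m := &positionMapper{fwd: make([]int, len(labelKeys))}
	for rank, pos := range idx {
		m.fwd[pos] = rank
	}
	return m
}

// sortPostings reorders an (already intersected, hence small)
// postings result into label-set order at the very end.
func (m *positionMapper) sortPostings(ps []int) {
	sort.Slice(ps, func(a, b int) bool { return m.fwd[ps[a]] < m.fwd[ps[b]] })
}

func main() {
	// Series appended in this order, with these label-set sort keys:
	m := newPositionMapper([]string{"c", "a", "b"})
	ps := []int{0, 1, 2}
	m.sortPostings(ps)
	fmt.Println(ps) // [1 2 0]: label-set order a, b, c
}
```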
This initializes the chunkDesc's last timestamp to the minimum
value so initial samples with a timestamp of 0 (e.g. in tests)
are not accidentally dropped.
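A minimal sketch of the fix; field and function names are illustrative:

```go
package main

import (
	"fmt"
	"math"
)

type chunkDesc struct {
	lastTimestamp int64
}

// newChunkDesc initializes the last timestamp to the minimum value,
// so a first sample at t=0 is not mistaken for a duplicate or
// out-of-order sample and dropped.
func newChunkDesc() *chunkDesc {
	return &chunkDesc{lastTimestamp: math.MinInt64}
}

func (cd *chunkDesc) append(t int64, v float64) bool {
	if t <= cd.lastTimestamp {
		return false // out of order or duplicate
	}
	cd.lastTimestamp = t
	// elided: encode the sample
	return true
}

func main() {
	cd := newChunkDesc()
	fmt.Println(cd.append(0, 1.0)) // true: t=0 is accepted
}
```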
This changes the IndexReader API to expose plain labels
and chunk meta information instead of a Series interface.
Dropping of irrelevant chunks is moved into the querier.
A LabelIndices method is added to query for existing label
value indices.
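An illustrative sketch of the reshaped reader; the exact method signatures are assumptions based on the description above:

```go
package main

// Labels and ChunkMeta are simplified stand-ins.
type Labels map[string]string

type ChunkMeta struct {
	Ref              uint64
	MinTime, MaxTime int64
}

// IndexReader exposes plain labels and chunk metadata instead of a
// Series interface; dropping irrelevant chunks is left to the querier.
type IndexReader interface {
	// Series fills in the label set and chunk metas for a series ref.
	Series(ref uint64) (Labels, []ChunkMeta, error)
	// LabelIndices lists the existing label value indices.
	LabelIndices() ([][]string, error)
}

func main() {}
```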
This adds interval metadata to indexed chunks. The queried interval
is used to filter chunks when queried from the index to save
unnecessary accesses of the chunks file.
This is especially relevant for series that come and go often, and
for larger files.
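A sketch of the interval filter applied by the querier; the types are simplified:

```go
package main

import "fmt"

type ChunkMeta struct {
	Ref              uint64
	MinTime, MaxTime int64
}

// filterChunks keeps only chunks whose interval overlaps the queried
// range, so non-overlapping chunks are never read from the chunks
// file. This matters most for series that come and go and for large
// files.
func filterChunks(chks []ChunkMeta, mint, maxt int64) []ChunkMeta {
	out := chks[:0]
	for _, c := range chks {
		if c.MaxTime >= mint && c.MinTime <= maxt {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	chks := []ChunkMeta{
		{Ref: 1, MinTime: 0, MaxTime: 99},
		{Ref: 2, MinTime: 100, MaxTime: 199},
		{Ref: 3, MinTime: 200, MaxTime: 299},
	}
	fmt.Println(filterChunks(chks, 150, 250)) // chunks 2 and 3 only
}
```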