This fixes the case where no compaction plans are run between block
creations. We were not compacting anything in these cases since, on
creation, the most recent head block always had a high timestamp of 0.
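A minimal sketch of why a zero high timestamp blocks planning; the `block` type and `compactable` helper below are hypothetical names, not the actual API:

```go
package main

import "fmt"

// block is a hypothetical view of a block's time range in milliseconds.
type block struct {
	minTime, maxTime int64
}

// compactable reports whether a block spans enough time to be picked
// up by a compaction plan. If maxTime stays 0 on creation, the span is
// never large enough and the head block is never compacted, which is
// the bug described above.
func compactable(b block, rangeMillis int64) bool {
	return b.maxTime-b.minTime >= rangeMillis
}

func main() {
	// maxTime tracking appended samples: eligible for compaction.
	fmt.Println(compactable(block{minTime: 0, maxTime: 2 * 3600 * 1000}, 3600*1000)) // true
	// Buggy state right after creation: maxTime stuck at 0.
	fmt.Println(compactable(block{minTime: 3600 * 1000, maxTime: 0}, 3600*1000)) // false
}
```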
This is an initial (and hacky) first pass on allowing
appends to multiple blocks simultaneously, to avoid
dropping samples right after cutting a new head block.
It's also required for cases like the Pushgateway (PGW), where a
single scrape may contain varying timestamps.
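A rough sketch of the routing idea, with hypothetical `head` and `multiAppender` types (the real appender is more involved):

```go
package main

import (
	"errors"
	"fmt"
)

// head is a hypothetical in-memory block covering [minTime, maxTime).
type head struct {
	minTime, maxTime int64
	samples          int
}

// multiAppender routes each sample to whichever open head block covers
// its timestamp, so samples arriving just before or after a new head
// block is cut are not dropped.
type multiAppender struct {
	heads []*head // ordered by time, most recent last
}

var errOutOfBounds = errors.New("timestamp not covered by any open head block")

func (a *multiAppender) add(t int64, v float64) error {
	for i := len(a.heads) - 1; i >= 0; i-- {
		h := a.heads[i]
		if t >= h.minTime && t < h.maxTime {
			h.samples++ // stand-in for the real append
			return nil
		}
	}
	return errOutOfBounds
}

func main() {
	a := &multiAppender{heads: []*head{
		{minTime: 0, maxTime: 100},
		{minTime: 100, maxTime: 200},
	}}
	fmt.Println(a.add(150, 1.0)) // lands in the new head
	fmt.Println(a.add(95, 1.0))  // lands in the old head instead of being dropped
}
```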
This exposes to clients a reference number for the series represented
by a label set.
Subsequent samples can be added directly via the reference rather than
by repeatedly passing in the full labels. This drastically speeds up
the append process.
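A sketch of what such an append interface could look like; the type names and signatures here are illustrative assumptions, not the exact API:

```go
package main

// Labels stands in for a sorted label set identifying a series.
type Labels map[string]string

// Appender sketches a reference-based append API: Add resolves the
// label set once and returns a reference, AddFast appends by that
// reference without touching the labels again.
type Appender interface {
	// Add looks up or creates the series for the label set, appends
	// the sample, and returns a reference for subsequent appends.
	Add(lset Labels, t int64, v float64) (ref uint64, err error)

	// AddFast appends a sample to the series behind ref. It returns an
	// error if the reference has been invalidated, in which case the
	// client must fall back to Add.
	AddFast(ref uint64, t int64, v float64) error

	// Commit persists all pending samples.
	Commit() error
}

func main() {}
```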
The appender chain uses different sections of the reference number for
assigning references to child appenders and for invalidating reference
numbers as necessary.
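Purely as an illustration of that partitioning (the concrete bit layout below is an assumption):

```go
package main

import "fmt"

// A hypothetical reference layout: the top 8 bits identify the child
// appender (e.g. the head block) that issued the reference, the low
// 56 bits are the series ID within that appender. A parent appender
// can then route AddFast calls by the top bits and reject references
// issued by blocks that no longer accept appends.
const childBits = 56

func packRef(child uint8, series uint64) uint64 {
	return uint64(child)<<childBits | series&(1<<childBits-1)
}

func unpackRef(ref uint64) (child uint8, series uint64) {
	return uint8(ref >> childBits), ref & (1<<childBits - 1)
}

func main() {
	ref := packRef(3, 42)
	c, s := unpackRef(ref)
	fmt.Println(c, s) // 3 42
}
```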
Clients can either hand out reference numbers themselves or maintain
their own optimized lookup, e.g. by directly associating unparsed
metric descriptor strings with reference numbers.
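For instance, a scrape loop could key a reference cache by the raw, unparsed metric line; `refCache` and the stand-in reference value below are hypothetical:

```go
package main

import "fmt"

// refCache maps the unparsed metric descriptor string, exactly as it
// appears in the scraped text, to the reference returned by Add. On
// the next scrape, parsing and label lookup are skipped entirely.
var refCache = map[string]uint64{}

func main() {
	line := `http_requests_total{code="200"}`
	if ref, ok := refCache[line]; ok {
		fmt.Println("AddFast with ref", ref)
	} else {
		ref := uint64(7) // stand-in for the ref returned by Add
		refCache[line] = ref
		fmt.Println("Add, cached ref", ref)
	}
}
```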
This adds naive compaction that tries to compact three
blocks of roughly equal size.
It decides based on the number of samples present in a block and has
no safety measures regarding the actual file size.
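A naive selection pass in that spirit; the `blockMeta` type and the 2x sample-count bound for "roughly equal" are assumptions for illustration:

```go
package main

import "fmt"

// blockMeta is a hypothetical summary of a persisted block.
type blockMeta struct {
	name    string
	samples int64
}

// plan returns the first window of three adjacent blocks whose sample
// counts are roughly equal: no block may hold more than twice the
// samples of the smallest block in the window. File sizes are never
// consulted, matching the caveat above.
func plan(blocks []blockMeta) []blockMeta {
	for i := 0; i+3 <= len(blocks); i++ {
		w := blocks[i : i+3]
		min, max := w[0].samples, w[0].samples
		for _, b := range w[1:] {
			if b.samples < min {
				min = b.samples
			}
			if b.samples > max {
				max = b.samples
			}
		}
		if max <= 2*min {
			return w
		}
	}
	return nil
}

func main() {
	fmt.Println(plan([]blockMeta{
		{"a", 1000}, {"b", 5000}, {"c", 5500}, {"d", 4800},
	})) // selects b, c, d
}
```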
This adds a proper duration-based lookback buffer for series iterators,
allowing them to advance sequentially while remaining able to calculate
time-aggregating functions such as `rate` backwards.
It uses an array-backed ring buffer to minimize heap allocations across
the potentially hundreds of thousands of series touched by a single
query.
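A minimal sketch of such an array-backed ring, assuming a fixed capacity and millisecond timestamps (the real buffer's sizing and API will differ):

```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// sampleRing keeps the samples of the last `delta` milliseconds in a
// preallocated array, overwriting the oldest entries in place so a
// sequential scan over a series allocates nothing per sample.
type sampleRing struct {
	delta int64
	buf   []sample
	first int // index of the oldest sample
	n     int // number of valid samples
}

func newSampleRing(delta int64, capacity int) *sampleRing {
	return &sampleRing{delta: delta, buf: make([]sample, capacity)}
}

func (r *sampleRing) add(t int64, v float64) {
	// Evict samples that fall out of the lookback window.
	for r.n > 0 && r.buf[r.first].t < t-r.delta {
		r.first = (r.first + 1) % len(r.buf)
		r.n--
	}
	if r.n == len(r.buf) { // full: overwrite the oldest
		r.first = (r.first + 1) % len(r.buf)
		r.n--
	}
	r.buf[(r.first+r.n)%len(r.buf)] = sample{t, v}
	r.n++
}

// samples returns the buffered window oldest-first, e.g. to feed a
// backwards-looking `rate` calculation.
func (r *sampleRing) samples() []sample {
	out := make([]sample, 0, r.n)
	for i := 0; i < r.n; i++ {
		out = append(out, r.buf[(r.first+i)%len(r.buf)])
	}
	return out
}

func main() {
	r := newSampleRing(5000, 4) // 5s lookback
	for _, t := range []int64{0, 2000, 4000, 6000, 8000} {
		r.add(t, float64(t))
	}
	fmt.Println(r.samples()) // only the samples within the last 5s
}
```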