This simplifies some of the iterators by loading chunks from the
ChunkReader earlier. Filtering of chunks and filtering of series are
split into separate iterators for easier testing.
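As a rough illustration of that split (the names below are hypothetical
and do not match the actual tsdb types), one iterator loads chunk data
eagerly and filters chunks against the queried range, while a second,
independent iterator filters out series that end up with no chunks:

    // Illustrative sketch; names do not match the actual tsdb identifiers.
    package iterssketch

    type chunkMeta struct {
        Ref              uint64
        MinTime, MaxTime int64
        Data             []byte
    }

    // seriesSet is a minimal iterator over series and their chunk metadata.
    type seriesSet interface {
        Next() bool
        At() (labels string, chks []chunkMeta)
        Err() error
    }

    // populatedSeriesSet loads chunk data as soon as a series is yielded and
    // drops chunks outside [mint, maxt]; chunk-level filtering can be tested
    // against this type alone with a stubbed read function.
    type populatedSeriesSet struct {
        set        seriesSet
        read       func(ref uint64) ([]byte, error)
        mint, maxt int64

        lbls string
        chks []chunkMeta
        err  error
    }

    func (s *populatedSeriesSet) Next() bool {
        if !s.set.Next() {
            s.err = s.set.Err()
            return false
        }
        lbls, chks := s.set.At()
        s.lbls, s.chks = lbls, nil
        for _, c := range chks {
            if c.MaxTime < s.mint || c.MinTime > s.maxt {
                continue
            }
            if c.Data, s.err = s.read(c.Ref); s.err != nil {
                return false
            }
            s.chks = append(s.chks, c)
        }
        return true
    }

    func (s *populatedSeriesSet) At() (string, []chunkMeta) { return s.lbls, s.chks }
    func (s *populatedSeriesSet) Err() error                { return s.err }

    // nonEmptySeriesSet filters at the series level only: it skips series
    // whose chunk list came out empty. Being separate, it is trivial to test.
    type nonEmptySeriesSet struct{ seriesSet }

    func (s nonEmptySeriesSet) Next() bool {
        for s.seriesSet.Next() {
            if _, chks := s.seriesSet.At(); len(chks) > 0 {
                return true
            }
        }
        return false
    }
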
Introduce a separate mutex for the head blocks to avoid a race where
a post-compaction reload may run in the window where an appender
switches the DB's base mutex to create a new head block.
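A minimal sketch of the resulting two-mutex layout, with illustrative
field and method names rather than the actual tsdb code:

    // Illustrative sketch only.
    package headsketch

    import "sync"

    type headBlock struct{ /* ... */ }

    type DB struct {
        mtx     sync.RWMutex // guards the overall block layout (reload, compaction)
        headmtx sync.Mutex   // guards only the list of head blocks
        heads   []*headBlock
    }

    // cutHeadBlock appends a fresh head block. Holding headmtx means a
    // concurrent post-compaction reload cannot observe or replace the head
    // list in the middle of the switch.
    func (db *DB) cutHeadBlock() *headBlock {
        db.headmtx.Lock()
        defer db.headmtx.Unlock()

        h := &headBlock{}
        db.heads = append(db.heads, h)
        return h
    }

    // reload rebuilds the block layout after a compaction. It takes the base
    // mutex for the layout change and headmtx while it touches the head list.
    func (db *DB) reload() {
        db.mtx.Lock()
        defer db.mtx.Unlock()

        db.headmtx.Lock()
        // ... re-read persisted blocks and reattach the current head blocks ...
        db.headmtx.Unlock()
    }
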
Expose buildQueryUrl, refactor dispatch to use it
buildQueryUrl will allow users to execute queries over the range of an
existing graph. This will be helpful for selecting the data series they
wish to annotate the graph with, for example.
This addresses an issue where the compaction triggered on cutting
a new block doesn't find anything as the writers are still active on the
block that should be ready for compaction.
We need to be able to modify the HTTP POST in Weave Cortex to add
multitenancy information to a notification. Since we only really need a
special header in the end, the other option would be to just allow
passing in headers to the notifier. But swapping out the whole Doer is
more general and allows others to swap out the network-talky bits of the
notifier for their own use. Doing this via contexts here wouldn't work
well, due to the decoupled flow of data in the notifier.
There was no existing interface containing the ctxhttp.Post() or
ctxhttp.Do() methods, so I settled on just using Do() as a swappable
function directly (and with a more minimal signature than Post).
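A hedged sketch of that shape; apart from ctxhttp.Do itself, the type
and option names below are illustrative:

    // Illustrative sketch; only ctxhttp.Do is taken as-is.
    package notifiersketch

    import (
        "bytes"
        "context"
        "net/http"

        "golang.org/x/net/context/ctxhttp"
    )

    // Options carries the swappable network call. Replacing Do swaps the
    // whole HTTP round trip, not just individual headers.
    type Options struct {
        Do func(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error)
    }

    type Notifier struct {
        opts   *Options
        client *http.Client
    }

    func New(o *Options) *Notifier {
        if o.Do == nil {
            o.Do = ctxhttp.Do // default: a plain context-aware request
        }
        return &Notifier{opts: o, client: &http.Client{}}
    }

    // send POSTs a payload through the swappable Do function.
    func (n *Notifier) send(ctx context.Context, url string, body []byte) error {
        req, err := http.NewRequest("POST", url, bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := n.opts.Do(ctx, n.client, req)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }

    // A multitenant setup (e.g. Weave Cortex) can wrap the default Do to add
    // its tenant information; the header name here is just an example.
    func withTenantHeader(tenant string) func(context.Context, *http.Client, *http.Request) (*http.Response, error) {
        return func(ctx context.Context, c *http.Client, req *http.Request) (*http.Response, error) {
            req.Header.Set("X-Org-ID", tenant)
            return ctxhttp.Do(ctx, c, req)
        }
    }
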
This adds write path support for segmented chunk data files.
Files of 512MB are pre-allocated and written to. If the file size
is exceeded, the next file is started. On completion, files
are truncated to their final size.
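A rough sketch of that write path, with illustrative names and a plain
Truncate standing in for real preallocation:

    // Illustrative sketch; a real implementation would use fallocate-style
    // preallocation where available.
    package chunksketch

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    const segmentSize = 512 * 1024 * 1024 // target size of one chunk data file

    type writer struct {
        dirName string
        seq     int
        f       *os.File
        n       int64 // bytes written into the current segment
    }

    // cut finalizes the current segment and starts the next, pre-allocated one.
    func (w *writer) cut() error {
        if err := w.finalize(); err != nil {
            return err
        }
        w.seq++
        f, err := os.OpenFile(filepath.Join(w.dirName, fmt.Sprintf("%06d", w.seq)), os.O_WRONLY|os.O_CREATE, 0666)
        if err != nil {
            return err
        }
        // Grow the file to its full size up front (stand-in for preallocation).
        if err := f.Truncate(segmentSize); err != nil {
            return err
        }
        w.f, w.n = f, 0
        return nil
    }

    // write appends b, cutting a new segment when the current one would overflow.
    func (w *writer) write(b []byte) error {
        if w.f == nil || w.n+int64(len(b)) > segmentSize {
            if err := w.cut(); err != nil {
                return err
            }
        }
        m, err := w.f.Write(b)
        w.n += int64(m)
        return err
    }

    // finalize truncates the current segment to the bytes actually written
    // and closes it, so completed files end up at their final size.
    func (w *writer) finalize() error {
        if w.f == nil {
            return nil
        }
        if err := w.f.Truncate(w.n); err != nil {
            return err
        }
        err := w.f.Close()
        w.f = nil
        return err
    }
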
The automatic GZIP handling of net/http does not preserve
buffers across requests and thus generates a lot of garbage.
We handle GZIP ourselves to circumvent this.
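A minimal sketch of the reuse idea using a sync.Pool of gzip writers;
the handler and pool names are illustrative:

    // Illustrative sketch of pooling gzip writers across requests.
    package gzipsketch

    import (
        "compress/gzip"
        "io"
        "net/http"
        "strings"
        "sync"
    )

    // Reuse gzip.Writers (and their internal buffers) across requests instead
    // of allocating a new one per response.
    var gzipPool = sync.Pool{
        New: func() interface{} { return gzip.NewWriter(nil) },
    }

    func gzipHandler(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
                next.ServeHTTP(w, r)
                return
            }
            gz := gzipPool.Get().(*gzip.Writer)
            gz.Reset(w) // point the pooled writer at this response
            defer func() {
                gz.Close()
                gzipPool.Put(gz)
            }()

            w.Header().Set("Content-Encoding", "gzip")
            next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, w: gz}, r)
        })
    }

    // gzipResponseWriter routes the body through the pooled gzip.Writer.
    type gzipResponseWriter struct {
        http.ResponseWriter
        w io.Writer
    }

    func (g gzipResponseWriter) Write(b []byte) (int, error) { return g.w.Write(b) }
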