* storage/remote: adapt tests for Travis CI
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Check filesystems on Travis environment
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Run remote/storage tests on CircleCI for troubleshooting
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Try using tmpfs partition
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Revert "Try using tmpfs partition"
This reverts commit 85a30deb72.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Don't store labels in writeToMock
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Fix data race
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Bump retries to 100, meaning that the total timeout is 10s (100 retries × 100 ms)
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* clean up .travis.yml
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* code fixup
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Remove unneeded empty line
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
- Use the queue name in WAL watcher logging.
- Don't return from watch if the reader error was EOF.
- Fix sample timestamp check logic regarding what samples we send.
- Refactor so we don't need readToEnd/readSeriesRecords
- Fix wal_watcher tests since readToEnd no longer exists
Signed-off-by: Callum Styan <callumstyan@gmail.com>
- Remove data race in the exported highest scrape timestamp.
- Backoff on enqueue should be per-sample - reset the result for each sample.
- Remove diffKeys, unused ctx and cancelfunc in WALWatcher, 'name' from writeTo interface, and pass it to constructor.
- Reorder functions in WALWatcher depth-first according to call graph.
- Fix vendor/modules.txt.
- Split out the various timer periods into consts at the top of the file.
- Move w.currentSegmentMetric.Set close to where we set the currentSegment.
- Combine r.Next() and isClosed(w.quit) into a single loop.
- Unnest some ifs in WALWatcher.watch, propagate errors in decodeRecord, and add some newlines to make it easier to read.
- Reorganise checkpoint handling to reduce nesting and make it easier to follow.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
This change switches the remote_write API to use the TSDB WAL. This should reduce memory usage and prevent sample loss when the remote endpoint is down.
We use the new LiveReader from TSDB to tail WAL segments. Logic for finding the tracking segment is included in this PR. The WAL is tailed once for each remote_write endpoint specified. Reading from the segment is based on a ticker rather than relying on fsnotify write events, which were found to be complicated and unreliable in early prototypes.
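As an illustration of the ticker-driven tailing described above, here is a minimal sketch assuming a hypothetical reader with a Next()/Record()/Err() shape loosely modelled on TSDB's LiveReader; the names, poll interval, and fake reader are illustrative, not the actual WALWatcher code.

```go
package main

import (
	"fmt"
	"time"
)

// liveReader is a stand-in for the record-reader interface; the real
// LiveReader in TSDB has a richer API.
type liveReader interface {
	Next() bool     // true if another record is available
	Record() []byte // the current record
	Err() error     // terminal error, if any
}

// fakeReader hands out a fixed set of records so the sketch is self-contained.
type fakeReader struct {
	records [][]byte
	cur     []byte
}

func (f *fakeReader) Next() bool {
	if len(f.records) == 0 {
		return false
	}
	f.cur, f.records = f.records[0], f.records[1:]
	return true
}
func (f *fakeReader) Record() []byte { return f.cur }
func (f *fakeReader) Err() error     { return nil }

// tailSegment polls the reader on a ticker instead of relying on fsnotify
// write events.
func tailSegment(r liveReader, quit <-chan struct{}) error {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-quit:
			return nil
		case <-ticker.C:
			// Drain everything appended to the segment since the last tick.
			for r.Next() {
				fmt.Printf("read record of %d bytes\n", len(r.Record()))
			}
			if err := r.Err(); err != nil {
				return err
			}
		}
	}
}

func main() {
	quit := make(chan struct{})
	go func() {
		time.Sleep(50 * time.Millisecond)
		close(quit)
	}()
	_ = tailSegment(&fakeReader{records: [][]byte{[]byte("series"), []byte("samples")}}, quit)
}
```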
Enqueuing a sample for sending via remote_write can now block, to provide back pressure. Queues are still required to achieve parallelism and batching. We have updated the queue config with new defaults for queue capacity and pending samples; much smaller values are now possible. The remote_write resharding code has been updated to prevent deadlocks, and extra tests have been added for these cases.
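A minimal sketch of the blocking enqueue that provides back pressure, assuming a hypothetical shard backed by a bounded channel; the capacity and types here are illustrative and far smaller than real queue_config values.

```go
package main

import (
	"fmt"
	"time"
)

type sample struct {
	ts    int64
	value float64
}

type shard struct {
	queue chan sample // bounded: a full queue makes enqueue block
}

// enqueue blocks until the consumer drains the queue, pushing back on the
// caller (the WAL tailer) instead of dropping samples.
func (s *shard) enqueue(smp sample) {
	s.queue <- smp
}

func main() {
	s := &shard{queue: make(chan sample, 2)}

	// Slow consumer standing in for batching and sending to the remote endpoint.
	go func() {
		for smp := range s.queue {
			time.Sleep(20 * time.Millisecond)
			fmt.Println("sent sample with timestamp", smp.ts)
		}
	}()

	for i := 0; i < 5; i++ {
		s.enqueue(sample{ts: int64(i), value: float64(i)}) // blocks once the queue fills
	}
	time.Sleep(200 * time.Millisecond)
}
```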
As part of this change, we attempt to guarantee that samples are not lost; however, this initial version doesn't guarantee this across Prometheus restarts or non-retryable errors from the remote end (e.g. 400s).
This change also includes the following optimisations (a sketch of the first one follows the lists below):
- only marshal the proto request once, not once per retry
- maintain a single copy of the labels for given series to reduce GC pressure
Other minor tweaks:
- only reshard if we've also successfully sent recently
- add pending samples, latest sent timestamp, WAL events processed metrics
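The sketch below illustrates the "marshal once, retry many" pattern from the first optimisation above; the writeRequest type, sendWithRetries helper, and retry policy are hypothetical stand-ins, not the actual remote_write client code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type writeRequest struct {
	payload string
}

// marshal stands in for the proto + snappy encoding step, which is relatively
// expensive and therefore worth doing exactly once per batch.
func marshal(req writeRequest) []byte {
	return []byte(req.payload)
}

func sendWithRetries(send func([]byte) error, req writeRequest, retries int, backoff time.Duration) error {
	buf := marshal(req) // encoded once, reused on every attempt
	var err error
	for i := 0; i < retries; i++ {
		if err = send(buf); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return err
}

func main() {
	attempts := 0
	flaky := func(b []byte) error {
		attempts++
		if attempts < 3 {
			return errors.New("temporary failure")
		}
		fmt.Printf("delivered %d bytes after %d attempts\n", len(b), attempts)
		return nil
	}
	_ = sendWithRetries(flaky, writeRequest{payload: "samples"}, 5, 10*time.Millisecond)
}
```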
Co-authored-by: Chris Marchbanks <csmarchbanks.com> (initial prototype)
Co-authored-by: Tom Wilkie <tom.wilkie@gmail.com> (sharding changes)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* *: use latest release of staticcheck
It also fixes a couple of things in the code flagged by the additional
checks.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Use official release of staticcheck
Also run 'go list' before staticcheck to avoid failures when downloading packages.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
Currently, Prometheus requests show up with a UA of Go-http-client/1.1,
which isn't super helpful. Though the X-Prometheus-Remote-* headers
exist, they need to be explicitly configured when logging the request
in order to deduce that a request originates from Prometheus. By
setting the header we remove this ambiguity and make default server
logs just a bit more useful.
This also updates a few other places to consistently capitalize the 'P'
in the user agent, as well as ensure we set a UA to begin with.
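As a sketch, one common way to set an explicit User-Agent on every outgoing request is a wrapping http.RoundTripper; the version string and type names below are illustrative assumptions, not necessarily how Prometheus wires this up.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// userAgentRoundTripper stamps a fixed User-Agent on each request before
// delegating to the underlying transport.
type userAgentRoundTripper struct {
	agent string
	next  http.RoundTripper
}

func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	r2 := req.Clone(req.Context()) // don't mutate the caller's request
	r2.Header.Set("User-Agent", rt.agent)
	return rt.next.RoundTrip(r2)
}

func main() {
	// Test server that echoes the User-Agent it received.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.UserAgent())
	}))
	defer srv.Close()

	client := &http.Client{Transport: &userAgentRoundTripper{
		agent: "Prometheus/2.x", // hypothetical version string
		next:  http.DefaultTransport,
	}}

	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("server saw User-Agent:", string(body)) // no more Go-http-client/1.1
}
```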
Signed-off-by: Daniele Sluijters <daenney@users.noreply.github.com>
* *: remove use of golang.org/x/net/context
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* scrape: fix TestTargetScrapeScrapeCancel
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Limit the number of samples remote read can return.
- Return 413 entity too large.
- Limit can be set by a flag. Allow 0 to mean no limit.
- Include limit in error message.
- Set default limit to 50M (* 16 bytes = 800MB).
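A minimal sketch of the limit check, assuming hypothetical names (sampleLimit, countSamples) and a much-simplified handler; only the 413-with-limit-in-the-message behaviour mirrors the points above.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

const sampleLimit = 50_000_000 // 0 would mean "no limit"

type series struct{ samples []float64 }

func countSamples(matched []series) int {
	n := 0
	for _, s := range matched {
		n += len(s.samples)
	}
	return n
}

func readHandler(matched []series) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if n := countSamples(matched); sampleLimit > 0 && n > sampleLimit {
			// Include the limit in the error so clients can tell why the read
			// was rejected.
			http.Error(w, fmt.Sprintf("exceeded sample limit (%d > %d)", n, sampleLimit),
				http.StatusRequestEntityTooLarge) // 413
			return
		}
		fmt.Fprintln(w, "ok")
	}
}

func main() {
	// Exercise the handler with an in-memory request/response pair.
	rec := httptest.NewRecorder()
	readHandler(nil)(rec, httptest.NewRequest(http.MethodPost, "/api/v1/read", nil))
	fmt.Println("status:", rec.Code)
}
```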
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
There are many more (mostly finalizers like Close/Stop/etc.), but most of
the others seemed like one couldn't do much about them anyway.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Add Start/End to SelectParams
* Make remote read use the new selectParams for start/end
This commit continues to send the start/end time of the remote read query
as the overarching PromQL time range, while the specific range of data the
query is interested in receiving is now part of the ReadHints (upstream
discussion in #4226).
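A simplified sketch of the distinction described above between the overarching query range and the narrower range carried in the read hints; the struct shapes and field names are stand-ins for the actual proto messages.

```go
package main

import "fmt"

// readHints carries the range (and surrounding function) the query actually
// needs data for.
type readHints struct {
	StartMs, EndMs int64
	Func           string
}

// query keeps the overarching PromQL evaluation range, as before.
type query struct {
	StartMs, EndMs int64
	Hints          *readHints
}

func main() {
	// e.g. rate(metric[5m]) evaluated at t=1,000,000 ms: the outer range is
	// unchanged, while the hints narrow the data window to the last 5 minutes.
	q := query{
		StartMs: 1_000_000,
		EndMs:   1_000_000,
		Hints:   &readHints{StartMs: 1_000_000 - 5*60*1000, EndMs: 1_000_000, Func: "rate"},
	}
	fmt.Printf("query range: [%d, %d], hinted data range: [%d, %d]\n",
		q.StartMs, q.EndMs, q.Hints.StartMs, q.Hints.EndMs)
}
```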
* Remove unused vendored code
The genproto.sh script was updated, but the code wasn't regenerated.
This simply removes the vendored deps that are no longer part of the
codegen output.
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
This commit fixes a denial-of-service issue in the remote
read endpoint. It limits the size of the POST request body
to 32 MB so that clients cannot write arbitrary amounts
of data to the server memory.
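A minimal sketch of capping the request body; http.MaxBytesReader is one standard way to enforce such a cap, though not necessarily the exact mechanism used by the actual fix.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

const maxBodySize = 32 << 20 // 32 MB

func remoteReadHandler(w http.ResponseWriter, r *http.Request) {
	// Reads beyond the cap fail, so a client cannot force the server to
	// buffer an arbitrarily large body in memory.
	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxBodySize))
	if err != nil {
		http.Error(w, err.Error(), http.StatusRequestEntityTooLarge)
		return
	}
	_ = body // decode the snappy/proto payload here
	w.WriteHeader(http.StatusOK)
}

func main() {
	// Exercise the handler with a small in-memory request.
	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodPost, "/api/v1/read", strings.NewReader("small body"))
	remoteReadHandler(rec, req)
	fmt.Println("status:", rec.Code)
}
```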
Fixes #4238
Signed-off-by: Andreas Auernhammer <aead@mail.de>
More than one remote_write destination can be configured, in which
case it's essential to know which one each log message refers to.
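As a sketch, go-kit's log.With can stamp each queue's logger with its name so every subsequent message carries it; the key and value below are example choices.

```go
package main

import (
	"os"

	"github.com/go-kit/kit/log"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stdout)

	// Each remote_write queue gets its own child logger, so every message it
	// emits records which destination it refers to.
	queueLogger := log.With(logger, "queue", "remote-1")
	queueLogger.Log("msg", "resharding", "from", 1, "to", 4)
	// Output (roughly): queue=remote-1 msg=resharding from=1 to=4
}
```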
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This adds a parameter to the storage selection interface which allows
query engine(s) to pass information about the operations surrounding a
data selection.
This can for example be used by remote storage backends to infer the
correct downsampling aggregates that need to be provided.
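A simplified sketch of threading such parameters through the selection interface; selectParams and the aggregate mapping below are illustrative assumptions showing how a remote backend could choose a downsampling aggregate from the surrounding function.

```go
package main

import "fmt"

// selectParams describes the operations surrounding a data selection.
type selectParams struct {
	Start, End int64  // millisecond range the query needs
	Func       string // surrounding call, e.g. "rate", "avg_over_time"
}

type querier interface {
	Select(p *selectParams, matchers ...string) ([]string, error)
}

type remoteQuerier struct{}

func (remoteQuerier) Select(p *selectParams, matchers ...string) ([]string, error) {
	agg := "raw"
	switch p.Func {
	case "rate", "increase":
		agg = "counter-rate" // a backend might serve pre-aggregated rates
	case "avg_over_time":
		agg = "avg"
	}
	return []string{fmt.Sprintf("series for %v using %s downsampling", matchers, agg)}, nil
}

func main() {
	var q querier = remoteQuerier{}
	res, _ := q.Select(&selectParams{Start: 0, End: 60_000, Func: "rate"}, `{__name__="http_requests_total"}`)
	fmt.Println(res)
}
```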