* tsdb/agent: synchronize appender code with grafana/agent main
This commit synchronizes the appender code with grafana/agent main. This
includes adding support for appending exemplars.
Closes #9610
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: fix build error
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: introduce some exemplar tests, refactor tests a little
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: address review feedback
- Re-use hash when creating a new series
- Fix typo in exemplar append error
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: remove unused AddFast method
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: close wal reader after test
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: add out-of-order tracking, change series TS in commit
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* address review feedback
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
There was a subtle and nasty bug in listSeriesIterator.Seek.
In addition, the Seek call is defined to be a no-op if the current
position of the iterator is already pointing to a suitable
sample. This commit adds fast paths for this case to several
potentially expensive Seek calls.
Another bug was in concreteSeriesIterator.Seek. It always searched the
whole series and not from the current position of the iterator.
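For illustration, a minimal sketch of the fast-path idea, using a hypothetical sliceSeriesIterator rather than the actual listSeriesIterator/concreteSeriesIterator types:

```go
package main

import (
	"fmt"
	"sort"
)

type sample struct {
	t int64
	v float64
}

// sliceSeriesIterator is a hypothetical stand-in for the list/concrete series iterators.
type sliceSeriesIterator struct {
	samples []sample
	idx     int // current position, -1 before the first Next/Seek
}

// Seek advances to the first sample with timestamp >= t.
// Fast path: if the current sample already satisfies t, it is a no-op.
// Otherwise it searches only from the current position onwards, not the whole series.
func (it *sliceSeriesIterator) Seek(t int64) bool {
	if it.idx < 0 {
		it.idx = 0
	}
	if it.idx >= len(it.samples) {
		return false
	}
	if it.samples[it.idx].t >= t { // fast path: already at a suitable sample
		return true
	}
	it.idx += sort.Search(len(it.samples)-it.idx, func(i int) bool {
		return it.samples[it.idx+i].t >= t
	})
	return it.idx < len(it.samples)
}

func (it *sliceSeriesIterator) At() (int64, float64) {
	s := it.samples[it.idx]
	return s.t, s.v
}

func main() {
	it := &sliceSeriesIterator{samples: []sample{{1, 1}, {2, 2}, {3, 3}}, idx: -1}
	fmt.Println(it.Seek(2)) // true
	fmt.Println(it.At())    // 2 2
	fmt.Println(it.Seek(2)) // true (no-op: already positioned on a suitable sample)
}
```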
Signed-off-by: beorn7 <beorn@grafana.com>
- Pick At... method via return value of Next/Seek.
- Do not clobber returned buckets.
- Add partial FloatHistogram support.
Note that the promql package is now _only_ dealing with
FloatHistograms, following the idea that PromQL only knows float
values.
As a byproduct, I have removed the histogramSeries metric. In my
understanding, series can have both float and histogram samples, so
that metric doesn't make sense anymore.
As another byproduct, I have converged the sampleBuf and the
histogramSampleBuf in memSeries into one. The sample type stored in
the sampleBuf has been extended to also contain histograms even before
this commit.
Signed-off-by: beorn7 <beorn@grafana.com>
* Support appending different sample types to the same series
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix comments
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix build
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix panic on WAL replay
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Refactor: introduce walSubsetProcessor
walSubsetProcessor packages up the `processWALSamples()` function and
its input and output channels, helping to clarify how these things
relate.
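For context, a simplified sketch of the shape of such a refactor; the channel types and method set here are illustrative, not the actual Prometheus implementation:

```go
package main

import "fmt"

// walSubsetProcessor bundles the worker loop with its input and output
// channels so their relationship is explicit (simplified sketch).
type walSubsetProcessor struct {
	input  chan []int64 // batches of sample timestamps to process
	output chan []int64 // processed batches handed back for reuse
}

func (p *walSubsetProcessor) setup() {
	p.input = make(chan []int64, 4)
	p.output = make(chan []int64, 4)
}

// processWALSamples drains the input channel until it is closed, returning
// each batch on the output channel so the sender can recycle the slice.
func (p *walSubsetProcessor) processWALSamples() (count int) {
	for batch := range p.input {
		count += len(batch)
		p.output <- batch
	}
	close(p.output)
	return count
}

func main() {
	var p walSubsetProcessor
	p.setup()
	done := make(chan int)
	go func() { done <- p.processWALSamples() }()

	p.input <- []int64{1, 2, 3}
	<-p.output // recycle the batch
	close(p.input)
	fmt.Println(<-done) // 3
}
```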
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Refactor: extract more methods onto walSubsetProcessor
This makes the main logic easier to follow.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Fix race warning by locking processWALSamples
Although we have waited for the processor to finish, we still get a
warning from the race detector because it doesn't know how the different
parts relate.
Add a lock around each batch of samples, so the race detector can see
that we never access series owned by the processor outside of a lock.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Added test to reproduce issue 9859
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Remove redundant unit test
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix out of order chunks during WAL replay
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix nits
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Marco Pracucci <marco@pracucci.com>
Added validation of the expected postings length compared to the length of
the bytes slice. With 32-bit postings, we expect 4 bytes per posting. If
the numbers don't add up, we know that the input data is not compatible
with our code (maybe it's truncated, padded with garbage, or even written
with a different codec).
This is needed in downstream projects to correctly identify cached
postings written with an unknown codec, but it's also a good idea to
validate it here.
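A rough sketch of the kind of check described (hypothetical helper with a count prefix, not the actual tsdb/index code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeBigEndianPostings validates that the byte slice length matches the
// declared number of 32-bit postings before decoding. Simplified sketch only.
func decodeBigEndianPostings(b []byte) ([]uint32, error) {
	if len(b) < 4 {
		return nil, fmt.Errorf("postings entry too short: %d bytes", len(b))
	}
	n := binary.BigEndian.Uint32(b[:4]) // declared number of postings
	b = b[4:]
	if uint64(len(b)) != uint64(n)*4 { // 4 bytes per 32-bit posting
		return nil, fmt.Errorf("unexpected postings length: want %d bytes for %d postings, got %d", uint64(n)*4, n, len(b))
	}
	out := make([]uint32, 0, n)
	for len(b) >= 4 {
		out = append(out, binary.BigEndian.Uint32(b[:4]))
		b = b[4:]
	}
	return out, nil
}

func main() {
	buf := []byte{0, 0, 0, 2, 0, 0, 0, 7, 0, 0, 0, 9}
	fmt.Println(decodeBigEndianPostings(buf))              // [7 9] <nil>
	fmt.Println(decodeBigEndianPostings(buf[:len(buf)-1])) // error: truncated input
}
```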
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Disable isolation in isolation struct
Signed-off-by: darshanime <deathbullet@gmail.com>
* Run tsdb tests with isolation disabled
Signed-off-by: darshanime <deathbullet@gmail.com>
* Check for isolation disabled in isoState.Close()
Signed-off-by: darshanime <deathbullet@gmail.com>
* use t.Skip to skip isolation tests when disabled
Signed-off-by: darshanime <deathbullet@gmail.com>
* address review comments
Signed-off-by: darshanime <deathbullet@gmail.com>
* fix test for defaultIsolationState
Signed-off-by: darshanime <deathbullet@gmail.com>
* Change flag name. Set flag in DB. Do not init txRing. Close isoState.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Test disabled isolation in CircleCI test_go
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Skip isolation related tests in db_test.go
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Calling `wal.NewSegmentBufReader()` without any segments would cause a
`panic`, crashing Prometheus. This patch fixes the panic by making
segmentBufReader return an EOF if there are no segments.
This also means that an empty checkpoint directory, which should never
happen unless it has been tampered with (or has issues due to the
underlying filesystem, e.g. NFS), would be ignored and Prometheus would
continue to run instead of panicking as it does today.
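A minimal sketch of the behaviour (simplified reader, not the real wal.segmentBufReader):

```go
package main

import (
	"fmt"
	"io"
)

// segReader is a simplified stand-in for a segment buffer reader: it reads
// from a list of segments and must not panic when that list is empty.
type segReader struct {
	segs []io.Reader
	cur  int
}

func (r *segReader) Read(p []byte) (int, error) {
	// With no segments (e.g. an empty checkpoint directory), report EOF
	// instead of indexing into an empty slice and panicking.
	if len(r.segs) == 0 {
		return 0, io.EOF
	}
	n, err := r.segs[r.cur].Read(p)
	if err == io.EOF && r.cur < len(r.segs)-1 {
		r.cur++
		return n, nil
	}
	return n, err
}

func main() {
	var r segReader
	buf := make([]byte, 8)
	_, err := r.Read(buf)
	fmt.Println(err) // EOF, not a panic
}
```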
Fixes: https://github.com/prometheus/prometheus/issues/9605
Signed-off-by: Sunil Thaha <sthaha@redhat.com>
* Add basic initial developer docs for TSDB
There's a decent amount of content already out there (blog posts,
conference talks, etc), but:
* when they get stale, they don't tend to get updated
* they still leave me with questions that I'd like to answer
for developers (like me) who want to use, or work with, TSDB
What I propose is developer docs inside the prometheus
repository. Easy to find and harness the power of the community
to expand it and keep it up to date.
* perfect is the enemy of good. Let's have a base and incrementally improve
* Markdown docs should be broad but not too deep. Source code comments
can complement them, and are the ideal place for implementation details.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* use example code that works out of the box
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Apply suggestions from code review
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* PR feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* more docs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* PR feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Apply suggestions from code review
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Apply suggestions from code review
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Update tsdb/docs/usage.md
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* final tweaks
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* workaround docs versioning issue
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Move example code to real executable, testable example.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* cleanup example test and make sure it always reproduces
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* obtain temp dir in a way that works with older Go versions
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Fix Ganesh's comments
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
This is to avoid copying the many fields of a histogram.Histogram all
the time.
This also fixes a bunch of formerly broken tests.
Signed-off-by: beorn7 <beorn@grafana.com>
* share tsdb db locker code with agent
Closes #9616
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* add flag to disable lockfile for agent
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* use agentOnlySetting instead of PreAction
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb: address review feedback
1. Rename Locker to DirLocker
2. Move DirLocker to tsdb/tsdbutil
3. Name metric using fmt.Sprintf
4. Refine error checking in DirLocker test
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb: create test utilities to assert expected DirLocker behavior
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/tsdbutil: fix lint errors
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: fix windows test failure
Use new DB variable instead of overriding the old one.
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* Add more size checks when writing individual sections in the index.
Signed-off-by: Peter Štibraný <pstibrany@gmail.com>
* Use uint and add comment about it.
Signed-off-by: Peter Štibraný <pstibrany@gmail.com>
* Close agent db in tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Close first DB before opening second
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Use seperate variables for different DBs?
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Close remote storage
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix closing of stuff
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Remove the build flags after a rebase
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix closing of stuff 2
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Julien Pivotto <roidelapluie@inuits.eu>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
It seems you can sometimes get an error like:
Error: Not equal:
expected: []byte(nil)
actual : []byte{}
Diff:
--- Expected
+++ Actual
@@ -1,2 +1,3 @@
-([]uint8) <nil>
+([]uint8) {
+}
This commit does what bytes.Equal does to silence those differences. I'm
not sure if this is a correct solution or just covering up the actual bug.
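For illustration, the gist of the difference: a nil byte slice and an empty byte slice are distinguished by strict (deep) equality but not by bytes.Equal:

```go
package main

import (
	"bytes"
	"fmt"
	"reflect"
)

func main() {
	var a []byte  // nil slice
	b := []byte{} // empty, non-nil slice

	// Deep equality distinguishes nil from empty.
	fmt.Println(reflect.DeepEqual(a, b)) // false

	// bytes.Equal treats a nil slice and an empty slice as equal.
	fmt.Println(bytes.Equal(a, b)) // true
}
```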
Closes #9574
Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
This creates a new `model` directory and moves all data-model related
packages over there:
exemplar labels relabel rulefmt textparse timestamp value
All the others are more or less utilities and have been moved to `util`:
gate logging modtimevfs pool runtime
Signed-off-by: beorn7 <beorn@grafana.com>
* TSDB: demystify seriesRefs and ChunkRefs
The TSDB package contains many types of series and chunk references,
all shrouded in uint types. Often the same uint value may actually mean
one of several different types, in non-obvious ways.
This PR aims to clarify the code and help navigate to relevant docs,
usage, etc. much more quickly.
Concretely:
* Use appropriately named types and document their semantics and
relations.
* Make multiplexing and demuxing of types explicit
(on the boundaries between concrete implementations and generic
interfaces).
* Casting between different types should be free. None of the changes
should have any impact on how the code runs.
TODO: Implement BlockSeriesRef where appropriate (for a future PR)
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* agent: demystify seriesRefs and ChunkRefs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Allow to disable trimming when querying TSDB
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Addressed review comments
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added unit test
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Renamed TrimDisabled to DisableTrimming
Signed-off-by: Marco Pracucci <marco@pracucci.com>
We are aware of the issue, but while we are working on it,
having main tests broken is an annoyance.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Initial draft of prometheus-agent
This commit introduces a new binary, prometheus-agent, based on the
Grafana Agent code. It runs a WAL-only version of prometheus without the
TSDB, alerting, or rule evaluations. It is intended to be used to
remote_write to Prometheus or another remote_write receiver.
By default, prometheus-agent will listen on port 9095 to not collide
with the prometheus default of 9090.
Truncation of the WAL cooperates on a best-effort basis with remote
write. Every time the WAL is truncated, the minimum timestamp of data to
truncate is determined by the lowest sent timestamp of all samples
across all remote_write endpoints. This gives loose guarantees that data
will not be removed from the WAL until the maximum sample lifetime
passes or remote_write starts functioning.
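A sketch of how that minimum could be computed (hypothetical queue type, simplified from the description above):

```go
package main

import (
	"fmt"
	"math"
)

// queue is a hypothetical stand-in for a remote_write queue.
type queue struct {
	lowestSentTimestamp int64 // ms; oldest sample not yet confirmed sent
}

// truncationTimestamp returns the minimum lowest-sent timestamp across all
// queues; truncating the WAL only up to this point keeps data until every
// endpoint has had a chance to send it.
func truncationTimestamp(queues []queue) int64 {
	ts := int64(math.MaxInt64)
	for _, q := range queues {
		if q.lowestSentTimestamp < ts {
			ts = q.lowestSentTimestamp
		}
	}
	return ts
}

func main() {
	qs := []queue{{lowestSentTimestamp: 1_000}, {lowestSentTimestamp: 2_500}}
	fmt.Println(truncationTimestamp(qs)) // 1000: nothing newer than this is truncated
}
```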
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* add tests for Prometheus agent (#22)
* add tests for Prometheus agent
* add tests for Prometheus agent
* rearranged tests as per the review comments
* update tests for Agent
* changes as per code review comments
Signed-off-by: SriKrishna Paparaju <paparaju@gmail.com>
* incremental changes to prometheus agent
Signed-off-by: SriKrishna Paparaju <paparaju@gmail.com>
* changes as per code review comments
Signed-off-by: SriKrishna Paparaju <paparaju@gmail.com>
* Commit feedback from code review
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* Port over some comments from grafana/agent
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* Rename agent.Storage to agent.DB for tsdb consistency
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* Consolidate agentMode ifs in cmd/prometheus/main.go
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* Document PreAction usage requirements better for agent mode flags
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* remove unnecessary defaultListenAddr
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* `go fmt ./tsdb/agent` and fix lint errors
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
Co-authored-by: SriKrishna Paparaju <paparaju@gmail.com>
This guarantees that the zero threshold is byte-aligned. Not sure if
that helps in any way, but at least it won't harm.
Signed-off-by: beorn7 <beorn@grafana.com>
This adds bit buckets for larger numbers to varbit encoding and also
an unsigned version of varbit encoding.
Then, varbit encoding is used for all the histogram chunk data instead
of varint.
Signed-off-by: beorn7 <beorn@grafana.com>
* Use dedicated Ref type
Throughout the code base, there are reference types masked as
regular integers. Let's use dedicated types. They are
equivalent, but clearer semantically.
This also makes it trivial to find where they are used,
and from uses, find the centralized docs.
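A sketch of what such dedicated types look like; the names here are illustrative, and the real definitions live in the chunks/storage packages and may differ:

```go
package main

import "fmt"

// Hypothetical named reference types in the spirit of the change.
type (
	// SeriesRef identifies a series inside a particular implementation.
	SeriesRef uint64
	// ChunkRef identifies a chunk, e.g. an offset into a chunk file.
	ChunkRef uint64
)

// appendSample's signature now documents what kind of reference it expects.
func appendSample(ref SeriesRef, t int64, v float64) {
	fmt.Printf("append to series %d: %d %g\n", ref, t, v)
}

func main() {
	ref := SeriesRef(42)
	appendSample(ref, 1000, 1.5)

	// Conversions between the named types and plain uint64 are explicit but
	// free at runtime, so behaviour is unchanged.
	raw := uint64(ref)
	_ = ChunkRef(raw)
}
```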
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* postpone some work until after possible return
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* clarify
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* rename feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* skip header is up to caller
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
A lot of this code was hacked together, literally during a
hackathon. This commit intends not to change the code substantially,
but just make the code obey the usual style practices.
A (possibly incomplete) list of areas:
* Generally address linter warnings.
* The `pkg` directory is deprecated as per dev-summit. No new packages should
be added to it. I moved the new `pkg/histogram` package to `model`
anticipating what's proposed in #9478.
* Make the naming of the Sparse Histogram more consistent. Including
abbreviations, there were just too many names for it: SparseHistogram,
Histogram, Histo, hist, his, shs, h. The idea is to call it "Histogram" in
general. Only add "Sparse" if it is needed to avoid confusion with
conventional Histograms (which is rare because the TSDB really has no notion
of conventional Histograms). Use abbreviations only in local scope, and then
really abbreviate (not just removing three out of seven letters like in
"Histo"). This is in the spirit of
https://github.com/golang/go/wiki/CodeReviewComments#variable-names
* Several other minor name changes.
* A lot of formatting of doc comments. For one, following
https://github.com/golang/go/wiki/CodeReviewComments#comment-sentences
, but also layout questions, anticipating how things will look
when rendered by `godoc` (even where `godoc` doesn't render them
right now because they are for unexported types or not a doc comment
at all but just a normal code comment - consistency is queen!).
* Re-enabled `TestQueryLog` and `TestEndpoints` (they pass now,
leaving them disabled was presumably an oversight).
* Bucket iterator for histogram.Histogram is now created with a
method.
* HistogramChunk.iterator now allows iterator recycling. (I think
@dieterbe only commented it out because he was confused by the
question in the comment.)
* HistogramAppender.Append panics now because we decided to treat
staleness marker differently.
Signed-off-by: beorn7 <beorn@grafana.com>
* Support stale samples for sparse histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Don't cut a new chunk for every stale sample
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Update comments for HistoAppender.Appendable
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix review comments
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Call delete on head if interval overlaps
Signed-off-by: darshanime <deathbullet@gmail.com>
* Garbage collect tombstones during head gc
Signed-off-by: darshanime <deathbullet@gmail.com>
* Truncate tombstones before min time during head gc
Signed-off-by: darshanime <deathbullet@gmail.com>
* Lock less by deleting all keys in a single pass
Signed-off-by: darshanime <deathbullet@gmail.com>
* Pass map to DeleteTombstones
Signed-off-by: darshanime <deathbullet@gmail.com>
* Create new slice to replace old one
Signed-off-by: darshanime <deathbullet@gmail.com>
* Decrement active_appenders metric when no samples added
Also add a test that the metric is incremented and decremented as
expected with and without samples.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Fix comment
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
This saves memory, effort and locking.
Since every symbol is also added to postings, `Symbols()` can be
implemented there instead. This now has to build a map for
deduplication, but `Symbols()` is only called for compaction, and `gc()`
used to rebuild the symbols map after every compaction, so this is not
an additional cost.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Remove query hacks in the API and fix metrics
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Tests for the metrics
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Better way to count series on restart
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
The previous code re-used series IDs 1-1000 many times over, so a lot
of time was spent ensuring the lists of series were in ascending order.
The intended use of `MemPostings.Add()` is that all series IDs are
unique, and changing the benchmark to do this lets it finish ten times
faster.
(It doesn't affect the benchmark result, just the setup code)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* PromQL: Fix start and end keywords masking label and metric names
This commit fixes an issue with the "at modifier" that introduced two
new keywords: `start` and `end`. In grouping options and in metric
names, these keywords took precedence over metric or label names, so
that those metrics and labels could no longer be referenced.
Signed-off-by: Clayton Peters <clayton.peters@man.com>
* Add in additional tests for metrics and/or labels called start/end.
Signed-off-by: Clayton Peters <clayton.peters@man.com>
* *: Cut 2.29.0-rc.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* VERSION: bump to 2.29.0-rc.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Remove experimental wording on size-based retention
Followup of #9004
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix PR reference in changelog
Signed-off-by: George Brighton <george@gebn.co.uk>
* Describe EC2 availability zone IDs at most once per refresh (#9142)
Signed-off-by: George Brighton <george@gebn.co.uk>
* Describe EC2 availability zones at most once per SD load
Closes #9142.
Signed-off-by: George Brighton <george@gebn.co.uk>
* Incorporate feedback
Signed-off-by: George Brighton <george@gebn.co.uk>
* Integrate feedback
Signed-off-by: George Brighton <george@gebn.co.uk>
* Add a compatibility note for macOS users.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* *: Cut v2.29.0-rc.1
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Fix `kuma_sd` targetgroup reporting (#9157)
* Bundle all xDS targets into a single group
Signed-off-by: austin ce <austin.cawley@gmail.com>
* *: cut v2.29.0-rc.2
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Rename links
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* bump codemirror-promql to 0.17.0
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* *: cut v2.29.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* tsdb: align atomically accessed int64 (#9192)
This prevents a panic in 32-bit archs:
https://pkg.go.dev/sync/atomic#pkg-note-BUG
Fixed #9190
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Release 2.29.1 (#9193)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Co-authored-by: Clayton Peters <clayton.peters@man.com>
Co-authored-by: Frederic Branczyk <fbranczyk@gmail.com>
Co-authored-by: George Brighton <george@gebn.co.uk>
Co-authored-by: Austin Cawley-Edwards <austin.cawley@gmail.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Augustin Husson <husson.augustin@gmail.com>
* Fix `kuma_sd` targetgroup reporting (#9157)
* Bundle all xDS targets into a single group
Signed-off-by: austin ce <austin.cawley@gmail.com>
* Snapshot in-memory chunks on shutdown for faster restarts (#7229)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Rename links
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Remove Individual Data Type Caps in Per-shard Buffering for Remote Write (#8921)
* Moved everything to nPending buffer
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Simplify exemplar capacity addition
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added pre-allocation
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Don't allocate if not sending exemplars
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Avoid deadlock when processing duplicate series record (#9170)
* Avoid deadlock when processing duplicate series record
`processWALSamples()` needs to be able to send on its output channel
before it can read from its input channel, so the sending side also reads
from the output channel to allow this in case it is full.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* processWALSamples: update comment
Previous text seems to relate to an earlier implementation.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Optimise WAL loading by removing extra map and caching min-time (#9160)
* BenchmarkLoadWAL: close WAL after use
So that goroutines are stopped and resources released
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* BenchmarkLoadWAL: make series IDs co-prime with #workers
Series are distributed across workers by taking the modulus of the
ID with the number of workers, so multiples of 100 are a poor choice.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* BenchmarkLoadWAL: simulate mmapped chunks
Real Prometheus cuts chunks every 120 samples, then skips those samples
when re-reading the WAL. Simulate this by creating a single mapped chunk
for each series, since the max time is all the reader looks at.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Fix comment
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Remove series map from processWALSamples()
The lock that the map is commented as reducing contention on is now
sharded 32,000 ways, so it won't be contended. Removing the map saves memory and
goes just as fast.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* loadWAL: Cache the last mmapped chunk time
So we can skip calling append() for samples it will reject.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Improvements from code review
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Full stops and capitals on comments
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Cache max time in both places mmappedChunks is updated
Including refactor to extract function `setMMappedChunks`, to reduce
code duplication.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Update head min/max time when mmapped chunks added
This ensures we have the correct values if no WAL samples are added for
that series.
Note that `mSeries.maxTime()` was always `math.MinInt64` before, since
that function doesn't consider mmapped chunks.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Split Go and React Tests (#8897)
* Added go-ci and react-ci
Co-authored-by: Julien Pivotto <roidelapluie@inuits.eu>
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Remove search keymap from new expression editor (#9184)
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Co-authored-by: Austin Cawley-Edwards <austin.cawley@gmail.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julien Pivotto <roidelapluie@inuits.eu>
Co-authored-by: Bryan Boreham <bjboreham@gmail.com>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
I was struggling to understand the purpose of this method until I
tweaked the tests, so I decided to write down my observations.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Refactor: pass segment-reading function as param
To allow a different implementation to be used when garbage-collecting.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* remote_write: reduce blocking from GC of series
Add a method `UpdateSeriesSegment()` which is used together with
`SeriesReset()` to garbage-collect old series. This allows us to
split the lock around queueManager series data and avoid blocking
`Append()` while reading series from the last checkpoint.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Cosmetic: review feedback on comments
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* remote-write benchmark: include GC of series
Reduce the total number of samples per iteration from 5000*5000
(25 million) which is too big for my laptop, to 1*10000.
Extend `createTimeseries()` to add additional labels, so that the
queue manager is doing more realistic work.
Move the Append() call to a background goroutine - this works because
TestWriteClient uses a WaitGroup to signal completion.
Call `StoreSeries()` and `SeriesReset()` while adding samples, to
simulate the garbage-collection that wal.Watcher does.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Change BenchmarkSampleDelivery to call UpdateSeriesSegment
This matches what Watcher.garbageCollectSeries() is doing now
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* tsdb: use dennwc/varint to speed up decoding
This is a tiny library, MIT-licensed, which unrolls the loop to go
about twice as fast.
Needed to copy the sign-inverting logic inline, previously provided by
the `binary` package.
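For reference, a sketch of that sign-inverting (zigzag) step on top of an unsigned varint read; this keeps encoding/binary's Uvarint for portability rather than pulling in dennwc/varint:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readVarint decodes a signed varint by reading the unsigned form and then
// applying the zigzag sign-inversion inline, the same transformation that
// encoding/binary.Varint performs internally.
func readVarint(b []byte) (int64, int) {
	ux, n := binary.Uvarint(b)
	x := int64(ux >> 1)
	if ux&1 != 0 {
		x = ^x
	}
	return x, n
}

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, -12345)
	v, _ := readVarint(buf[:n])
	fmt.Println(v) // -12345
}
```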
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* More comments to explain varint decoding
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Previously it was allocating millions of chunks, all containing the
same 250 samples. Above some ratio of CPU performance to available
memory, the benchmark cannot run.
Make 250 a const and just allocate one chunk which we iterate
repeatedly till we reach the benchmark count.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Push the matchers for LabelNames all the way into the index.
NB This doesn't actually implement it in the index, just plumbs it through for now...
Signed-off-by: Tom Wilkie <tom@grafana.com>
* Hack it up. Does not work.
Signed-off-by: Tom Wilkie <tom@grafana.com>
* Revert changes I don't understand
Can't see why we need to hold a mutex on symbols, nor the purpose of
the LabelNamesFor method.
Maybe I'll need to re-add this later.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Implement LabelNamesFor
This method provides the label names that appear in the postings
provided. We do this deeper than the label values because we know
beforehand that most of the label names will be the same across
different postings, and we don't want to go down and up looking up the
same symbols for all the different series.
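A toy sketch of the deduplication idea (a plain map stands in for the index and postings):

```go
package main

import (
	"fmt"
	"sort"
)

// labelNamesFor collects the distinct label names of a set of series in one
// pass, since most series share the same names. Simplified illustration only;
// the real code walks postings and symbol references.
func labelNamesFor(series map[uint64]map[string]string, refs ...uint64) []string {
	seen := map[string]struct{}{}
	for _, ref := range refs {
		for name := range series[ref] {
			seen[name] = struct{}{}
		}
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	series := map[uint64]map[string]string{
		1: {"__name__": "up", "job": "node"},
		2: {"__name__": "up", "job": "node", "instance": "a"},
	}
	fmt.Println(labelNamesFor(series, 1, 2)) // [__name__ instance job]
}
```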
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Mutex on symbols should be unlocked
However, I still don't understand why we need a mutex here.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix head.LabelNamesFor
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Implement mockIndex LabelNames with matchers
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Nitpick on slice initialisation
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add tests for LabelNamesWithMatchers
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix the mutex mess on head.LabelValues/LabelNames
I still don't see why we need to grab that unrelated mutex, but at least
now we're grabbing it consistently
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Check error after iterating postings
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use the error from posting when there was an error in postings
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Update storage/interface.go comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Remove unneeded comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add testcases for LabelNames w/matchers in api.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use t.Cleanup() instead of defer in tests
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Tom Wilkie <tom@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Compaction fails when total symbol size exceeds 2^32 bytes.
Signed-off-by: tanghengjian <1040104807@qq.com>
* Compaction fails when total symbol size exceeds 2^32 bytes.
Signed-off-by: tanghengjian <1040104807@qq.com>
* Compaction fails when total symbol size exceeds 2^32 bytes.
Signed-off-by: root <tanghengjian@oppo.com>
Co-authored-by: root <tanghengjian@oppo.com>
* Create experimental circular buffer resize method, benchmarks
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Optimize exemplar resize to only replay as many exemplars as needed
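A simplified sketch of the replay-only-what-fits idea on a toy ring buffer (not the actual exemplar storage):

```go
package main

import "fmt"

// ring is a toy circular buffer; Resize copies only the newest entries that
// fit in the new capacity instead of replaying everything.
type ring struct {
	buf  []int
	next int // index of the next write
	len  int // number of valid entries
}

func newRing(capacity int) *ring { return &ring{buf: make([]int, capacity)} }

func (r *ring) Add(v int) {
	r.buf[r.next] = v
	r.next = (r.next + 1) % len(r.buf)
	if r.len < len(r.buf) {
		r.len++
	}
}

// Resize replays at most newCap of the most recent entries into a new buffer.
func (r *ring) Resize(newCap int) {
	n := r.len
	if n > newCap {
		n = newCap // only the newest entries survive a shrink
	}
	nb := make([]int, newCap)
	for i := 0; i < n; i++ {
		// Walk backwards from the most recently written entry.
		idx := ((r.next-1-i)%len(r.buf) + len(r.buf)) % len(r.buf)
		nb[n-1-i] = r.buf[idx]
	}
	r.buf, r.next, r.len = nb, n%newCap, n
}

func (r *ring) Values() []int {
	out := make([]int, 0, r.len)
	for i := 0; i < r.len; i++ {
		idx := ((r.next-r.len+i)%len(r.buf) + len(r.buf)) % len(r.buf)
		out = append(out, r.buf[idx])
	}
	return out
}

func main() {
	r := newRing(4)
	for _, v := range []int{1, 2, 3, 4, 5} {
		r.Add(v)
	}
	fmt.Println(r.Values()) // [2 3 4 5]
	r.Resize(2)
	fmt.Println(r.Values()) // [4 5]
}
```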
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* More comments, benchmark AddExemplar
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* optimizations
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* comment
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Slight refactor of resize benchmark + make use of resize via runtime
reloadable storage config.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Some more config related changes.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address some review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address more review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Refactor to remove usage of noopExemplarStorage and avoid race condition
when resizing from Head code.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix or add comments to clarify some of the new behaviour.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix potential panics related to negative exemplar buffer lengths
Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Callum Styan <callumstyan@gmail.com>
Fetch the low watermark value under the same lock as we need for the
appender, rather than releasing then re-acquiring a lock on the same
Mutex.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
An opinionated cosmetic change, but since Go 1.13 we have these fancy
0b... literals, so we don't need to write hex and a comment with the
binary value.
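For example (hypothetical constant names):

```go
package main

import "fmt"

func main() {
	// Before: hexadecimal literal plus a comment spelling out the bits.
	const maskHex = 0x3f // 0b00111111
	// Since Go 1.13: the binary value can be written directly.
	const maskBin = 0b00111111

	fmt.Println(maskHex == maskBin) // true
}
```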
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Update querier.go to support Head compaction with histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Add test for Head compaction with histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix tests
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Do not panic on histoAppender.Append
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* M-map all chunks on shutdown
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Support negative schema for querying
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* bucketIterator which returns all valid bucket indices for a []span
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* support for comparing []spans and generating interjections
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add license header
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* assert order fix
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* handle pathological 0-length span case more gracefully
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* stale todo
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* decode-recode histograms when new buckets appear
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* factor out recoding and also add it to the fallback case
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* make linter happy
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* factor out different varbit schemes and include Beorn's "optimum" for buckets
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* use more compact dod encoding scheme for SHS chunk columns
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* remove FB VB and xor dod encoding because we won't use it
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* HistoChunk metadata encoding
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add SparseHistogram.Copy()
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* histogram test: test appending a few histograms
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add license headers
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Append sparse histograms into the Head block
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Add AtHistogram() to Iterator interface. Make HistoChunk conform to Chunk interface.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* integer types and timestamp separation
1) Unify types to int64, as suggested by beorn. We want to support
counters going down (resets), even if for now we plan to create new
chunks in that case.
2) The histogram type doesn't know its own timestamp; include it
separately in appending and iteration.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* correction: count and zeroCount to remain unsigned
to make the API more resilient, and that's what we use in protobuf anyway
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* temp hack. Ganesh will fix
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Add cleanup of the lockfile when the DB is cleanly closed.
The metric describes the status of the lockfile on startup:
0: Already existed
1: Did not exist
-1: Disabled
Therefore, if the min value over time of this metric is 0, that means that executions have exited uncleanly.
We can then use that metric to have a much lower threshold on the crashlooping alert:
If the metric exists and it has been zero, two restarts are enough to trigger the alarm.
If it does not exist (an old Prometheus version, for example), the current five-restart threshold remains.
Signed-off-by: Julien Duchesne <julien.duchesne@grafana.com>
* Change metric name + set unset value to -1
Signed-off-by: Julien Duchesne <julien.duchesne@grafana.com>
* Only check the last value of the clean start alert
Signed-off-by: Julien Duchesne <julien.duchesne@grafana.com>
* Fix test + nit
Signed-off-by: Julien Duchesne <julien.duchesne@grafana.com>
* Added walreplay API endpoint
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added starting page to react-ui
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Documented the new endpoint
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed typos
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
* Removed logo
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed isResponding to isUnexpected
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed width of progress bar
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed width of progress bar
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added DB stats object
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Updated starting page to work with new fields
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 2)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 3)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (and also implementing a method this time) (pt. 4)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (and also implementing a method this time) (pt. 5)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed const to let
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 6)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Remove SetStats method
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added comma
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed api
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed to triple equals
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed data response types
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Don't return pointer
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed version
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed interface issue
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed pointer
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed copying lock value error
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
It's common to see queries like bar=~"foo" from machine-generated
queries in the frontend. These are not evaluated as regexps, but as a
single-value set, i.e. an equality matching instead.
This is just a testcase for a single-value case.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Write exemplars to the WAL and send them over remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Update example for exemplars, print data in a more obvious format.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add metrics for remote write of exemplars.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix incorrect slices passed to send in remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* We need to unregister the new metrics.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Order of exemplar append vs write exemplar to WAL needs to change.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Several fixes to prevent sending uninitialized or incorrect samples with an exemplar. Fix dropping exemplar for missing series. Add tests for queue_manager sending exemplars
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Store both samples and exemplars in the same timeseries buffer to remove the alloc when building final request, keep sub-slices in separate buffers for re-use
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Condense sample/exemplar delivery tests to parameterized sub-tests
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Rename test methods for clarity now that they also handle exemplars
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Rename counter variable. Fix instances where metrics were not updated correctly
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Add exemplars to LoadWAL benchmark
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* last exemplars timestamp metric needs to convert value to seconds with
ms precision
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Process exemplar records in a separate go routine when loading the WAL.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments related to clarifying comments and variable
names. Also refactor sample/exemplar to enqueue prompb types.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Regenerate types proto with comments, update protoc version again.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Put remote write of exemplars behind a feature flag.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address some of Ganesh's review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Move exemplar remote write feature flag to a config file field.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address Bartek's review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Don't allocate exemplar buffers in queue_manager if we're not going to
send exemplars over remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add ValidateExemplar function, validate exemplars when appending to head
and log them all to WAL before adding them to exemplar storage.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address more reivew comments from Ganesh.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add exemplar total label length check.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address a few last review comments
Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Martin Disibio <mdisibio@gmail.com>
* Added test to reproduce panic on TSDB head chunks truncated while querying
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added test for Querier too
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Stop the bleed on mmap-ed head chunks panic
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Lower memory pressure in tests to ensure it doesn't OOM
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Skip TestQuerier_ShouldNotPanicIfHeadChunkIsTruncatedWhileReadingQueriedChunks
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Experiment to not trigger runtime.GC() continuously
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Try to fix test in CI
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Do not call runtime.GC() at all
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* I have no idea why it's failing in CI, skipping tests
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Snappy cannot encode records larger than ~3.7 GB and will panic if an
encoding is attempted. Check to make sure that the record is smaller
than this before encoding.
In the future, we could improve this behavior to still compress large
records (or break them up into smaller records), but this avoids the
panic for users with very large single scrape targets.
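A minimal sketch of such a guard, using github.com/golang/snappy; the real WAL code differs in details:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// compressIfPossible compresses rec with Snappy unless it is too large to
// encode, in which case it returns the record uncompressed.
func compressIfPossible(rec []byte) ([]byte, bool) {
	// MaxEncodedLen returns a negative value if the input is too large for
	// the Snappy block format (roughly 3.7 GB).
	if snappy.MaxEncodedLen(len(rec)) < 0 {
		return rec, false
	}
	return snappy.Encode(nil, rec), true
}

func main() {
	out, compressed := compressIfPossible([]byte("some WAL record"))
	fmt.Println(len(out), compressed)
}
```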
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
* Add range query test cases
This includes a couple of failing ones that double count some points due
to the iterator seek bug.
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Add Seek() implementation for memSafeIterator
Previously, calling memSafeIterator.Seek() would call the Seek() method
on its embedded iterator. This was causing the embedded iterator and the
memSafeIterator to get out of sync because when the embedded Seek()
moved to the next element of the embedded iterator, memSafeIterator
didn't "know" about it. memSafeIterator has to "know" when the embedded
iterator has moved to be able to work out when it should be reading from
its buffer rather than the embedded iterator.
Used same logic as for xorIterator.Seek() (which in runtime is used as
the embedded iterator) - return false if the iterator has an error and
try to move to next element if the required time hasn't been reached, or
if no elements have been read yet. The memSafeIterator.Next() method is
being called so memSafeIterator.i is always accurate.
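A simplified sketch of the Seek-via-Next pattern described above, with toy iterator types rather than the real memSafeIterator:

```go
package main

import "fmt"

// innerIterator is a toy stand-in for the embedded (xor) iterator.
type innerIterator struct {
	vals []int64
	i    int
}

func (it *innerIterator) Next() bool {
	if it.i+1 >= len(it.vals) {
		return false
	}
	it.i++
	return true
}
func (it *innerIterator) At() int64 { return it.vals[it.i] }

// wrapIterator is a toy stand-in for the wrapper: it tracks how many samples
// have been consumed (it.i), so Seek must go through its own Next() instead
// of delegating to the inner iterator's Seek and getting out of sync.
type wrapIterator struct {
	inner *innerIterator
	i     int // number of samples this wrapper has consumed
}

func (it *wrapIterator) Next() bool {
	if !it.inner.Next() {
		return false
	}
	it.i++
	return true
}

func (it *wrapIterator) At() int64 { return it.inner.At() }

// Seek keeps calling the wrapper's own Next until the target is reached (or
// moves to the first element if nothing has been read yet), so it.i stays
// accurate.
func (it *wrapIterator) Seek(t int64) bool {
	for it.i == 0 || it.At() < t {
		if !it.Next() {
			return false
		}
	}
	return true
}

func main() {
	it := &wrapIterator{inner: &innerIterator{vals: []int64{10, 20, 30}, i: -1}}
	fmt.Println(it.Seek(20), it.At()) // true 20
}
```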
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
* Add tsdb package test
Signed-off-by: Fiona Liao <fiona.y.liao@gmail.com>
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
The purpose of GetRef() is to allow Append() to be called without
the caller needing to copy the labels. To avoid a race where a series
is removed from TSDB between the calls to GetRef() and Append(), we
return TSDB's copy of the labels.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add method to get reference number for TSDB Appender
In situations where we need to copy labels before calling Add(),
GetRef() allows us to check first, then call AddFast() in the case that the series
is already known.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add explicit interface for GetRef() method
Suggested in code review by @bwplotka
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Rename OptionalGetRef to GetRef
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Simplify return value of GetRef()
0 can be relied on to mean 'no reference'
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The main branch tests are not passing due to the fact that #8489 was not
rebased on top of #8007.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it as it needs access to label values.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
Right now a new segment might be created unnecessarily if the
uncompressed record would not fit, but after compression (typically
reducing record size in half) it would.
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
* CleanupTombstones refactored, now reloading blocks after every compaction.
The goal is to remove deletable blocks after every compaction and, thus, decrease disk space used when cleaning tombstones.
Signed-off-by: arthursens <arthursens2005@gmail.com>
* Protect DB against parallel reloads
Signed-off-by: ArthurSens <arthursens2005@gmail.com>
* Fix typos
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
Co-authored-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
In the previous version, 1.18.0, the "megacheck" linter paid attention
to the '//lint:ignore' comment, but that is no longer there.
Newer versions pay attention to '//nolint:<linter>,<linter>,...'
comments, optionally followed by a "second" comment introduced by '//'.
Update the directives to use this style.
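For example, the newer directive style looks roughly like this (hypothetical justification comments):

```go
package main

import (
	"crypto/md5" //nolint:gosec // md5 is used only as a fast, non-cryptographic fingerprint.
	"fmt"
)

func main() {
	sum := md5.Sum([]byte("example")) //nolint:gosec // Same justification as the import above.
	fmt.Printf("%x\n", sum)
}
```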
This is related to prometheus/blackbox_exporter#738 and
prometheus/blackbox_exporter#745.
Signed-off-by: Marcelo E. Magallon <marcelo.magallon@grafana.com>
We're seeing compactions in Cortex that take hours, which the current
buckets miss. While it is not common in Prometheus, I am pretty sure
there are setups where compaction takes longer than 512s. On our own
Prometheus, the average compaction duration is 566s.
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* Fix TSDB head struct dump on querier error
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added mint/maxt to RangeHead.String()
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* test: cleanup tempdir for TestBlockWriter
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
* test: cleanup tempdir for TestLogPartialWrite
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
* fix: remove pre-2.21 tmp blocks on start
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: commenting
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* tsdb: Expose total number of label pairs in head
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: add comment for NumLabelPairs
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>
* fix: remove comment
Signed-off-by: Nguyen Le Vu Long <vulongvn98@gmail.com>