* WIP implement WAL watcher reading via notifications over a channel from
the TSDB code
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Notify via the head appender's Commit (i.e. after all WAL logging has
finished) rather than on each WAL Log call; a rough sketch of this
notification pattern follows this list of commits
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix misspelled Notify plus add a metric for dropped Write notifications
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Update tests to handle new notification pattern
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* This test may need more time on Windows
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Does this test need more time on Windows as well?
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* read timeout is already a time.Duration
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* remove mistakenly committed benchmark data files
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* address some review feedback
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix missed changes from previous commit
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix issues from wrapper function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* try fixing race condition in test by allowing tests to overwrite the
read ticker timeout instead of calling the Notify function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix linting
Signed-off-by: Callum Styan <callumstyan@gmail.com>
---------
Signed-off-by: Callum Styan <callumstyan@gmail.com>
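A rough sketch of the notification pattern described in the commits above. The names used here (WriteNotified, Watcher, readNotify, droppedNotifications) are illustrative stand-ins, not the exact identifiers in the Prometheus code base: the head appender's Commit sends a single non-blocking wake-up to the WAL watcher once all WAL logging for that commit has finished, and wake-ups that cannot be delivered are counted as dropped notifications.

```go
package main

import "fmt"

// WriteNotified is implemented by anything that wants to be woken up when
// new data has been written to the WAL, e.g. the remote-write WAL watcher.
type WriteNotified interface {
	Notify()
}

// Watcher reads the WAL. Besides its regular read ticker, it can be woken
// up early through Notify, which the head appender calls from Commit.
type Watcher struct {
	readNotify           chan struct{}
	droppedNotifications int // in Prometheus this would be a counter metric
}

func NewWatcher() *Watcher {
	// A buffer of one is enough: a single pending wake-up already triggers
	// the next read, so additional wake-ups can be dropped safely.
	return &Watcher{readNotify: make(chan struct{}, 1)}
}

// Notify is called once per head appender Commit, i.e. after all WAL
// logging for that commit has finished, not once per WAL Log call.
func (w *Watcher) Notify() {
	select {
	case w.readNotify <- struct{}{}:
	default:
		w.droppedNotifications++ // a read is already pending
	}
}

func main() {
	w := NewWatcher()
	var _ WriteNotified = w // the head only needs this small interface
	w.Notify()
	w.Notify() // dropped: the first wake-up has not been consumed yet
	fmt.Println("dropped notifications:", w.droppedNotifications)
}
```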
`head.deleted` holds the WAL segment in use at the time each series was
removed from the head. At the end of `truncateWAL()` we will delete
all segments up to `last`, so we can drop any series that were last seen
in a segment at or before that point.
(same change in Prometheus Agent too)
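A simplified sketch of that pruning step; the types below are stand-ins (the real map is keyed by chunks.HeadSeriesRef):

```go
package main

import "fmt"

// seriesRef stands in for chunks.HeadSeriesRef in this sketch.
type seriesRef uint64

type head struct {
	// deleted maps each removed series to the WAL segment that was in use
	// when the series was removed from the head.
	deleted map[seriesRef]int
}

// pruneDeleted mirrors the step described above: once truncateWAL has
// removed all segments up to and including last, any series last seen in a
// segment at or before that point can be forgotten.
func (h *head) pruneDeleted(last int) {
	for ref, segment := range h.deleted {
		if segment <= last {
			delete(h.deleted, ref)
		}
	}
}

func main() {
	h := &head{deleted: map[seriesRef]int{1: 3, 2: 7}}
	h.pruneDeleted(5)      // segments <= 5 are gone, so series 1 can be dropped
	fmt.Println(h.deleted) // map[2:7]
}
```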
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In the past, every sample value was a float, so it was fine to call a
variable holding such a float "value" or "sample". With native
histograms, a sample might have a histogram value. And a histogram
value is still a value. Calling a float value just "value" or "sample"
or "V" is therefore misleading. Over the last few commits, I already
renamed many variables, but this cleans up a few more places where the
changes are more invasive.
Note that we do not attempt any renaming in the JSON APIs or in the
protobufs. That would be quite a disruption. However, internally, we
can name variables as we like, and we should go with the option that
avoids misunderstandings.
Signed-off-by: beorn7 <beorn@grafana.com>
* Use zeropool.Pool to workaround SA6002
I built a tiny library, https://github.com/colega/zeropool, to work
around the SA6002 staticcheck issue.
While searching GitHub for references to that SA6002 staticcheck issue,
one of the first results was Prometheus itself, with quite a lot of
ignores of it.
This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available.
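A hedged sketch of the difference, assuming zeropool's usual New/Get/Put shape (see github.com/colega/zeropool for the exact API): sync.Pool works on interface{} values, so putting a slice (a non-pointer) boxes it and allocates, which is exactly what staticcheck's SA6002 flags; a generic pool stores the slice directly.

```go
package main

import (
	"sync"

	"github.com/colega/zeropool"
)

// With sync.Pool, Put takes an interface{}. Storing a []byte (not a pointer)
// copies the slice header into an interface value, which allocates and is
// what staticcheck reports as SA6002.
var bufPool = sync.Pool{New: func() any { return make([]byte, 0, 1024) }}

// zeropool.Pool[T] is generic, so the slice is stored without boxing and no
// pointer indirection is needed at the call sites.
var zeroBufPool = zeropool.New(func() []byte { return make([]byte, 0, 1024) })

func main() {
	b := zeroBufPool.Get()
	b = append(b, "some bytes"...)
	zeroBufPool.Put(b[:0]) // no interface conversion, no SA6002

	x := bufPool.Get().([]byte)
	bufPool.Put(x[:0]) // this Put is what SA6002 complains about
}
```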
Also added a benchmark for HeadAppender Append/Commit when series
already exist, which IMO is one of the most common cases, as I didn't
find an existing one.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Improve BenchmarkHeadAppender with more cases
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* A little copying is better than a little dependency
https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix imports order
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add license header
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Copyright should be on one of the first 3 lines
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use require.Equal for testing
I don't depend on testify in my lib, but here we have it available.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Avoid flaky test
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Also use zeropool for pointsPool in engine.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Without this fix, if snapshots are enabled and the WBL goes missing
between restarts, the TSDB does not recognize that there are OOO
m-map chunks on disk, and we cannot query them until those chunks
are compacted into blocks.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
M-map chunks replayed on startup are discarded if there
was no WAL and no snapshot loaded, because there are no
series created in the Head that they can map to. So only
load m-map chunks from disk if either a snapshot was
loaded or there is a WAL on disk.
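A small, self-contained illustration of that guard; the function and parameter names are invented for the example and are not the actual Head code:

```go
package main

import "fmt"

// replayMmappedChunks decides whether m-mapped chunks found on disk should be
// loaded at startup: without a snapshot or a WAL there will be no series in
// the Head for those chunks to map to, so loading them would be wasted work
// and they would be discarded anyway.
func replayMmappedChunks(snapshotLoaded, walOnDisk bool, load func() int) int {
	if !snapshotLoaded && !walOnDisk {
		return 0
	}
	return load()
}

func main() {
	n := replayMmappedChunks(false, true, func() int { return 42 })
	fmt.Println("m-map chunks loaded:", n)
}
```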
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Adds `WALReplayConcurrency` as an option on tsdb `Options` and `HeadOptions`.
If it is unset or set to a value <= 0, then `GOMAXPROCS` is used, which matches the previous behaviour.
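A minimal usage sketch, assuming `tsdb.DefaultOptions()` as the usual way to obtain an Options value; `WALReplayConcurrency` itself is the field this change adds:

```go
package main

import "github.com/prometheus/prometheus/tsdb"

func main() {
	opts := tsdb.DefaultOptions()
	// WALReplayConcurrency bounds how many goroutines replay the WAL on
	// startup. Leaving it unset (or <= 0) keeps the previous behaviour of
	// using GOMAXPROCS workers.
	opts.WALReplayConcurrency = 4
	_ = opts
}
```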
Signed-off-by: Yuri Nikolic <durica.nikolic@grafana.com>
* Export single ith test histogram generation functions
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Do not set counter reset hint for non-gauge histograms individually
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
* Apply suggestions from code review
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
---------
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
* tsdb: make sharding function a parameter
Instead of relying on `labels.Hash()`, which may change, have the
caller pass in a shard function if required.
For most purposes `tsdb.Options.ShardFunc` is used, but the compactor
may be created independently so `NewLeveledCompactorWithChunkSize` also
takes a shard function parameter.
Regular Prometheus, which does not use block sharding, will have this
parameter as nil.
Rename the WithCache functions to WithOptions where they now have two or
more extra parameters.
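A brief sketch of how a caller might set the shard function described above, assuming ShardFunc takes a label set and returns a uint64 shard hash (signature assumed; regular Prometheus simply leaves it nil):

```go
package main

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	opts := tsdb.DefaultOptions()
	// ShardFunc lets the caller decide how series map to block shards
	// instead of the TSDB relying on labels.Hash(), whose result may change
	// between releases. Regular Prometheus leaves this nil because it does
	// not use block sharding.
	opts.ShardFunc = func(l labels.Labels) uint64 {
		// Illustrative: any hash that is stable across the caller's own
		// components works here.
		return l.Hash()
	}
	_ = opts
}
```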
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This is a bit more conservative than we could be. As long as a chunk
isn't the first in a block, we can be pretty sure that the previous
chunk won't disappear. However, the incremental gain of returning
NotCounterReset in these cases is probably very small and might not be
worth the code complications.
With this, we now also pay attention to an explicitly set counter
reset during ingestion. While the case doesn't show up in practice
yet, there could be scenarios where the metric source knows there was
a counter reset even if it might not be visible from the values in the
histogram. It is also useful for testing.
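As a sketch of what an explicitly set counter reset looks like from the ingestion side, using the model/histogram types as I understand them (the field values here are made up):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/histogram"
)

func main() {
	// A source that knows a counter reset happened, even when the bucket
	// values alone would not reveal it, can state that explicitly. With
	// this change, ingestion honours the hint instead of only re-deriving
	// the reset from the observed values.
	h := &histogram.Histogram{
		Schema:           0,
		Count:            4,
		Sum:              12.5,
		PositiveSpans:    []histogram.Span{{Offset: 0, Length: 2}},
		PositiveBuckets:  []int64{3, -2}, // delta-encoded: buckets 3 and 1
		CounterResetHint: histogram.CounterReset,
	}
	fmt.Println("explicit reset:", h.CounterResetHint == histogram.CounterReset)
}
```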
Signed-off-by: beorn7 <beorn@grafana.com>
Instead of relying on `labels.Hash()`, which may change, have the
caller pass in a shard function if required.
For most purposes `tsdb.Options.ShardFunc` is used, but the compactor
may be created independently so `NewLeveledCompactorWithChunkSize` also
takes a shard function parameter.
Regular Prometheus, which does not use block sharding, will have this
parameter as nil.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
When out-of-order is enabled, queries go through both Head and OOOHead,
and they both execute the same PostingsForMatchers call, as memSeries
are shared for both.
In some cases these calls can be heavy, and also frequent. We can
deduplicate those calls by using the PostingsForMatchers cache that we
already use for query sharding.
Using this cache means a newly appended series can be missing from
query results for the duration of the TTL.
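An illustrative sketch of the deduplication idea (not the actual PostingsForMatchers cache implementation): results are keyed by the matcher set and reused for a short TTL, so the Head and OOOHead sides of a query compute the postings only once, at the cost noted above that a just-appended series may be absent until the entry expires.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type cacheEntry struct {
	created time.Time
	refs    []uint64 // series references returned by PostingsForMatchers
}

// postingsCache memoises expensive postings lookups for a short TTL.
type postingsCache struct {
	mtx     sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
}

// get returns the cached result for key (a canonical encoding of the matcher
// set) if it is fresh enough, otherwise it computes and stores a new one.
func (c *postingsCache) get(key string, compute func() []uint64) []uint64 {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	if e, ok := c.entries[key]; ok && time.Since(e.created) < c.ttl {
		return e.refs
	}
	refs := compute()
	c.entries[key] = cacheEntry{created: time.Now(), refs: refs}
	return refs
}

func main() {
	c := &postingsCache{ttl: 10 * time.Second, entries: map[string]cacheEntry{}}
	calls := 0
	lookup := func() []uint64 { calls++; return []uint64{1, 2, 3} }

	// Head and OOOHead issue the same matcher set; the second call is
	// served from the cache.
	c.get(`{job="api"}`, lookup)
	c.get(`{job="api"}`, lookup)
	fmt.Println("compute calls:", calls) // 1
}
```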
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. TSDB now supports appending, querying, and WAL replay for this new type of histogram.
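A usage sketch against a TSDB appender, assuming the extended signature AppendHistogram(ref, labels, t, h, fh) where exactly one of the integer histogram h or float histogram fh is non-nil; the option name and field values below are assumptions for illustration:

```go
package main

import (
	"context"

	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	opts := tsdb.DefaultOptions()
	opts.EnableNativeHistograms = true // assumed option for allowing native-histogram ingestion
	db, err := tsdb.Open("data", nil, nil, opts, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	app := db.Appender(context.Background())
	fh := &histogram.FloatHistogram{
		Schema:          0,
		Count:           3,
		Sum:             7.5,
		PositiveSpans:   []histogram.Span{{Offset: 0, Length: 2}},
		PositiveBuckets: []float64{2, 1}, // float histograms carry absolute bucket counts
	}
	// The integer histogram argument stays nil when appending a float histogram.
	if _, err := app.AppendHistogram(0, labels.FromStrings("__name__", "request_duration"), 1000, nil, fh); err != nil {
		panic(err)
	}
	if err := app.Commit(); err != nil {
		panic(err)
	}
}
```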
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>