Commit graph

83 commits

Author SHA1 Message Date
György Krajcsovits 9dbd100a5e Refactor solution to not repeat code
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-09-20 15:54:00 +02:00
György Krajcsovits 96d03b6f46 Fix duplicate sample detection at chunks size limit
Before cutting a new XOR chunk because the chunk goes over the size
limit, check that the timestamp is in order and not equal to or older
than the latest sample in the old chunk.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-09-20 14:49:56 +02:00
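For readers skimming the log, here is a minimal Go sketch of the ordering check described in the commit above; all names (sample, series, append) are illustrative and do not mirror the actual Prometheus TSDB code.

```go
// Illustrative only: a toy series whose append() rejects duplicate or
// out-of-order timestamps before the size-limit cut is even considered.
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

type series struct {
	chunk   []sample
	maxSize int
}

func (s *series) append(smp sample) error {
	// The ordering/duplicate check runs first, so hitting the size limit
	// can no longer mask a duplicate sample.
	if n := len(s.chunk); n > 0 && smp.t <= s.chunk[n-1].t {
		return fmt.Errorf("out of order or duplicate sample at t=%d", smp.t)
	}
	if len(s.chunk) >= s.maxSize {
		s.chunk = s.chunk[:0] // cut a new chunk (mmapping etc. elided)
	}
	s.chunk = append(s.chunk, smp)
	return nil
}

func main() {
	s := &series{maxSize: 2}
	fmt.Println(s.append(sample{t: 1, v: 1})) // <nil>
	fmt.Println(s.append(sample{t: 1, v: 2})) // duplicate timestamp -> error
}
```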
Justin Lei 8ef7dfdeeb
Add a chunk size limit in bytes (#12054)

This creates a hard cap for XOR chunks of 1024 bytes.

The limit for histogram chunks is also 1024 bytes, but it is a soft limit: a histogram has a dynamic size, and even a single one could be larger than 1024 bytes.

This also avoids cutting new histogram chunks if the existing chunk still has fewer than 10 histograms. In that way, we accept "jumbo chunks" in order to have at least 10 histograms in a chunk, allowing compression to kick in.

Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-08-24 15:21:17 +02:00
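A hedged sketch of the cut decision described above; the constants match the numbers in the commit message, but the functions and their call sites are made up for illustration.

```go
package main

import "fmt"

const (
	maxBytesPerChunk      = 1024 // hard cap for XOR chunks, soft cap for histogram chunks
	minHistogramsPerChunk = 10   // accept "jumbo chunks" until at least this many histograms
)

func shouldCutXOR(chunkBytes int) bool {
	return chunkBytes >= maxBytesPerChunk
}

func shouldCutHistogram(chunkBytes, numHistograms int) bool {
	// Histogram chunks may exceed the byte limit until they hold enough
	// histograms for compression to kick in.
	return chunkBytes >= maxBytesPerChunk && numHistograms >= minHistogramsPerChunk
}

func main() {
	fmt.Println(shouldCutXOR(2048))           // true
	fmt.Println(shouldCutHistogram(2048, 5))  // false: keep the jumbo chunk
	fmt.Println(shouldCutHistogram(2048, 12)) // true
}
```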
beorn7 aa82fe198f tsdb: Fix histogram validation
So far, `ValidateHistogram` would not detect if the count did not
include the count in the zero bucket. This commit fixes the problem
and updates all the tests that have been undetected offenders so far.

Note that this problem would only ever create false negatives, so we
never falsely rejected storing a histogram because of it.

On the other hand, `ValidateFloatHistogram` has been too strict about
the count being at least as large as the sum of the counts in all the
buckets. Float precision issues could create false positives here: for
products of PromQL evaluations, it's actually quite hard to put an
upper limit on the floating point imprecision. Users could produce the
weirdest expressions, maxing out float precision problems. Therefore,
this commit simply removes that particular check from
`ValidateFloatHistogram`.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-08-22 23:04:01 +02:00
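A minimal sketch of the integer-histogram check the commit describes (the total count must cover the zero bucket); the struct below is a simplified stand-in, not the real histogram type or the actual ValidateHistogram code.

```go
package main

import (
	"errors"
	"fmt"
)

type intHistogram struct {
	Count           uint64
	ZeroCount       uint64
	PositiveBuckets []uint64
	NegativeBuckets []uint64
}

func validate(h intHistogram) error {
	sum := h.ZeroCount // the zero bucket must be part of the total count
	for _, c := range h.PositiveBuckets {
		sum += c
	}
	for _, c := range h.NegativeBuckets {
		sum += c
	}
	if h.Count < sum {
		return errors.New("histogram count is lower than the sum of all bucket counts")
	}
	return nil
}

func main() {
	// Count omits the zero bucket -> now rejected.
	fmt.Println(validate(intHistogram{Count: 3, ZeroCount: 2, PositiveBuckets: []uint64{3}}))
}
```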
Oleg Zaytsev cd7d0b69a2
Check nil err first when committing (#12625)
The most common case is to have a nil error when appending series, so
let's check that first instead of checking the 3 error types first.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2023-08-01 14:04:45 +02:00
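Roughly, the reordering looks like the following sketch; the sentinel errors here are made up and merely stand in for the real storage errors.

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errOutOfOrder = errors.New("out of order sample")
	errDuplicate  = errors.New("duplicate sample for timestamp")
)

func handleAppendErr(err error) string {
	switch {
	case err == nil:
		// Most appends succeed, so test the common case before the sentinels.
		return "ok"
	case errors.Is(err, errOutOfOrder), errors.Is(err, errDuplicate):
		return "soft failure, counted and skipped"
	default:
		return "hard failure"
	}
}

func main() {
	fmt.Println(handleAppendErr(nil))
	fmt.Println(handleAppendErr(errDuplicate))
}
```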
Łukasz Mierzwa 3c80963e81
Use a linked list for memSeries.headChunk (#11818)
Currently memSeries holds a single head chunk in-memory and a slice of mmapped chunks.
When append() is called on memSeries it might decide that a new headChunk is needed for the given append() call.
If that happens, it will first mmap the existing head chunk, and only after that will it create a new, empty headChunk and continue appending
the sample to it.

Since appending samples takes a write lock on memSeries, no other read or write can happen until the append is completed.
When we have an append() that must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself uses a lock as it needs to be serialised, which means that the more chunks there are to mmap, the longer each chunk might wait
to be mmapped.
If there are enough chunks that require mmapping, some memSeries will be locked long enough that it starts affecting
queries and scrapes.
Queries might time out, since by default they have a 2 minute timeout set.
Scrapes will be blocked inside the append() call, which means there will be gaps between samples. This will first affect range queries
and calls using rate() and such, since the time range requested in the query might have too few samples to calculate anything.

To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, yet-to-be-mmapped chunks are linked to it.
Mmapping is done on a schedule by iterating over all memSeries one by one. Thanks to this we control when mmapping is done, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-07-31 11:10:24 +02:00
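A rough sketch of the linked-list layout described above; the real memSeries has many more fields and its own locking, so this only shows the shape of the change.

```go
package main

import "fmt"

type headChunk struct {
	samples []float64
	prev    *headChunk // older chunk that still needs to be mmapped
}

type memSeries struct {
	head *headChunk // 'open' chunk receiving new samples
}

// cutNewHeadChunk links the old head instead of mmapping it inline,
// keeping the append path free of blocking mmap work.
func (s *memSeries) cutNewHeadChunk() {
	s.head = &headChunk{prev: s.head}
}

// mmapPending is run on a schedule; it walks the list and flushes
// everything except the open head chunk (the actual mmapping is elided).
func (s *memSeries) mmapPending() int {
	flushed := 0
	for c := s.head.prev; c != nil; c = c.prev {
		flushed++ // mmap c.samples here
	}
	s.head.prev = nil
	return flushed
}

func main() {
	s := &memSeries{head: &headChunk{}}
	s.cutNewHeadChunk()
	s.cutNewHeadChunk()
	fmt.Println(s.mmapPending()) // 2 older chunks flushed off the append path
}
```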
George Krajcsovits 6cd2d1621f
Hide histogram chunk append and reset header internals (#12352)
tsdb: Hide histogram chunk append and reset header internals

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
2023-07-26 15:08:16 +02:00
Justin Lei e73d8b2084 Also pass chunkOpts into appendPreprocessor
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-05-25 13:37:18 -07:00
Justin Lei 4c4454e4c9 Group args to append to memSeries in chunkOpts
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-05-25 13:12:46 -07:00
Justin Lei 89af351730
Remove samplesPerChunk from memSeries (#12390)
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-05-25 11:18:41 +02:00
Callum Styan 0d2108ad79
[tsdb] re-implement WAL watcher to read via a "notification" channel (#11949)
* WIP implement WAL watcher reading via notifications over a channel from
the TSDB code

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Notify via the head appender's Commit (after all WAL logging has finished) rather
than on each WAL Log call

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix misspelled Notify plus add a metric for dropped Write notifications

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Update tests to handle new notification pattern

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* this test maybe needs more time on windows?

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* does this test need more time on windows as well?

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* read timeout is already a time.Duration

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* remove mistakenly committed benchmark data files

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* address some review feedback

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* fix missed changes from previous commit

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix issues from wrapper function

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* try fixing race condition in test by allowing tests to overwrite the
read ticker timeout instead of calling the Notify function

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* fix linting

Signed-off-by: Callum Styan <callumstyan@gmail.com>

---------

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2023-05-15 12:31:49 -07:00
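A hedged sketch of the notification pattern described in this PR: the head appender signals once per Commit, and the watcher still wakes up on a read ticker if no notification arrives. Names and the ticker interval are illustrative, not the actual WAL watcher API.

```go
package main

import (
	"fmt"
	"time"
)

type watcher struct {
	notify chan struct{}
}

// Notify is non-blocking; dropping a notification is acceptable because
// the watcher also wakes up on a timer.
func (w *watcher) Notify() {
	select {
	case w.notify <- struct{}{}:
	default:
	}
}

func (w *watcher) run(stop <-chan struct{}) {
	ticker := time.NewTicker(15 * time.Millisecond) // read-timeout fallback
	defer ticker.Stop()
	for {
		select {
		case <-w.notify:
			fmt.Println("woken by Commit, reading new WAL data")
		case <-ticker.C:
			fmt.Println("woken by timeout, reading new WAL data")
		case <-stop:
			return
		}
	}
}

func main() {
	w := &watcher{notify: make(chan struct{}, 1)}
	stop := make(chan struct{})
	go w.run(stop)
	w.Notify() // called from the head appender's Commit
	time.Sleep(50 * time.Millisecond)
	close(stop)
}
```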
Björn Rabenstein 37fe9b89dc
Merge pull request #12055 from leizor/leizor/prometheus/issues/12009
Adjust samplesPerChunk from 120 to 220
2023-05-10 14:45:12 +02:00
beorn7 5b53aa1108 style: Replace else if cascades with switch
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.

The exceptions that I have found in our codebase are just these two:

* The `if else` is followed by an additional statement before the next
  condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require tagging the `for`
  loop, which probably tips the balance.

Why are `switch` statements more readable?

For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.

I'm sure the aforementioned wise coders can list even more reasons.

In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:22:31 +02:00
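For illustration, the same dispatch written both ways (this example is not taken from the Prometheus code base).

```go
package main

import "fmt"

func describeElseIf(n int) string {
	if n < 0 {
		return "negative"
	} else if n == 0 {
		return "zero"
	} else if n < 10 {
		return "small"
	}
	return "large"
}

// The switch form keeps every condition at the same indentation level,
// following the natural flow of going down a list of conditions.
func describeSwitch(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n == 0:
		return "zero"
	case n < 10:
		return "small"
	default:
		return "large"
	}
}

func main() {
	fmt.Println(describeElseIf(3), describeSwitch(3))
}
```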
Justin Lei 052993414a Add storage.tsdb.samples-per-chunk flag
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-04-13 15:59:49 -07:00
Arve Knudsen cca7178a12 tsdb: Improve a couple of histogram documentation comments
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-04-07 18:06:27 +02:00
Justin Lei c770ba8047 Add comment linking to PR
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-04-06 09:19:32 -07:00
Justin Lei 79db04eb12 Adjust samplesPerChunk from 120 to 220
Signed-off-by: Justin Lei <justin.lei@grafana.com>
2023-04-06 09:19:32 -07:00
Oleg Zaytsev 6e2905a4d4
Use zeropool.Pool to workaround SA6002 (#12189)
* Use zeropool.Pool to workaround SA6002

I built a tiny library, https://github.com/colega/zeropool, to work
around the SA6002 staticcheck issue.

While searching for references to that SA6002 staticcheck issue on
GitHub, one of the first results was Prometheus itself, with quite a
lot of ignores of it.

This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available.

Also added a benchmark for HeadAppender Append/Commit when the series
already exists, which is one of the most usual cases IMO, as I didn't
find an existing one.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Improve BenchmarkHeadAppender with more cases

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* A little copying is better than a little dependency

https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Fix imports order

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Add license header

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Copyright should be on one of the first 3 lines

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Use require.Equal for testing

I don't depend on testify in my lib, but here we have it available.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Avoid flaky test

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Also use zeropool for pointsPool in engine.go

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

---------

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2023-03-29 20:34:34 +01:00
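A sketch of the problem SA6002 flags and the usual workaround (not the actual zeropool implementation): putting a plain slice into sync.Pool boxes the slice header into an interface value and allocates on every Put, so the pool stores a pointer instead. zeropool.Pool[T] wraps this kind of pattern so call sites can keep handling plain values.

```go
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 1024)
		return &b // store a pointer: converting *[]byte to any does not allocate
	},
}

func process(data string) int {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp) // the same pointer goes back, no per-Put boxing

	*bp = append((*bp)[:0], data...)
	return len(*bp)
}

func main() {
	fmt.Println(process("sample data"))
}
```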
Abhijit Mukherjee 8f6d5dcd45
Fix: getting rid of EncOOOXOR chunk encoding (#12111)
Signed-off-by: mabhi <abhijit.mukherjee@infracloud.io>
2023-03-16 15:53:47 +05:30
Vishal N 96ba6831ae Observe delta in seconds prometheus_tsdb_sample_ooo_delta
Signed-off-by: Vishal Nadagouda <vishalmn1996@gmail.com>
2023-02-21 18:55:09 +05:30
beorn7 1cfc8f65a3 histograms: Return actually useful counter reset hints
This is a bit more conservative than we could be. As long as a chunk
isn't the first in a block, we can be pretty sure that the previous
chunk won't disappear. However, the incremental gain of returning
NotCounterReset in these cases is probably very small and might not be
worth the code complications.

With this, we now also pay attention to an explicitly set counter
reset during ingestion. While the case doesn't show up in practice
yet, there could be scenarios where the metric source knows there was
a counter reset even if it might not be visible from the values in the
histogram. It is also useful for testing.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-01-25 16:57:21 +01:00
beorn7 57c18420ab histograms: General readability tweaks
- Adjust doc comments to go1.19 style.
- Break down some overly long lines.
- Minor doc comment tweaks and fixes.
- Some renaming.

Some rationales for the last point:

I have renamed “interjections” to “inserts”, mostly because it is
shorter, and the word shows up a lot by now (and the concept is
cryptic enough to not obfuscate it even more with abbreviations).

I have also tried to find more descriptive naming for the “compare
spans” functions.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-01-19 13:26:42 +01:00
Mingjie Shao 78d3c4e823 tsdb: Fixed typo in Histogram
Signed-off-by: Mingjie Shao <com.jerryshao@jerryshao.com>
2023-01-16 18:13:45 +08:00
Ganesh Vernekar cb2be6e62f
Merge pull request #11779 from codesome/memseries-ooo
tsdb: Only initialise out-of-order fields when required
2023-01-16 10:58:05 +05:30
Ganesh Vernekar 38fa151a7c
tsdb: Only initialise out-of-order fields when required
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-01-12 20:29:16 +05:30
beorn7 6dcd03dbf3 tsdb: Add integer gauge histogram support
This follows what #11783 has done for float gauge histograms.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-01-11 13:28:43 +01:00
Ganesh Vernekar 609b12d719
tsdb: Support gauge float histogram with recoding of chunk
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2023-01-10 17:48:09 +05:30
Ganesh Vernekar 2820e327db
tsdb: Add staleness handling for FloatHistogram
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-12-28 14:48:39 +05:30
Ganesh Vernekar e555469ba1
tsdb: Remove isHistogramSeries from memSeries
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-12-28 14:31:55 +05:30
Marc Tudurí 9474610baf
Support FloatHistogram in TSDB (#11522)
Extends the Appender.AppendHistogram function to accept the FloatHistogram. TSDB supports appending, querying, and WAL replay for this new type of histogram.

Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-12-28 14:25:07 +05:30
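A hedged illustration of the extended appender shape: one method can now receive either an integer histogram or a float histogram, with the unused argument left nil. The types below are placeholders, not the real prometheus model or storage packages.

```go
package main

import "fmt"

type Histogram struct{ Count uint64 }
type FloatHistogram struct{ Count float64 }

type appender interface {
	// Exactly one of h and fh is expected to be non-nil.
	AppendHistogram(ref uint64, ts int64, h *Histogram, fh *FloatHistogram) (uint64, error)
}

type printAppender struct{}

func (printAppender) AppendHistogram(ref uint64, ts int64, h *Histogram, fh *FloatHistogram) (uint64, error) {
	switch {
	case h != nil:
		fmt.Println("integer histogram, count:", h.Count)
	case fh != nil:
		fmt.Println("float histogram, count:", fh.Count)
	}
	return ref, nil
}

func main() {
	var app appender = printAppender{}
	app.AppendHistogram(1, 1000, &Histogram{Count: 10}, nil)
	app.AppendHistogram(1, 2000, nil, &FloatHistogram{Count: 10.5})
}
```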
Bryan Boreham 543c318ec2 Update package tsdb for new labels.Labels type
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00
Bryan Boreham 6bdecf377c
Switch from 'sanity' to more inclusive language (#9376)
* Switch from 'sanity' to more inclusive language

"Removing ableist language in code is important; it helps to create and
maintain an environment that welcomes all developers of all backgrounds,
while emphasizing that we as developers select the most articulate,
precise, descriptive language we can rather than relying on metaphors.

The phrase sanity check is ableist, and unnecessarily references mental
health in our code bases. It denotes that people with mental illnesses
are inferior, wrong, or incorrect, and the phrase sanity continues to be
used by employers and other individuals to discriminate against these
people."

From https://gist.github.com/seanmhanson/fe370c2d8bd2b3228680e38899baf5cc

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-11-28 17:09:18 +00:00
Ganesh Vernekar 42633bd05c
Merge pull request #11485 from t00350320/prometheus-office
GetRefByhash() will query a label's ref with a hash value rather than lset.Hash().
2022-11-16 15:09:49 +01:00
tanghengjian 982007ecab
GetRefByhash will query a label's ref with a hash value rather than lset.Hash().
Signed-off-by: tanghengjian <1040104807@qq.com>
2022-11-16 14:13:59 +01:00
Jesus Vazquez 3362bf6d79
Fix merge conflicts
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-10-11 22:53:37 +05:30
Jesus Vazquez e934d0f011 Merge 'main' into sparsehistogram
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
2022-10-05 22:14:49 +02:00
Ganesh Vernekar d0a6488c74
Update metrics for histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-10-03 13:48:59 +05:30
Bryan Boreham d166da7b59
tsdb: stop saving a copy of last 4 samples in memSeries (#11296)
* TSDB chunks: remove race between writing and reading

Because the data is stored as a bit-stream, the last byte in the stream
could change if the stream is appended to after an Iterator is obtained.
Copy the last byte when the Iterator is created, so we don't have to
read it later.

Clarify in comments that concurrent Iterator and Appender are allowed,
but the chunk must not be modified while an Iterator is created.
(This was already the case, in order to copy the bstream slice header.)

* TSDB: stop saving last 4 samples in memSeries

This extra copy of the last 4 samples was introduced to avoid a race
condition between reading the last byte of the chunk and writing to it.

But now that we have fixed that by having `bstreamReader` copy the last
byte, we don't need to copy the last 4 samples.

This change saves 56 bytes per series, which is very worthwhile when
you have millions or tens of millions of series.

* TSDB: tidy up stopIterator re-use

Previous changes have left this code duplicating some lines; pull
them out to a separate function and tidy up.

* TSDB head_test: stop checking when iterators are wrapped

The behaviour has changed so chunk iterators are only wrapped when
transaction isolation requires them to stop short of the end.
This makes tests fail which are checking the type.

Tests should check the observable behaviour, not the type.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-27 19:32:05 +05:30
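A simplified sketch of the race fix described in the first bullet: the iterator copies the last, still-mutable byte of the bit stream when it is created, so a concurrent Appender can keep writing. Field names are made up and do not match the real bstream code.

```go
package main

import "fmt"

type bstream struct {
	stream []byte
}

type bstreamReader struct {
	stream  []byte
	last    byte // private copy of the final, partially written byte
	lastIdx int
}

func newReader(b *bstream) bstreamReader {
	n := len(b.stream)
	return bstreamReader{
		stream:  b.stream[:n-1], // prefix that appenders no longer touch
		last:    b.stream[n-1],  // copied so later appends can't race with reads
		lastIdx: n - 1,
	}
}

func (r bstreamReader) byteAt(i int) byte {
	if i == r.lastIdx {
		return r.last
	}
	return r.stream[i]
}

func main() {
	b := &bstream{stream: []byte{0xAA, 0x0F}}
	r := newReader(b)
	b.stream[1] = 0xFF                // the appender keeps mutating the last byte
	fmt.Printf("%#x\n", r.byteAt(1))  // the reader still sees 0xf
}
```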
Bryan Boreham ff00dee262
tsdb: turn off transaction isolation for head compaction (#11317)
* tsdb: add a basic test for read/write isolation

* tsdb: store the min time with isolationAppender
So that we can see when appending has moved past a certain point in time.

* tsdb: allow RangeHead to have isolation disabled
This will be used for head compaction.

* tsdb: do head compaction with isolation disabled
This saves a lot of work tracking appends done while compaction is ongoing.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-09-27 19:31:23 +05:30
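Loosely, the shape of the change is a reader that can opt out of isolation tracking when it is only used for compaction; everything below is made up for illustration, not the real RangeHead code.

```go
package main

import "fmt"

type rangeHead struct {
	minT, maxT        int64
	isolationDisabled bool
}

func newRangeHead(minT, maxT int64, disableIsolation bool) *rangeHead {
	return &rangeHead{minT: minT, maxT: maxT, isolationDisabled: disableIsolation}
}

func (r *rangeHead) read() {
	if r.isolationDisabled {
		// Compaction reads the whole range anyway, so tracking in-flight
		// appends buys nothing and can be skipped.
		fmt.Println("compaction read: skip append tracking")
		return
	}
	fmt.Println("query read: honour the isolation state at read time")
}

func main() {
	newRangeHead(0, 1000, true).read()  // head compaction
	newRangeHead(0, 1000, false).read() // regular query
}
```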
Bryan Boreham d0607435a2
tsdb: remove chunkRange and oooCapMax from memSeries (#11288)
* tsdb: remove chunkRange from memSeries

chunkRange is the (oddly-named) configured duration for the head block.

We don't need a copy of this value per series. Pass it down where
required, and remove the copy.

The value in `Head` is only updated in `resetInMemoryState()`, which
also discards all `memSeries`.

* tsdb: remove oooCapMax from memSeries

oooCapMax is the configured maximum capacity for an out-of-order chunk.

Storing it per-series uses extra memory, and has surprising behaviour
if users change the value in config - series created before the change
will keep their old value.

Instead, pass it down where required, and remove the per-series value.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-27 13:52:22 +05:30
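A sketch of the refactor's shape: head-level configuration is passed into the append path per call instead of being copied into every memSeries. Field names and the cut logic are illustrative, not the real tsdb structs.

```go
package main

import "fmt"

type headOptions struct {
	chunkRange int64 // duration (ms) covered by one head chunk
	oooCapMax  int   // max capacity of an out-of-order chunk
}

type memSeries struct {
	// chunkRange and oooCapMax used to be duplicated here for every series,
	// costing memory and going stale when the configuration changed.
	nextAt int64
}

// append now receives the options with each call, so every series always
// sees the currently configured values.
func (s *memSeries) append(t int64, opts headOptions) {
	if s.nextAt == 0 {
		s.nextAt = t + opts.chunkRange // schedule the next chunk cut
	}
	if t >= s.nextAt {
		fmt.Println("cut new chunk at", t)
		s.nextAt = t + opts.chunkRange
	}
}

func main() {
	s := &memSeries{}
	opts := headOptions{chunkRange: 2 * 60 * 60 * 1000, oooCapMax: 32}
	s.append(0, opts)
	s.append(opts.chunkRange, opts) // prints "cut new chunk at 7200000"
}
```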
Jesus Vazquez c1b669bf9b
Add out-of-order sample support to the TSDB (#11075)
* Introduce out-of-order TSDB support

This implementation is based on this design doc:
https://docs.google.com/document/d/1Kppm7qL9C-BJB1j6yb6-9ObG3AbdZnFUBYPNNWwDBYM/edit?usp=sharing

This commit adds support for accepting out-of-order ("OOO") samples into the TSDB
up to a configurable time allowance. If OOO is enabled, overlapping querying
is automatically enabled.

Most of the additions have been borrowed from
https://github.com/grafana/mimir-prometheus/
Here is the list of the original commits cherry-picked
from mimir-prometheus into this branch:
- 4b2198d7ec
- 2836e5513f
- 00b379c3a5
- ff0dc75758
- a632c73352
- c6f3d4ab33
- 5e8406a1d4
- abde1e0ba1
- e70e769889
- df59320886

Co-authored-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Dieter Plaetinck <dieter@grafana.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* gofumpt files

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Add license header to missing files

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix OOO tests due to existing chunk disk mapper implementation

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix truncate int overflow

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Add Sync method to the WAL and update tests

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* remove useless sync

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Update minOOOTime after truncating Head

* Update minOOOTime after truncating Head

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix lint

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Add a unit test

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Load OutOfOrderTimeWindow only once per appender

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix OOO Head LabelValues and PostingsForMatchers

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix replay of OOO mmap chunks

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Remove unnecessary err check

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Prevent panic with ApplyConfig

Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Run OOO compaction after restart if there is OOO data from WBL

Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Apply Bartek's suggestions

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Refactor OOO compaction

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Address comments and TODOs

- Added a comment explaining why we need the allow overlapping
  compaction toggle
- Clarified TSDBConfig OutOfOrderTimeWindow doc
- Added an owner to all the TODOs in the code

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Run go format

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix remaining review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Change wbl reference when truncating ooo in TestHeadMinOOOTimeUpdate

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>

* Fix TestWBLAndMmapReplay test failure on windows

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Address most of the feedback

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Refactor the block meta for out of order

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix windows error

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Ganesh Vernekar 15064823+codesome@users.noreply.github.com
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2022-09-20 22:35:50 +05:30
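A minimal sketch of the acceptance rule the design introduces: a sample older than the newest in-order sample is still accepted if it falls within the configured out-of-order time window. Names and units are illustrative.

```go
package main

import "fmt"

type head struct {
	maxTime             int64 // newest in-order timestamp seen so far
	oooTimeWindowMillis int64 // configured out-of-order allowance
}

func (h *head) accept(t int64) bool {
	if t >= h.maxTime {
		h.maxTime = t
		return true // regular in-order append
	}
	// Out-of-order: allowed only inside the configured time window.
	return h.oooTimeWindowMillis > 0 && t >= h.maxTime-h.oooTimeWindowMillis
}

func main() {
	h := &head{maxTime: 100_000, oooTimeWindowMillis: 30_000}
	fmt.Println(h.accept(90_000)) // true: within the 30s window
	fmt.Println(h.accept(50_000)) // false: too old
}
```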
Ganesh Vernekar 2474c6fb2c
Error on amending histograms on append (#11308)
* Error on amending histograms on append

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Rename Matches to Equals

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-19 13:10:30 +05:30
Ganesh Vernekar d354f20c2a
Add a feature flag to control native histogram ingestion (#11253)
* Add runtime config to control native histogram ingestion

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Make the config into a CLI flag

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-14 17:38:34 +05:30
Julien Pivotto ec6c1f17d1
Update dependencies (#11287)
Updating dependencies following CI changes and the move to Go 1.19

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2022-09-09 13:28:55 +02:00
Ganesh Vernekar f540c1dbd3
Add support for histograms in WAL checkpointing (#11210)
* Add support for histograms in WAL checkpointing

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 17:38:36 +05:30
Ganesh Vernekar d209a29a5b
Add unit test for histogram append and various querying scenarios (#11194)
* Add unit test for histogram append and various querying scenarios

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* make lint happy

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 15:35:03 +05:30
Łukasz Mierzwa 3196c98bc2
Reduce memSeries memory usage by decoupling metadata (#11152)
Metadata was added recently but doesn't seem to be used much, at least as far as I could identify.
Yet it's part of the memSeries struct and so, even when empty, takes 48 bytes,
which is a lot given that without it memSeries requires 224 bytes.
This change turns it into a pointer on the struct that gets set only when metadata is actually set for the given series.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-08-17 15:32:28 +05:30
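A sketch of the memory-saving idea: the metadata struct becomes a pointer that is only allocated when metadata is actually set. Sizes and field names below are illustrative, not the real memSeries.

```go
package main

import (
	"fmt"
	"unsafe"
)

type metadata struct {
	Type string
	Unit string
	Help string
}

type memSeriesBefore struct {
	ref  uint64
	meta metadata // embedded: always occupies space, even when empty
}

type memSeriesAfter struct {
	ref  uint64
	meta *metadata // nil for the (common) series without metadata
}

func main() {
	fmt.Println(unsafe.Sizeof(memSeriesBefore{})) // paid by every series
	fmt.Println(unsafe.Sizeof(memSeriesAfter{}))  // only a pointer per series
}
```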
beorn7 c9fd3c235d Merge branch 'main' into sparsehistogram 2022-08-10 17:54:37 +02:00
Levi Harrison 77a7af4461
Add histogram validation (#11052)
* Add histogram validation

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Correct negative offset validation

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Address review comments

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Validation benchmark

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Add more checks

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Attempt to fix tests

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Fix stuff

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-29 09:52:49 -05:00
Levi Harrison cb8582637a
Implement rollback for histograms (#11071)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-29 14:18:53 +05:30