Commit graph

578 commits

Author SHA1 Message Date
Ganesh Vernekar d0a6488c74
Update metrics for histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-10-03 13:48:59 +05:30
Ganesh Vernekar 758e29258b
Add/Improve unit tests for compaction with histogram Part 2 (#11343)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-23 14:01:10 +05:30
Ganesh Vernekar 2474c6fb2c
Error on amending histograms on append (#11308)
* Error on amending histograms on append

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Rename Matches to Equals

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-19 13:10:30 +05:30
Björn Rabenstein 7ad36505d5
tsdb: Update comment about a possible space optimization (#11303)
See also #11195 for the detailed reasoning.

Signed-off-by: beorn7 <beorn@grafana.com>

Signed-off-by: beorn7 <beorn@grafana.com>
2022-09-15 13:11:57 +05:30
Ganesh Vernekar d354f20c2a
Add a feature flag to control native histogram ingestion (#11253)
* Add runtime config to control native histogram ingestion

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Make the config into a CLI flag

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-14 17:38:34 +05:30
Ganesh Vernekar b2d01cbc57
Remove unnecessary code in encoding/decoding histograms (#11252)
* Remove unnecessary code in encoding/decoding histograms

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-09-13 19:30:54 +05:30
Ganesh Vernekar 8f755f8f35
Extend createHead in tests to support histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 20:18:02 +05:30
Ganesh Vernekar f540c1dbd3
Add support for histograms in WAL checkpointing (#11210)
* Add support for histograms in WAL checkpointing

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 17:38:36 +05:30
Ganesh Vernekar 6383994f3e
Improve WAL/mmap chunks test for histograms (#11208)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 16:21:32 +05:30
Ganesh Vernekar d209a29a5b
Add unit test for histogram append and various querying scenarios (#11194)
* Add unit test for histogram append and various querying scenarios

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* make lint happy

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix tests

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-29 15:35:03 +05:30
Ganesh Vernekar 0f4e5196c4
Implement vertical compaction for native histograms (#11184)
* Implement vertical compaction for native histograms

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix typo

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-08-22 19:04:39 +05:30
beorn7 c9fd3c235d Merge branch 'main' into sparsehistogram 2022-08-10 17:54:37 +02:00
Xiaochao Dong 1078081aec
Fix race condition when updating lastSeriesID during loading chunk snapshot (#11099)
Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
2022-08-04 13:39:14 +05:30
Levi Harrison 77a7af4461
Add histogram validation (#11052)
* Add histogram validation

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Correct negative offset validation

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Address review comments

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Validation benchmark

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Add more checks

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Attempt to fix tests

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Fix stuff

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-29 09:52:49 -05:00
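The validation introduced in the commit above is mostly structural bookkeeping. Below is a minimal sketch of one such consistency check, using local stand-in types rather than the real model/histogram ones, and only as an illustration of the idea (the upstream code checks more, e.g. counts and offsets): the spans on each side must account for exactly as many buckets as were supplied.

```go
package main

import "fmt"

// Stand-ins for the sketch; the real validation operates on the
// model/histogram package's types and performs additional checks.
type span struct {
	Offset int32  // distance from the previous span
	Length uint32 // number of consecutive buckets in this span
}

type hist struct {
	PositiveSpans, NegativeSpans     []span
	PositiveBuckets, NegativeBuckets []int64
}

func validate(h *hist) error {
	if err := checkSpans(h.PositiveSpans, len(h.PositiveBuckets)); err != nil {
		return fmt.Errorf("positive side: %w", err)
	}
	if err := checkSpans(h.NegativeSpans, len(h.NegativeBuckets)); err != nil {
		return fmt.Errorf("negative side: %w", err)
	}
	return nil
}

// checkSpans verifies that the spans cover exactly numBuckets buckets.
func checkSpans(spans []span, numBuckets int) error {
	var covered int
	for _, s := range spans {
		covered += int(s.Length)
	}
	if covered != numBuckets {
		return fmt.Errorf("spans cover %d buckets, but %d bucket counts were given", covered, numBuckets)
	}
	return nil
}
```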
Levi Harrison cb8582637a
Implement rollback for histograms (#11071)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-29 14:18:53 +05:30
Bryan Boreham 00ec720c29
tsdb: extract functions to encode and decode labels (#11045)
* tsdb/record: Extract functions to encode and decode labels

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* tsdb: make use of Encode/Decode Labels

Simplify the code by re-using routines from tsdb/record.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-07-26 20:12:00 +05:30
Paschalis Tsilias a0f7c31c26
Fix type byte of WAL metadata records in docs (#11035)
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
2022-07-19 18:22:02 +05:30
Paschalis Tsilias d1122e0743
Introduce TSDB changes for appending metadata to the WAL (#10972)
* Append metadata to the WAL

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Remove extra whitespace; Reword some docstrings and comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use RLock() for hasNewMetadata check

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use single byte for metric type in RefMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Update proposed WAL format for single-byte type metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Implement a MetadataAppender interface for the Agent

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Address first round of review comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Amend description of metadata in wal.md

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Correct key used to retrieve metadata from cache

When we're setting metadata entries in the scrapeCache, we're using the
p.Help(), p.Unit(), p.Type() helpers, which retrieve the series name and
use it as the cache key. When checking for cache entries though, we used
p.Series() as the key, which included the metric name _with_ its labels.
That meant that we were never actually hitting the cache. We're fixing
this by using the __name__ internal label for correctly getting the
cache entries after they've been set by setHelp(), setType() or
setUnit().

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Put feature behind a feature flag

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix AppendMetadata docstring

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reorder WAL format document

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Change error message of AppendMetadata; Fix access of s.meta in AppendMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reuse temporary buffer in Metadata encoder

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Only keep latest metadata for each refID during checkpointing

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix test that's referencing decoding metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Avoid creating metadata block if no new metadata are present

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add tests for corrupt metadata block and relevant record type

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix CR comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Extract logic about changing metadata in an anonymous function

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Implement new proposed WAL format and amend relevant tests

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use 'const' for metadata field names

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Apply metadata to head memSeries in Commit, not in AppendMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add docstring and rename extracted helper in scrape.go

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add tests for tsdb-related cases

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linter issues vol1

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linter issues vol2

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix Windows test by closing WAL reader files

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Use switch instead of two if statements in metadata decoding

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix review comments around TestMetadata* tests

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Add code for replaying WAL; test correctness of in-memory data after a replay

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Remove scrape-loop related code from PR

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Address first round of comments

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Simplify tests by sorting slices before comparison

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix test to use separate transactions

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Empty out buffer and record slices after encoding latest metadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix linting issue

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Update calculation for DroppedMetadata metric

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Rename MetadataAppender interface and AppendMetadata method to MetadataUpdater/UpdateMetadata

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Reuse buffer when encoding latest metadata for each series

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Fix review comments; Check all returned error values using two helpers

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Simplify use of helpers

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>

* Satisfy linter

Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
2022-07-19 10:58:52 +02:00
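For orientation, here is a hedged sketch of what a metadata record encoder along the lines of the commit above might look like. The field layout and record-type value are hypothetical (the authoritative format lives in tsdb/docs/format/wal.md and tsdb/record); the sketch only illustrates two points from the commit: a single byte for the metric type and a reused temporary buffer.

```go
package main

import "encoding/binary"

// refMetadata mirrors the idea of one metadata entry per series ref.
// The field layout here is illustrative, not the actual WAL format.
type refMetadata struct {
	Ref        uint64
	MetricType byte // single byte, per the commit
	Unit, Help string
}

const recMetadata byte = 0xFF // placeholder record-type byte, not the real value

// encodeMetadata appends one record to buf, reusing buf's backing array
// across calls instead of allocating a fresh slice each time.
func encodeMetadata(ms []refMetadata, buf []byte) []byte {
	buf = append(buf[:0], recMetadata)
	var tmp [binary.MaxVarintLen64]byte
	for _, m := range ms {
		buf = append(buf, tmp[:binary.PutUvarint(tmp[:], m.Ref)]...)
		buf = append(buf, m.MetricType)
		buf = append(buf, tmp[:binary.PutUvarint(tmp[:], uint64(len(m.Unit)))]...)
		buf = append(buf, m.Unit...)
		buf = append(buf, tmp[:binary.PutUvarint(tmp[:], uint64(len(m.Help)))]...)
		buf = append(buf, m.Help...)
	}
	return buf
}
```

During checkpointing, a map keyed by series ref can be used to keep only the latest metadata entry per series before re-encoding, which is the deduplication the commit describes.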
ZhangJian He 95b7d058ac
Fix markdown syntax in tsdb index.md (#11032)
* Fix markdown syntax in tsdb index.md

Signed-off-by: ZhangJian He <shoothzj@gmail.com>

* Update tsdb/docs/format/index.md

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

Signed-off-by: ZhangJian He <shoothzj@gmail.com>
2022-07-18 04:28:52 -07:00
Levi Harrison 3d538351f6
Recognize exemplar record type in WAL watcher metrics (#11008)
* Add exemplar record case

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* recordType() -> Type.String()

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-18 15:54:11 +05:30
Levi Harrison 08f3ddb864
Sparse histogram remote-write support (#11001) 2022-07-14 09:13:12 -04:00
beorn7 3ce988b031 Merge branch 'sparsehistogram' into beorn7/sparsehistogram 2022-07-13 18:07:54 +02:00
beorn7 28f028e938 Merge branch 'main' into sparsehistogram 2022-07-12 19:07:13 +02:00
beorn7 5d14046d28 tsdb: Fix chunk handling during appendHistogram
Previously, the maxTime wasn't updated properly when a recoding
happened.

My apologies for reformatting many lines for line length. During the
bug hunt, I tried to make things more readable in a reasonably wide
editor window.

Signed-off-by: beorn7 <beorn@grafana.com>
2022-07-06 18:44:53 +02:00
beorn7 642c5758ff tsdb: Expose histogram append bug
Signed-off-by: beorn7 <beorn@grafana.com>
2022-07-06 18:44:45 +02:00
beorn7 49be0784b4 tsdb: Fix chunk handling during histogram recoding
Previously, the maxTime wasn't updated properly when a recoding
happened.

My apologies for reformatting many lines for line length. During the
bug hunt, I tried to make things more readable in a reasonably wide
editor window.

Signed-off-by: beorn7 <beorn@grafana.com>
2022-07-06 14:34:02 +02:00
Jesus Vazquez 6cfe44d7fd
WaitUntilIdle optimize idling time (#10878)
Relates to @bboreham's optimization in https://github.com/prometheus/prometheus/pull/10859

Bryan reduced the sleep time, which already improved the benchmark
deltas by quite a lot. However, while working on a similar
implementation for out-of-order ingestion, I noticed that we actually
enter this method thousands of times.

@ywwg had the brilliant idea of not always sleeping before the select,
but instead making the sleep a case in the select, so we only sleep
when we need to.

The benchmark deltas are amazing:

```
❯ benchstat old_implementation.txt new_implementation_using_time_after.txt
name                                                                                                     old time/op  new time/op  delta
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=0,mmappedChunkT=0-8        521ms ±25%   253ms ± 6%  -51.47%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=36,mmappedChunkT=0-8       773ms ± 3%   369ms ±31%  -52.23%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=72,mmappedChunkT=0-8       592ms ±28%   297ms ±28%  -49.80%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200,exemplarsPerSeries=360,mmappedChunkT=0-8      547ms ± 2%  999ms ±187%     ~     (p=0.690 n=5+5)
LoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50,exemplarsPerSeries=0,mmappedChunkT=0-8        11.3s ± 4%    1.3s ±44%  -88.48%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50,exemplarsPerSeries=2,mmappedChunkT=0-8        11.1s ± 1%    1.2s ±20%  -89.08%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=0,mmappedChunkT=0-8        1.24s ± 3%   0.18s ± 7%  -85.76%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=2,mmappedChunkT=0-8        1.24s ± 2%   0.18s ± 5%  -85.24%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=5,mmappedChunkT=0-8        1.23s ± 5%   0.27s ±33%  -77.73%  (p=0.008 n=5+5)
LoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=24,mmappedChunkT=0-8       1.28s ± 1%   0.36s ± 7%  -71.51%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=0,mmappedChunkT=3800-8    12.1s ± 1%    3.1s ± 6%  -74.33%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=2,mmappedChunkT=3800-8    12.1s ± 1%    3.4s ± 4%  -71.94%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=5,mmappedChunkT=3800-8    12.1s ± 1%    3.8s ±17%  -68.35%  (p=0.008 n=5+5)
LoadWAL/batches=100,seriesPerBatch=1000,samplesPerSeries=480,exemplarsPerSeries=24,mmappedChunkT=3800-8   12.4s ± 1%    4.0s ±18%  -67.71%  (p=0.008 n=5+5)
```

Benchmarked on Linux
```
goos: linux
goarch: amd64
pkg: github.com/prometheus/prometheus/tsdb
cpu: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
```

Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
2022-06-30 15:00:04 +02:00
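The core of the change described in the commit above fits in a few lines. A minimal sketch, with hypothetical names (waitUntilIdle, notifyIdle, queueLen), of moving the sleep into the select so it is only paid when nothing else is ready:

```go
package main

import "time"

const checkInterval = 10 * time.Millisecond // illustrative pause, not the real value

// waitUntilIdle returns once queueLen reports an empty queue. The sleep is a
// select case rather than an unconditional time.Sleep before the select, so a
// notification wakes us immediately and the timer only fires when idle.
func waitUntilIdle(notifyIdle <-chan struct{}, queueLen func() int) {
	for queueLen() > 0 {
		select {
		case <-notifyIdle:
			// A worker signalled progress; re-check the queue right away.
		case <-time.After(checkInterval):
			// Nothing happened for a while; poll the queue length again.
		}
	}
}
```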
Julien Pivotto bacd776356
Merge pull request #10907 from damnever/fix/panic
Fix panic if series is not found when deleting series
2022-06-30 11:23:08 +02:00
Peter Štibraný ffc60d8397
Reduce chunk write queue memory usage 2 (#10874)
* Job queue

This PR reimplements the chan chunkWriteJob with a custom buffered queue that should use less memory, because it doesn't preallocate the entire buffer for the maximum queue size at once. Instead it allocates smaller individual "segments".

As elements are added to the queue, they fill individual segments. When elements are removed from the queue (and its segments), empty segments can be thrown away. This doesn't change the memory usage of the queue when it's full, but should decrease its memory footprint when it's empty (the queue keeps at most one segment in that case).

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>

* Modify test to work with low resolution timer.

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>

* Improve comments.

Signed-off-by: Peter Štibraný <pstibrany@gmail.com>
2022-06-29 17:51:27 +05:30
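A minimal, single-goroutine sketch of the segmented-queue idea from the commit above (hypothetical types; the real chunk write queue adds locking, wakeups, and size accounting): segments are allocated as elements arrive and dropped once drained, so an empty queue retains at most one small segment.

```go
package main

type job struct{ /* payload */ }

const segmentSize = 128

type segment struct {
	buf         [segmentSize]job
	read, write int
	next        *segment
}

type jobQueue struct {
	head, tail *segment // pop from head, push to tail
}

func (q *jobQueue) push(j job) {
	if q.tail == nil || q.tail.write == segmentSize {
		// Allocate a new segment only when the last one is full (or none exists).
		s := &segment{}
		if q.tail != nil {
			q.tail.next = s
		} else {
			q.head = s
		}
		q.tail = s
	}
	q.tail.buf[q.tail.write] = j
	q.tail.write++
}

func (q *jobQueue) pop() (job, bool) {
	if q.head == nil || q.head.read == q.head.write {
		return job{}, false // queue is empty
	}
	j := q.head.buf[q.head.read]
	q.head.read++
	if q.head.read == segmentSize {
		// The segment is fully drained; drop it so its memory can be reclaimed.
		if q.head == q.tail {
			q.tail = nil
		}
		q.head = q.head.next
	}
	return j, true
}
```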
Xiaochao Dong (@damnever) 6b042da2d8 Fix panic if series is not found when deleting series
Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
2022-06-24 15:55:32 +08:00
Steve Azzopardi 04fe2c9522
fix(tsdb): inc mmap corruption counter on mmap out of sequence error (#10406)
What
---
When we see out-of-sequence chunks, increase the chunk corruption counter
to indicate that one of the chunks was corrupted.

Reference: https://github.com/prometheus/prometheus/pull/10406#issuecomment-1142595527
Signed-off-by: Steve Azzopardi <steveazz@outlook.com>
2022-06-22 14:03:12 +05:30
Peter Štibraný 03a2313f7a
Reduce chunk write queue memory usage (#10873)
* don't waste space on the chunkRefMap
* add time factor
* add comments
* better readability
* add instrumentation and more comments
* formatting
* uppercase comments
* Address review feedback. Renamed "free" to "shrink" everywhere, updated comments and threshold to 1000.
* double space

Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
Co-authored-by: Peter Štibraný <pstibrany@gmail.com>

Co-authored-by: Mauro Stettler <mauro.stettler@gmail.com>
2022-06-17 13:11:39 +05:30
Bryan Boreham 9f77d23889
tsdb: commit data periodically in CreateBlock (#10788)
To avoid building up data in memory, commit and make a new appender
periodically.

The number `commitAfter = 10000` was chosen arbitrarily; testing with
10x more or less gives slightly worse results.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-17 11:26:19 +05:30
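A sketch of the pattern from the commit above, with a hypothetical minimal appender interface standing in for storage.Appender: commit every commitAfter samples and open a fresh appender, so the data for a whole block never sits in one uncommitted transaction.

```go
package main

import "context"

const commitAfter = 10000

type appender interface {
	Append(ref uint64, t int64, v float64) (uint64, error)
	Commit() error
}

type sample struct {
	t int64
	v float64
}

// writeSamples appends all samples, committing periodically so that memory
// use stays bounded instead of growing with the whole block.
func writeSamples(ctx context.Context, newAppender func(context.Context) appender, samples []sample) error {
	app := newAppender(ctx)
	for i, s := range samples {
		if _, err := app.Append(0, s.t, s.v); err != nil {
			return err
		}
		if (i+1)%commitAfter == 0 {
			if err := app.Commit(); err != nil {
				return err
			}
			app = newAppender(ctx) // start a fresh appender for the next batch
		}
	}
	return app.Commit()
}
```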
Łukasz Mierzwa d65f037def
Don't increment prometheus_tsdb_compactions_failed_total when context is canceled (#10772)
When restarting Prometheus I sometimes see:

caller=db.go:832 level=error component=tsdb msg="compaction failed" err="compact head: persist head block: 2 errors: populate block: context canceled; context canceled"

And the prometheus_tsdb_compactions_failed_total metric gets incremented.
This makes it more difficult to write alerts based on the
prometheus_tsdb_compactions_failed_total metric, since any restart can
trigger it.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-06-17 11:21:43 +05:30
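The guard itself is small. A sketch with hypothetical names (compactHead, compactionsFailed): treat a context cancellation, as happens on shutdown, as a non-failure so the metric stays alert-friendly.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// runCompaction still returns the error, but only increments the failure
// counter when the compaction failed for a reason other than cancellation.
func runCompaction(ctx context.Context, compactHead func(context.Context) error, compactionsFailed interface{ Inc() }) error {
	if err := compactHead(ctx); err != nil {
		if !errors.Is(err, context.Canceled) {
			compactionsFailed.Inc()
		}
		return fmt.Errorf("compact head: %w", err)
	}
	return nil
}
```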
beorn7 095b6c93dd Merge branch 'main' into sparsehistogram 2022-06-14 14:27:35 +02:00
Bryan Boreham 542b9ecdbd
tsdb: reduce sleep time when reading WAL (#10859)
The code sleeps for a short time to allow goroutines to finish; however,
it seems the duration can be reduced a lot, speeding up the reading
process.

I checked using some WAL data from production, and the queue is almost
always empty at the time we enter `waitForIdle()` so there is no danger
of spinning in the tight loop.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-12 11:54:11 +05:30
beorn7 40ad5e284a Merge branch 'main' into beorn7/sparsehistogram 2022-06-09 20:50:30 +02:00
Bryan Boreham 9f79a6f4b5
tsdb: faster CRC check by avoiding allocations (#10789)
Instead of creating a new hashing object every time, call `crc32.Checksum`,
which computes the answer without allocations.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-06-08 08:00:59 +05:30
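For reference, the allocation-free pattern is the standard-library one; a small sketch (the TSDB format uses the Castagnoli polynomial for its checksums):

```go
package main

import "hash/crc32"

// Build the table once; crc32.Checksum then computes the CRC directly,
// without allocating a new hash.Hash32 per call the way crc32.New does.
var castagnoliTable = crc32.MakeTable(crc32.Castagnoli)

func checksum(b []byte) uint32 {
	return crc32.Checksum(b, castagnoliTable)
}
```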
Matej Gera 1dd247f68b
Remote Write: Rename confusing walDir parameter to dir (#10464)
* Rename walDir parameter to dir

Signed-off-by: Matej Gera <matejgera@gmail.com>

* Improve NewQueueManager comment

Signed-off-by: Matej Gera <matejgera@gmail.com>
2022-05-30 21:45:30 -07:00
David Leadbeater 57f4aab27d
Update godoc links and remove note about TSDB versioning (#10754)
Signed-off-by: David Leadbeater <dgl@dgl.cx>
2022-05-26 18:34:43 +10:00
maizige 10b677b826
fix typo (#10696)
Update doc comment

Signed-off-by: gemaizi <864321211@qq.com>
2022-05-25 18:01:45 +02:00
Filip Petkovski d3cb39044e
Fix typo in symbol table size exceeded error message (#10746)
This commit fixes a typo when reporting an error that the symbol
table size has been exceeded.

Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
2022-05-25 10:40:36 +02:00
Julien Pivotto 6e3a0efe40
Make necessary change to compile promql parser to wasm (#10683)
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2022-05-12 09:12:05 +02:00
Matthias Rampke 78f2645787
test(tsdb): break up repeated test to avoid timeout (#10671)
On macOS, the TestTombstoneCleanRetentionLimitsRace performs very
poorly. It takes more than a second to write out one block, and as it
writes 400 of them, we run into the 10-minute test timeout frequently.

While this doesn't fix the actual performance issue, breaking each
iteration into a subtest makes the test pass reliably (because each
iteration comfortably finishes in under a minute).

Related report: https://groups.google.com/g/prometheus-developers/c/jxQ6Ayg6VJ4/m/03H_DS9PDAAJ

Signed-off-by: Matthias Rampke <matthias@prometheus.io>
2022-05-09 00:39:26 +02:00
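The shape of the change in the commit above is the standard subtest pattern; a sketch (illustrative body only, not the actual test):

```go
package main

import (
	"fmt"
	"testing"
)

func TestTombstoneCleanRetentionLimitsRace(t *testing.T) {
	for i := 0; i < 400; i++ {
		i := i // capture the loop variable for the closure
		t.Run(fmt.Sprintf("iteration-%d", i), func(t *testing.T) {
			// ... write one block and exercise the race, as before ...
			_ = i
		})
	}
}
```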
Łukasz Mierzwa 88f9b248b4
Correctly format error message (#10669)
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-05-06 00:42:31 +02:00
Bryan Boreham 4b9f248e85
unit tests: make all Labels sorted alphabetically (#10532)
"Labels is a sorted set of labels. Order has to be guaranteed upon
instantiation." says the comment, so fix all the tests that break this
rule.

For `BenchmarkLabelValuesWithMatchers()` and
`BenchmarkHeadLabelValuesWithMatchers()` the amount of work done changes
significantly if you put the labels in order, because all series refs
get neatly partitioned by the `tens` label, so I renamed the labels
to maintain the previous behaviour.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-05-04 23:41:36 +02:00
beorn7 654c07783c Fix deprecation
Signed-off-by: beorn7 <beorn@grafana.com>
2022-05-04 13:43:23 +02:00
beorn7 3bc711e333 Merge branch 'main' into sparsehistogram 2022-05-04 13:37:13 +02:00
Matthieu MOREL e2ede285a2
refactor: move from io/ioutil to io and os packages (#10528)
* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir

Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
2022-04-27 11:24:36 +02:00
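The typical shape of this migration, shown for directory listing with a hypothetical helper: ioutil.ReadDir returned []os.FileInfo, while os.ReadDir returns the lighter []fs.DirEntry.

```go
package main

import (
	"fmt"
	"os"
)

// listDirs returns the names of the subdirectories of dir.
func listDirs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir) // was: ioutil.ReadDir(dir)
	if err != nil {
		return nil, fmt.Errorf("read dir %s: %w", dir, err)
	}
	var dirs []string
	for _, e := range entries { // e is an fs.DirEntry
		if e.IsDir() {
			dirs = append(dirs, e.Name())
		}
	}
	return dirs, nil
}
```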
Oleg Zaytsev af0f6da5cb
Fix chunk overflow appending samples at a variable rate (#10607)
* Add a test with variable samples rate append

This test overflows the chunk created in memseries, and the total amount
of samples in the (only) mmapped chunk is 29, instead of the 65565
appended ones.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Cut new chunk when rate prediction was wrong

When appending samples at a slow rate and then appending at a higher
rate, the prediction we made about when to cut a new chunk is no longer
valid. Sometimes this can even overflow the chunk, if more samples than
a uint16 can hold are appended.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Improve comment on 2*samplesPerChunk

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Assert that all chunks have less than 240 samples

Also, trigger new chunk at 240, not at more than 240

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2022-04-20 14:54:20 +02:00
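A rough sketch, with hypothetical names, of the guard this commit describes: the chunk end is still predicted from the early append rate, but a hard cap of 2*samplesPerChunk (240 with the usual 120 samples per chunk) cuts the chunk even when that prediction was made at a much lower rate.

```go
package main

const samplesPerChunk = 120

// shouldCutNewChunk decides whether to start a new chunk before appending the
// next sample. nextAt is the chunk end time predicted from the observed rate.
func shouldCutNewChunk(t, nextAt int64, numSamples int) bool {
	if t >= nextAt {
		return true // reached the predicted chunk end
	}
	// Hard cap: if the rate prediction was wrong (samples now arrive much
	// faster), never let the chunk grow past twice the target size.
	return numSamples >= 2*samplesPerChunk
}
```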