Commit graph

64 commits

Ganesh Vernekar c806262206
Fix 'chunks.HeadReadWriter: maxt of the files are not set' error (#7856)
* Fix chunks.HeadReadWriter: maxt of the files are not set

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-08-26 19:59:18 +02:00
Yukun Sun cfd4e05c9e
fix: return a corruption error when the iterator function finds a chunk that is out of sequence (#7855)
Signed-off-by: sunyukun <sunyukun@didiglobal.com>

Co-authored-by: sunyukun <sunyukun@didiglobal.com>
2020-08-26 20:36:27 +05:30
Zhou Hao 40ace418d1
fix misspell (#7764)
Signed-off-by: Zhou Hao <zhouhao@cn.fujitsu.com>
2020-08-07 08:57:25 +01:00
johncming ac677ed8b3
promql: delete redundant return value. (#7721)
Signed-off-by: johncming <johncming@yahoo.com>
2020-08-03 10:45:53 +01:00
Bartlomiej Plotka e6d7cc5fa4
tsdb: Added ChunkQueryable implementations to db; unified MergeSeriesSets and vertical to single struct. (#7069)
* tsdb: Added ChunkQueryable implementations to db; unified compactor, querier and fanout block iterating.

Chained to https://github.com/prometheus/prometheus/pull/7059

* NewMerge(Chunk)Querier now takes multiple primaries, allowing tsdb DB code to use it.
* Added a single SeriesEntry / ChunkEntry for all series implementations.
* Unified all vertical and non-vertical cases for compaction and querying into a single
merged series / chunk set, reusing VerticalSeriesMergeFunc for the overlapping algorithm (same logic as before).
* Added block (Base/Chunk/)Querier for block querying. We then use populateAndTomb(Base/Chunk/) to iterate over chunks or samples.
* Refactored endpoint tests and querier tests to include subtests.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed comments from Brian and Beorn.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed snapshot test and added chunk iterator support for DBReadOnly.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed race when iterating over Ats first.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed tests.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed populate block tests.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed endpoints test.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed test.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Added test & fixed case of head open chunk.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed DBReadOnly tests and bug producing 1 sample chunks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Added cases for partial block overlap for multiple full chunks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Added extra tests for chunk meta after compaction.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed small vertical merge bug and added more tests for that.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-07-31 16:03:02 +01:00
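
To make the merging idea above concrete, here is a minimal, self-contained Go sketch. It is not the actual Prometheus API; `series`, `verticalMergeFunc`, and `mergeSeriesSets` are illustrative names. Several label-sorted series sets are combined into one, and a pluggable merge function decides how to reconcile series that appear in more than one set (the overlapping, "vertical" case).

```go
package main

import (
	"fmt"
	"sort"
)

// series stands in for a labelled series; Labels is the sort/merge key.
type series struct {
	Labels  string
	Samples []int64
}

// verticalMergeFunc decides how to combine copies of the same series that
// come from different, possibly overlapping, sources.
type verticalMergeFunc func(same []series) series

// mergeSeriesSets groups series from all inputs by label set, then emits one
// merged series per label set in sorted label order.
func mergeSeriesSets(merge verticalMergeFunc, sets ...[]series) []series {
	grouped := map[string][]series{}
	for _, set := range sets {
		for _, s := range set {
			grouped[s.Labels] = append(grouped[s.Labels], s)
		}
	}
	keys := make([]string, 0, len(grouped))
	for k := range grouped {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	out := make([]series, 0, len(keys))
	for _, k := range keys {
		out = append(out, merge(grouped[k]))
	}
	return out
}

func main() {
	// One possible merge function: concatenate the overlapping samples.
	concat := func(same []series) series {
		m := series{Labels: same[0].Labels}
		for _, s := range same {
			m.Samples = append(m.Samples, s.Samples...)
		}
		return m
	}
	a := []series{{`up{job="a"}`, []int64{1, 2}}}
	b := []series{{`up{job="a"}`, []int64{3}}, {`up{job="b"}`, []int64{4}}}
	fmt.Println(mergeSeriesSets(concat, a, b))
}
```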
Annanay 9bba8a6eae Merge branch 'master' into appender-context
Signed-off-by: Annanay <annanayagarwal@gmail.com>
2020-07-30 16:43:18 +05:30
Annanay 89129cd39a Address comments
Signed-off-by: Annanay <annanayagarwal@gmail.com>
2020-07-30 16:41:13 +05:30
Javier Palomo Almena 348ff4285f
tsdb: Replace sync/atomic with uber-go/atomic in tsdb (#7659)
* tsdb/chunks: Replace sync/atomic with uber-go/atomic

Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>

* tsdb/head: Replace sync/atomic with uber-go/atomic

Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>

* vendor: Make go.uber.org/atomic a direct dependency

There are no modifications to go.sum and vendor/ because
it was already vendored.

Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>

* tsdb: Remove comments referring to the sync/atomic alignment bug

Related: https://golang.org/pkg/sync/atomic/#pkg-note-BUG

Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
2020-07-28 10:12:42 +05:30
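
A minimal sketch of the pattern this change applies; the struct and field names are illustrative, not the ones touched in the commit. With go.uber.org/atomic the counters are typed values that can only be accessed atomically, and the 64-bit alignment caveat of sync/atomic on 32-bit platforms no longer applies.

```go
package main

import (
	"fmt"

	"go.uber.org/atomic"
)

// counters groups a few fields the way a TSDB struct might. With
// go.uber.org/atomic the fields are safe regardless of struct layout, unlike
// raw int64 + sync/atomic, which required careful 64-bit alignment on 32-bit
// platforms.
type counters struct {
	samplesAppended *atomic.Int64
	minTime         *atomic.Int64
}

func main() {
	c := counters{
		samplesAppended: atomic.NewInt64(0),
		minTime:         atomic.NewInt64(0),
	}
	c.samplesAppended.Inc()
	c.samplesAppended.Add(9)
	c.minTime.Store(1_600_000_000_000)
	fmt.Println(c.samplesAppended.Load(), c.minTime.Load())
}
```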
Julien Pivotto ffc925dd21
TSDB: Error when we commit/rollback twice (#7593)
* TSDB: Error when we commit/rollback twice

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-07-22 11:57:38 +02:00
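
A toy illustration of the behaviour described above, assuming nothing about the real TSDB types: the appender remembers that it has been closed, and a second Commit or Rollback returns an error instead of silently succeeding.

```go
package main

import (
	"errors"
	"fmt"
)

// errAppenderClosed is the kind of sentinel such a change introduces: once an
// appender has been committed or rolled back, using it again is an error.
var errAppenderClosed = errors.New("appender closed")

type appender struct{ closed bool }

func (a *appender) Commit() error {
	if a.closed {
		return errAppenderClosed
	}
	a.closed = true
	// ... flush samples ...
	return nil
}

func (a *appender) Rollback() error {
	if a.closed {
		return errAppenderClosed
	}
	a.closed = true
	return nil
}

func main() {
	a := &appender{}
	fmt.Println(a.Commit()) // <nil>
	fmt.Println(a.Commit()) // appender closed
}
```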
Krasimir Georgiev ccab2b30c9
Test no panic after a WAL corruption (#7625)
* no panic when the head memSeries has chunks in it

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

* fix a panic when querying after a wal corruption.

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

* review nits

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

* Add test for reading the data after a wal corruption.

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

Update tsdb/db_test.go

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

Update tsdb/db_test.go

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

* spellings

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
2020-07-21 12:32:13 +05:30
Krasi Georgiev d30492cbb0 Avoid panic when the headChunk is nil during isolation.
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
2020-07-20 18:23:18 +03:00
Ganesh Vernekar 1760c7474c
Replay m-map chunks irrespective of WAL (#7589)
* Replay m-map chunks irrespective of WAL

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* More logs

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-16 18:34:08 +05:30
Ganesh Vernekar ea013343ca
Log when starting to create a checkpoint (#7581)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-15 19:15:37 +05:30
Bartlomiej Plotka 823b218e1b
Fixed race between compact (gc, populate) and head append causing unknown symbol error. (#7560)
* Fixed race between compact (gc, populate) and head append causing unknown symbol error.

Fixes https://github.com/prometheus/prometheus/issues/7373

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-07-14 09:36:22 +01:00
Bartlomiej Plotka 492061b24c
Revert "Fix unknown symbol error during head compaction (#7526)" (#7556)
This reverts commit 30505a202a.
2020-07-11 22:37:16 +05:30
Ganesh Vernekar 30505a202a
Fix unknown symbol error during head compaction (#7526)
* Fix race during head compaction

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Comment out the test

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Skip test instead of commenting it out

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-07 17:29:09 +05:30
Harkishen Singh f32307b656
Increments WAL corruption metric on WAL corruption during checkpointing (#7491)
* Increments wal corruption metric on error during checkpointing

Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>

* check for wal corruption error

Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
2020-07-05 11:25:42 +05:30
Ganesh Vernekar 082c17b691
Introduce SortedLabelValues/LabelValues to speed up queries for high cardinality (#7448)
* Introduce LabelValuesUnsorted to speed up queries for high cardinality

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add sort check

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-06-25 14:10:29 +01:00
Peter Štibraný ff80690a6e
Optimise lowWatermark in Isolation (#7332)
* Track open appenders in a doubly-linked list to make lowWatermark O(1).
* Use RW locks.
* Added BenchmarkIsolationWithState.

Signed-off-by: Peter Štibraný <peter.stibrany@grafana.com>
2020-06-03 20:09:05 +02:00
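
A toy version of the data-structure change described above, not the actual Prometheus isolation code: open append IDs are kept in a doubly-linked list in opening order, so the low watermark is read from the front of the list in O(1), and closing an appender removes its element in O(1) via a side map.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type isolation struct {
	mtx    sync.RWMutex
	lastID uint64
	open   *list.List               // open append IDs, oldest first
	byID   map[uint64]*list.Element // element lookup for O(1) removal
}

func newIsolation() *isolation {
	return &isolation{open: list.New(), byID: map[uint64]*list.Element{}}
}

func (i *isolation) newAppendID() uint64 {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	i.lastID++
	i.byID[i.lastID] = i.open.PushBack(i.lastID)
	return i.lastID
}

func (i *isolation) closeAppend(id uint64) {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	if e, ok := i.byID[id]; ok {
		i.open.Remove(e)
		delete(i.byID, id)
	}
}

// lowWatermark returns the oldest open append ID, or the last issued ID if
// none are open, without scanning all appenders.
func (i *isolation) lowWatermark() uint64 {
	i.mtx.RLock()
	defer i.mtx.RUnlock()
	if e := i.open.Front(); e != nil {
		return e.Value.(uint64)
	}
	return i.lastID
}

func main() {
	iso := newIsolation()
	a := iso.newAppendID()
	_ = iso.newAppendID()
	fmt.Println(iso.lowWatermark()) // oldest open appender
	iso.closeAppend(a)
	fmt.Println(iso.lowWatermark()) // advances once it is closed
}
```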
Jess G fdc49fae5b
Added time range parameters to labelNames API (#7288)
* add time range params to labelNames api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* evaluate min/max time range when reading labels from the head

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelValues api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test, add docs

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add a test for head min max range

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test to match comment

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* address CR comments

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* combine vars only used once

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelNames api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* evaluate min/max time range when reading labels from the head

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelValues api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test, add docs

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add a test for head min max range

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test to match comment

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* address CR comments

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* combine vars only used once

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* restart ci

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* use range expectedLabelNames instead of range actualLabelNames in test

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
2020-05-30 13:50:09 +01:00
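
For reference, a small Go client sketch exercising the new parameters; the host and the time window are illustrative. The label names endpoint can now be restricted to a time range via the optional start and end parameters.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Query label names seen only within a time window, using the optional
	// start/end parameters added by this change.
	params := url.Values{}
	params.Set("start", "2020-05-30T00:00:00Z")
	params.Set("end", "2020-05-30T12:00:00Z")

	resp, err := http.Get("http://localhost:9090/api/v1/labels?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```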
Krasimir Georgiev 09df8d94e0
More explicit chunks and head error handling. (#7277) 2020-05-22 12:03:23 +03:00
Ganesh Vernekar 1c99adb9fd
Callbacks for lifecycle of series in TSDB (#7159)
* Callbacks for lifecycle of series in TSDB

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add more comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-20 18:52:08 +05:30
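
A hedged sketch of the callback idea; method names here are illustrative, not the exact TSDB interface. User code gets a veto before a series is created and notifications after creation and deletion, which a hook such as a series limiter can build on.

```go
package main

import (
	"errors"
	"fmt"
)

// seriesLifecycleHooks sketches the callback idea: a veto before a series is
// created, plus notifications after creation and deletion.
type seriesLifecycleHooks interface {
	PreCreation(labels string) error // non-nil error rejects the series
	PostCreation(labels string)
	PostDeletion(labels ...string)
}

// limitHook is one possible use: cap the number of live series.
type limitHook struct{ created, max int }

var _ seriesLifecycleHooks = (*limitHook)(nil)

func (h *limitHook) PreCreation(labels string) error {
	if h.created >= h.max {
		return errors.New("series limit reached")
	}
	return nil
}
func (h *limitHook) PostCreation(labels string)    { h.created++ }
func (h *limitHook) PostDeletion(labels ...string) { h.created -= len(labels) }

func main() {
	h := &limitHook{max: 1}
	for _, l := range []string{`up{job="a"}`, `up{job="b"}`} {
		if err := h.PreCreation(l); err != nil {
			fmt.Println("rejected", l+":", err)
			continue
		}
		h.PostCreation(l)
		fmt.Println("created", l)
	}
}
```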
Ganesh Vernekar d4b9fe801f
M-map full chunks of Head from disk (#6679)
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.

Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look for its m-mapped chunks in the map created before and add them to that series.

If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in m-mapped files results in NO data loss.

[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which includes the offsets of the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating over all m-mapped chunks as stated above, matching the series ID present in the chunk header with the offset of that chunk in that file.

**Prombench results**

_WAL Replay_

1h WAL replay time
30% less WAL replay time - 4m31 vs 3m36
2h WAL replay time
20% less WAL replay time - 8m16 vs 7m

_Memory During WAL Replay_

High Churn:
10-15% less RAM -  32gb vs 28gb
20% less RAM after compaction 34gb vs 27gb
No Churn:
20-30% less RAM -  23gb vs 18gb
40% less RAM after compaction 32.5gb vs 20gb

Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)


Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-06 21:00:00 +05:30
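
A toy sketch of the two replay stages described above, with stand-in types rather than the real TSDB ones: stage 1 indexes every m-mapped chunk by the series reference found in its header, and stage 2 walks the WAL and hands each newly created series its pre-built chunk slice.

```go
package main

import "fmt"

// diskChunk is a toy stand-in for one m-mapped head chunk read from disk;
// the chunk header carries the series reference it belongs to.
type diskChunk struct {
	seriesRef        uint64
	minTime, maxTime int64
}

// walSeriesRecord is a toy stand-in for a WAL series record.
type walSeriesRecord struct {
	ref    uint64
	labels string
}

// Stage 1: iterate every m-mapped chunk and group them by series reference.
func indexMmappedChunks(chunks []diskChunk) map[uint64][]diskChunk {
	byRef := map[uint64][]diskChunk{}
	for _, c := range chunks {
		byRef[c.seriesRef] = append(byRef[c.seriesRef], c)
	}
	return byRef
}

// Stage 2: replay the WAL; when a series is created, attach the chunks that
// stage 1 already found for it, so only the tail needs samples from the WAL.
func replayWAL(records []walSeriesRecord, byRef map[uint64][]diskChunk) map[string][]diskChunk {
	series := map[string][]diskChunk{}
	for _, rec := range records {
		series[rec.labels] = byRef[rec.ref]
	}
	return series
}

func main() {
	chunks := []diskChunk{{seriesRef: 1, minTime: 0, maxTime: 1000}}
	wal := []walSeriesRecord{{ref: 1, labels: `up{job="a"}`}}
	fmt.Println(replayWAL(wal, indexMmappedChunks(chunks)))
}
```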
Ben Ye 1e4e37144d
Fixed wrongly handled not ready TSDB on web and API. (#7182)
* fix federate endpoint panic

Signed-off-by: yeya24 <yb532204897@gmail.com>

* Fixed all cases of not ready TSDB being wrongly handled.

* Fixed issue for federation.
* Ensured this will never happen again thanks to interfaces
* Fixes same issue for stats.
* Added tests for readiness.
* Fixed bug in stats. It was:
   status.MaxTime = db.Head().MaxTime()
   status.MinTime = db.Head().MaxTime()


Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-04-29 17:16:14 +01:00
Julien Pivotto fc3fb3265a
Merge pull request #7145 from prometheus/release-2.17
Backport release 2.17 into master
2020-04-20 14:08:12 +02:00
Julien Pivotto ed1852ab95
TSDB: Isolation: avoid creating appenderId's without appender (#7135)
Prior to this commit we could have situations where we are creating an
appenderId but never creating an appender to go with it, therefore
blocking the low watermark.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-04-17 20:51:03 +02:00
Marek Slabicki 8224ddec23
Capitalizing first letter of all log lines (#7043)
Signed-off-by: Marek Slabicki <thaniri@gmail.com>
2020-04-11 09:22:18 +01:00
Brian Brazil cd73b3d33e
Reduce how much old WAL we keep around. (#7098)
Previously we were keeping up to around 6 hours of WAL around by
removing 1/3 every hour. This was excessive, so switch to removing 2/3,
which will keep up to around 3 hours of WAL around.

This will roughly halve the size of the WAL and halve startup time for
those who are I/O bound. This may increase the checkpoint size for
those with certain churn patterns, but by much less than we're saving
from the segments.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-04-07 15:55:57 +05:30
Julien Pivotto ceef10cee4 Reset comment
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-26 00:17:56 +01:00
Julien Pivotto 653f343547 Revert head posting optimization
This reverts commit 52630ad0c7.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-25 20:19:33 +01:00
Julien Pivotto 8907ba6235 Make TSDB use storage errors
This fixes #6992, which was introduced by #6777. There was an
intermediate component which translated TSDB errors into storage errors,
but that component was deleted and this bug went unnoticed until we
looked at the Prombench results. Without this, scrapes will fail
instead of dropping samples or using "Add" when the series have been
garbage collected.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-17 22:24:25 +01:00
Julien Pivotto 0f9e78bd88
tsdb: fix races around head chunks (#6985)
* tsdb: fix races around head chunks

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-16 13:59:22 +01:00
Björn Rabenstein d80b0810c1
Move crucial actions to defer (#6918)
With defer having less of a performance penalty, there is no reason
not to do those crucial operations via defer.

Context: With isolation in place, if we forget to Commit/Rollback, the
low watermark will get stuck forever.

The current code should not have any bugs, but moving to defer helps
to avoid future bugs.

This also moves the `closeAppend` in the `Commit` implementation
itself to defer. If logging to the WAL fails, we would otherwise have
missed the `closeAppend`.

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-13 20:54:47 +01:00
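
A minimal sketch of the defer pattern described above, using toy types rather than the real appender: closeAppend is deferred at the top of Commit, so the isolation bookkeeping is released even when the WAL write fails.

```go
package main

import (
	"errors"
	"fmt"
)

// appender is a toy appender showing the shape of the change: whatever
// happens inside Commit, the isolation bookkeeping must be released, so it is
// moved into a defer instead of being placed after the WAL write.
type appender struct{ closed bool }

// writeToWAL stands in for the WAL write that may fail.
func (a *appender) writeToWAL() error { return errors.New("disk full") }

// closeAppend releases the append; if it were skipped, the low watermark
// would be stuck at this appender forever.
func (a *appender) closeAppend() { a.closed = true }

func (a *appender) Commit() error {
	defer a.closeAppend() // runs even when the WAL write below fails
	if err := a.writeToWAL(); err != nil {
		return err
	}
	// ... move samples into the head, update isolation watermarks, etc.
	return nil
}

func main() {
	a := &appender{}
	fmt.Println("commit error:", a.Commit())
	fmt.Println("append closed:", a.closed) // true despite the error
}
```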
beorn7 f6f4fd6556 tsdb: Do a full rollback upon commit error
I think the previous behavior is problematic as it will leave
`memSeries` around that still have `pendingCommit` set to `true`.

The only case where this can happen in this code path is a failure to
write to the WAL, in which case we are probably in trouble anyway. I
believe, however, we should still try to do the right thing and do the
full rollback. This will implicitly try to write to the WAL again, but
this time without samples, which may even succeed. (But we propagate
the previous error in any case.)

This also adds `a.head.putSeriesBuffer(a.sampleSeries)` to Rollback,
which was previously missing.

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-10 14:54:41 +01:00
beorn7 0193b746b1 Defer call to iso.closeAppend
This is taken from #6918. Since we probably won't merge #6918 before
the release, we have to do this bit of it as it fixes an actual bug
(iso.closeAppend is not called if the append fails because of an error
logging to the WAL).

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-04 23:33:30 +01:00
Ganesh Vernekar 2cc32c332d Log WAL replay duration
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-03-03 18:00:04 +01:00
Julien Pivotto ed623f69e2
tsdb: don't allow ingesting empty labelsets (#6891)
* tsdb: don't allow ingesting empty labelsets

When we ingest an empty labelset in the head, further blocks cannot be
compacted, with the error:

```
level=error ts=2020-02-27T21:26:58.379Z caller=db.go:659 component=tsdb
msg="compaction failed" err="persist head block: write compaction:
add series: out-of-order series added with label set \"{}\" / prev:
\"{}\""
```

We should therefore reject those invalid empty labelsets upfront.

This can be reproduced with the following:

```
cat << END > prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 1s
    basic_auth:
      username: test
      password: test
    metric_relabel_configs:
    - regex: ".*"
      action: labeldrop

    static_configs:
    - targets:
      - 127.0.1.1:9090
END
./prometheus --storage.tsdb.min-block-duration=1m
```
And wait a few minutes.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-02 07:18:05 +00:00
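
A toy sketch of the upfront check; the types and names are illustrative, not the real appender code. An empty labelset is rejected at append time, so it can never reach the head and break later compactions.

```go
package main

import (
	"errors"
	"fmt"
)

// errEmptyLabelSet mirrors the idea of the fix: reject an empty labelset at
// append time instead of letting it poison later compactions.
var errEmptyLabelSet = errors.New("empty labelset: invalid sample")

type labelSet map[string]string

func appendSample(lset labelSet, t int64, v float64) error {
	if len(lset) == 0 {
		return errEmptyLabelSet
	}
	// ... normal append path ...
	return nil
}

func main() {
	// After `labeldrop` removed every label, the scrape produces an empty
	// labelset; with the check above it is rejected up front.
	fmt.Println(appendSample(labelSet{}, 0, 1))                    // error
	fmt.Println(appendSample(labelSet{"job": "prometheus"}, 0, 1)) // <nil>
}
```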
beorn7 d9af11e636 Do not attempt isolation for appendID == 0
Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-01 02:48:35 +01:00
beorn7 7f30b0984d Implement isolation
This has been ported from https://github.com/prometheus/tsdb/pull/306.

Original implementation by @brian-brazil, explained in detail in the
2nd half of this talk:
https://promcon.io/2017-munich/talks/staleness-in-prometheus-2-0/

The implementation was then processed by @gouthamve into the PR linked
above. Relevant slide deck:
https://docs.google.com/presentation/d/1-ICg7PEmDHYcITykD2SR2xwg56Tzf4gr8zfz1OerY5Y/edit?usp=drivesdk

Signed-off-by: beorn7 <beorn@grafana.com>
Co-authored-by: Brian Brazil <brian.brazil@robustperception.io>
Co-authored-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2020-02-28 14:18:39 +01:00
beorn7 6b8181370f Fix punctuation nits
Signed-off-by: beorn7 <beorn@grafana.com>
2020-02-28 14:17:33 +01:00
Julien Pivotto 52630ad0c7 Make head Postings only return series in time range
benchmark                                                old ns/op      new ns/op      delta
BenchmarkQuerierSelect/Head/1of1000000-8                 405805161      120436132      -70.32%
BenchmarkQuerierSelect/Head/10of1000000-8                403079620      120624292      -70.07%
BenchmarkQuerierSelect/Head/100of1000000-8               404678647      120923522      -70.12%
BenchmarkQuerierSelect/Head/1000of1000000-8              403145813      118636563      -70.57%
BenchmarkQuerierSelect/Head/10000of1000000-8             405020046      125716206      -68.96%
BenchmarkQuerierSelect/Head/100000of1000000-8            426305002      175808499      -58.76%
BenchmarkQuerierSelect/Head/1000000of1000000-8           619002108      567013003      -8.40%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           1276316086     120281094      -90.58%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          1282631170     121836526      -90.50%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         1325824787     121174967      -90.86%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        1271386268     121025117      -90.48%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       1280223345     130838948      -89.78%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      1271401620     243635515      -80.84%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     1360256090     1307744674     -3.86%
BenchmarkQuerierSelect/Block/1of1000000-8                748183120      707888498      -5.39%
BenchmarkQuerierSelect/Block/10of1000000-8               741084129      716317249      -3.34%
BenchmarkQuerierSelect/Block/100of1000000-8              722157273      735624256      +1.86%
BenchmarkQuerierSelect/Block/1000of1000000-8             727587744      731981838      +0.60%
BenchmarkQuerierSelect/Block/10000of1000000-8            727518578      726860308      -0.09%
BenchmarkQuerierSelect/Block/100000of1000000-8           765577046      757382386      -1.07%
BenchmarkQuerierSelect/Block/1000000of1000000-8          1126722881     1084779083     -3.72%

benchmark                                                old allocs     new allocs     delta
BenchmarkQuerierSelect/Head/1of1000000-8                 4000018        24             -100.00%
BenchmarkQuerierSelect/Head/10of1000000-8                4000036        82             -100.00%
BenchmarkQuerierSelect/Head/100of1000000-8               4000216        625            -99.98%
BenchmarkQuerierSelect/Head/1000of1000000-8              4002016        6028           -99.85%
BenchmarkQuerierSelect/Head/10000of1000000-8             4020016        60037          -98.51%
BenchmarkQuerierSelect/Head/100000of1000000-8            4200016        600047         -85.71%
BenchmarkQuerierSelect/Head/1000000of1000000-8           6000016        6000016        +0.00%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           4000055        28             -100.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          4000073        87             -100.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         4000253        630            -99.98%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        4002053        6036           -99.85%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       4020053        60054          -98.51%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      4200053        600074         -85.71%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     6000053        6000053        +0.00%
BenchmarkQuerierSelect/Block/1of1000000-8                6000021        6000021        +0.00%
BenchmarkQuerierSelect/Block/10of1000000-8               6000057        6000057        +0.00%
BenchmarkQuerierSelect/Block/100of1000000-8              6000417        6000417        +0.00%
BenchmarkQuerierSelect/Block/1000of1000000-8             6004017        6004017        +0.00%
BenchmarkQuerierSelect/Block/10000of1000000-8            6040017        6040017        +0.00%
BenchmarkQuerierSelect/Block/100000of1000000-8           6400017        6400017        +0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-8          10000018       10000018       +0.00%

benchmark                                                old bytes     new bytes     delta
BenchmarkQuerierSelect/Head/1of1000000-8                 176001177     1392          -100.00%
BenchmarkQuerierSelect/Head/10of1000000-8                176002329     4368          -100.00%
BenchmarkQuerierSelect/Head/100of1000000-8               176013849     33520         -99.98%
BenchmarkQuerierSelect/Head/1000of1000000-8              176129056     321456        -99.82%
BenchmarkQuerierSelect/Head/10000of1000000-8             177281049     3427376       -98.07%
BenchmarkQuerierSelect/Head/100000of1000000-8            188801049     35055408      -81.43%
BenchmarkQuerierSelect/Head/1000000of1000000-8           304001059     304001049     -0.00%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           229192188     2488          -100.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          229193340     5568          -100.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         229204860     35536         -99.98%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        229320060     345104        -99.85%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       230472060     3894672       -98.31%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      241992060     40511632      -83.26%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     357192060     357192060     +0.00%
BenchmarkQuerierSelect/Block/1of1000000-8                227201516     227201506     -0.00%
BenchmarkQuerierSelect/Block/10of1000000-8               227203057     227203041     -0.00%
BenchmarkQuerierSelect/Block/100of1000000-8              227217161     227217165     +0.00%
BenchmarkQuerierSelect/Block/1000of1000000-8             227358279     227358289     +0.00%
BenchmarkQuerierSelect/Block/10000of1000000-8            228769485     228769475     -0.00%
BenchmarkQuerierSelect/Block/100000of1000000-8           242881487     242881477     -0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-8          384001705     384001705     +0.00%

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-02-20 22:41:46 +01:00
Brian Brazil cebe36c7d5 Make head Postings only return series in time range
Series() will fetch all the metadata for a series,
even if it's going to be filtered later due to time ranges.

For 1M series we save ~1.1s if you only needed some of the data, but take an
extra ~0.2s if you did want everything.

benchmark                                  old ns/op      new ns/op      delta
BenchmarkHeadSeries/1of1000000-4           1443715987     131553480      -90.89%
BenchmarkHeadSeries/10of1000000-4          1433394040     130730596      -90.88%
BenchmarkHeadSeries/100of1000000-4         1437444672     131360813      -90.86%
BenchmarkHeadSeries/1000of1000000-4        1438958659     132573137      -90.79%
BenchmarkHeadSeries/10000of1000000-4       1438061766     145742377      -89.87%
BenchmarkHeadSeries/100000of1000000-4      1455060948     281659416      -80.64%
BenchmarkHeadSeries/1000000of1000000-4     1633524504     1803550153     +10.41%

benchmark                                  old allocs     new allocs     delta
BenchmarkHeadSeries/1of1000000-4           4000055        28             -100.00%
BenchmarkHeadSeries/10of1000000-4          4000073        87             -100.00%
BenchmarkHeadSeries/100of1000000-4         4000253        630            -99.98%
BenchmarkHeadSeries/1000of1000000-4        4002053        6036           -99.85%
BenchmarkHeadSeries/10000of1000000-4       4020053        60054          -98.51%
BenchmarkHeadSeries/100000of1000000-4      4200053        600074         -85.71%
BenchmarkHeadSeries/1000000of1000000-4     6000053        6000094        +0.00%

benchmark                                  old bytes     new bytes     delta
BenchmarkHeadSeries/1of1000000-4           229192184     2488          -100.00%
BenchmarkHeadSeries/10of1000000-4          229193336     5568          -100.00%
BenchmarkHeadSeries/100of1000000-4         229204856     35536         -99.98%
BenchmarkHeadSeries/1000of1000000-4        229320056     345104        -99.85%
BenchmarkHeadSeries/10000of1000000-4       230472056     3894673       -98.31%
BenchmarkHeadSeries/100000of1000000-4      241992056     40511632      -83.26%
BenchmarkHeadSeries/1000000of1000000-4     357192056     402380440     +12.65%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-02-20 22:18:42 +01:00
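
A simplified sketch of the optimisation, with stand-in types rather than the head implementation: series whose [minTime, maxTime] does not overlap the query window are dropped before any per-series metadata is fetched, which is where the savings in the benchmarks above come from.

```go
package main

import "fmt"

// memSeries is a toy head series carrying only the fields needed to decide
// whether it can possibly match the query window.
type memSeries struct {
	ref              uint64
	minTime, maxTime int64
}

// postingsInRange keeps only series whose data overlaps [mint, maxt]; the
// expensive metadata fetch then happens for far fewer series when the window
// is narrow.
func postingsInRange(all []memSeries, mint, maxt int64) []uint64 {
	var refs []uint64
	for _, s := range all {
		if s.maxTime >= mint && s.minTime <= maxt {
			refs = append(refs, s.ref)
		}
	}
	return refs
}

func main() {
	series := []memSeries{
		{ref: 1, minTime: 0, maxTime: 100},
		{ref: 2, minTime: 500, maxTime: 900},
	}
	fmt.Println(postingsInRange(series, 0, 200)) // only series 1
}
```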
Bartlomiej Plotka 34426766d8 Unify Iterator interfaces. All point to storage now.
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow-up PRs.

* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All code now imports that.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods I adjusted min max
based on that. This should help in terms of performance for queries with functions like offset.
* Added Seek to deletedIterator and a test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.

No logic was changed, only different source of abstractions, so no need for benchmarks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:54 +00:00
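
A small sketch of the unified iterator shape referred to above, including the newly added Seek; names are illustrative rather than the exact chunkenc.Iterator definition. Seek jumps to the first sample at or after a timestamp, Next advances, and At returns the current timestamp/value pair.

```go
package main

import "fmt"

// iterator mirrors the shape of a unified sample iterator: Seek jumps to the
// first sample at or after t, Next advances, At returns the current pair.
type iterator struct {
	ts  []int64
	vs  []float64
	cur int
}

func (it *iterator) Next() bool { it.cur++; return it.cur < len(it.ts) }

func (it *iterator) Seek(t int64) bool {
	if it.cur < 0 {
		it.cur = 0
	}
	for it.cur < len(it.ts) && it.ts[it.cur] < t {
		it.cur++
	}
	return it.cur < len(it.ts)
}

func (it *iterator) At() (int64, float64) { return it.ts[it.cur], it.vs[it.cur] }

func main() {
	it := &iterator{ts: []int64{10, 20, 30}, vs: []float64{1, 2, 3}, cur: -1}
	if it.Seek(15) { // lands on t=20
		t, v := it.At()
		fmt.Println(t, v)
	}
	for it.Next() {
		t, v := it.At()
		fmt.Println(t, v)
	}
}
```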
Ganesh Vernekar 6f1d2ec73e
Break DB.Compact and DB.compactHead and DB.compactBlocks. Add DB.CompactHead.
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-02-14 20:33:26 +05:30
Marco Pracucci 0703dae7cc
Compact TSDB head chunks after being cut, to reduce inuse memory
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-02-05 13:00:39 +01:00
Thor 17d8c49919
made stripe size configurable (#6644)
Signed-off-by: Thor <thansen@digitalocean.com>
2020-01-30 12:42:43 +05:30
Ganesh Vernekar 21a5cf5d1d
Bring back tombstones to Head block (#6542)
* Bring back tombstones to Head block

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add test cases

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Cleanup

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-01-20 21:08:00 +05:30
Julien Pivotto 46d18112a3 tsdb: error on series with duplicate labels (#6664)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-20 11:05:27 +00:00
Brian Brazil 4708915ac6 Replace StringTuples with []string
Benchmarks show slight cpu/allocs improvements.

benchmark                                                               old ns/op      new ns/op      delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               269978625      235305110      -12.84%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       129739974      121646193      -6.24%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       123826274      122056253      -1.43%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      126962188      130038235      +2.42%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             6423653989     5991126455     -6.73%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             6934647521     7033370634     +1.42%
BenchmarkPostingsForMatchers/Head/i=~""-4                               1177781285     1121497736     -4.78%
BenchmarkPostingsForMatchers/Head/i!=""-4                               7033680256     7246094991     +3.02%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               293702332      287440212      -2.13%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        307628268      307039964      -0.19%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         512247746      480003862      -6.29%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 361199794      367066917      +1.62%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               478863761      476037784      -0.59%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              103394659      102902098      -0.48%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        482552781      475453903      -1.47%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      559257389      589297047      +5.37%
BenchmarkPostingsForMatchers/Block/n="1"-4                              36492          37012          +1.42%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      557788         611903         +9.70%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      554443         573814         +3.49%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     553227         553826         +0.11%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113855090      111707221      -1.89%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            133994674      136520728      +1.89%
BenchmarkPostingsForMatchers/Block/i=~""-4                              38138091       36299898       -4.82%
BenchmarkPostingsForMatchers/Block/i!=""-4                              28861213       27396723       -5.07%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112699941      110853868      -1.64%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       113198026      111389742      -1.60%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        28994069       27363804       -5.62%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                29709406       28589223       -3.77%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              134695119      135736971      +0.77%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             26783286       25826928       -3.57%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       134733254      134116739      -0.46%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     160713937      158802768      -1.19%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               36             36             +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       38             38             +0.00%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       38             38             +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      42             40             -4.76%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             61             59             -3.28%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             100088         100087         -0.00%
BenchmarkPostingsForMatchers/Head/i=~""-4                               100053         100051         -0.00%
BenchmarkPostingsForMatchers/Head/i!=""-4                               100087         100085         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               44             42             -4.55%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        50             48             -4.00%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         100076         100074         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 100077         100075         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               100077         100074         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              11167          11165          -0.02%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        100082         100080         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      111265         111261         -0.00%
BenchmarkPostingsForMatchers/Block/n="1"-4                              6              6              +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     15             13             -13.33%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            12             10             -16.67%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            100040         100038         -0.00%
BenchmarkPostingsForMatchers/Block/i=~""-4                              100045         100043         -0.00%
BenchmarkPostingsForMatchers/Block/i!=""-4                              100041         100039         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              17             15             -11.76%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       23             21             -8.70%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        100046         100044         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                100050         100048         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              100049         100047         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             11150          11148          -0.02%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       100055         100053         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     111238         111234         -0.00%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               10887816      10887817      +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       5456648       5456648       +0.00%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       5456648       5456648       +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      5456792       5456712       -0.00%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             258254408     258254328     -0.00%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             273912888     273912904     +0.00%
BenchmarkPostingsForMatchers/Head/i=~""-4                               17266680      17266600      -0.00%
BenchmarkPostingsForMatchers/Head/i!=""-4                               273912416     273912336     -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               7062578       7062498       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        7062770       7062690       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         28152346      28152266      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 22721178      22721098      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               22721336      22721224      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              3623804       3623733       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        22721480      22721400      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      24816652      24816444      -0.00%
BenchmarkPostingsForMatchers/Block/n="1"-4                              296           296           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     1544          1464          -5.18%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1606114       1606045       -0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            17264709      17264629      -0.00%
BenchmarkPostingsForMatchers/Block/i=~""-4                              17264780      17264696      -0.00%
BenchmarkPostingsForMatchers/Block/i!=""-4                              17264680      17264600      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1606253       1606165       -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1606445       1606348       -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        17264808      17264728      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                17264936      17264856      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              17264965      17264885      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3148262       3148182       -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       17265141      17265061      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     20416944      20416784      -0.00%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-07 12:20:03 +00:00
Brian Brazil e9ede51753 Remove last vestiges of never used composite index code.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-07 12:20:03 +00:00