Commit graph

67 commits

Author SHA1 Message Date
Kemal Akkoyun 66dfb951c4
*: Consistent Error/Warning handling for SeriesSet iterator: Allowing Async Select (#7251)
* Add errors and Warnings to SeriesSet

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Change Querier interface and refactor accordingly

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactor promql/engine to propagate warnings at eval stage

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Make sure all the series from all Selects are pre-advanced

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Separate merge series sets

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Clean

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactor merge querier failure handling

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactored and simplified fanout with improvements from the incoming chunk iterator PRs.

* Secondary querier logic is hidden, replacing the convoluted failed-series-set logic we had before.
* Fanout is well commented.
* Closing the fanout records all errors.
* MergeQuerier has an improved (clearer) API.
* deferredGenericMergeSeriesSet is not needed, as we return no samples anyway for failed series sets (next = false); see the sketch after this list.
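For orientation, here is a minimal sketch, not the actual Prometheus code, of a SeriesSet-style iterator that carries both an error and warnings, with a failed set simply yielding no samples. The Warnings type, errSeriesSet, and sliceSeriesSet are illustrative stand-ins, and At() returns a string instead of a full Series.

```go
package main

import (
	"errors"
	"fmt"
)

// Warnings collects non-fatal problems encountered while selecting series.
type Warnings []error

// SeriesSet iterates over series; Err and Warnings are checked once Next
// returns false. Hypothetical, simplified version of the storage interface.
type SeriesSet interface {
	Next() bool
	At() string // stands in for a full Series with labels and samples
	Err() error
	Warnings() Warnings
}

// errSeriesSet is an empty set that only reports a failure and/or warnings.
type errSeriesSet struct {
	err   error
	warns Warnings
}

func (s errSeriesSet) Next() bool         { return false }
func (s errSeriesSet) At() string         { return "" }
func (s errSeriesSet) Err() error         { return s.err }
func (s errSeriesSet) Warnings() Warnings { return s.warns }

// sliceSeriesSet is a trivial in-memory set used for the example.
type sliceSeriesSet struct {
	series []string
	i      int
}

func (s *sliceSeriesSet) Next() bool         { s.i++; return s.i <= len(s.series) }
func (s *sliceSeriesSet) At() string         { return s.series[s.i-1] }
func (s *sliceSeriesSet) Err() error         { return nil }
func (s *sliceSeriesSet) Warnings() Warnings { return nil }

func main() {
	sets := []SeriesSet{
		&sliceSeriesSet{series: []string{"up", "go_goroutines"}},
		errSeriesSet{
			err:   errors.New("primary querier failed"),
			warns: Warnings{errors.New("secondary querier timed out")},
		},
	}
	for _, set := range sets {
		for set.Next() {
			fmt.Println("series:", set.At())
		}
		if err := set.Err(); err != nil {
			fmt.Println("error:", err)
		}
		for _, w := range set.Warnings() {
			fmt.Println("warning:", w)
		}
	}
}
```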

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fix formatting

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix CI issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Added final tests for error handling.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Calling Select after the first Next has run will panic.

NOTE: in lazySeriesSet we could in theory just panic; however, I think we can
simply return the error, as it will panic in expand anyway.
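A rough illustration of the sync.Once-based all-or-nothing idea mentioned above, under simplifying assumptions (single-threaded, eager rather than lazy results). The secondary and fetchFunc names are made up; this is not the real secondaryQuerier.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// fetchFunc stands in for an underlying Select call against a remote store.
type fetchFunc func() ([]string, error)

// secondary turns hard failures into a warning plus an empty result, using
// sync.Once so the decision "this querier failed" is made exactly once and
// applies to every subsequent Select (all-or-nothing partial response).
type secondary struct {
	once   sync.Once
	failed bool
	warn   error
}

func (s *secondary) Select(fetch fetchFunc) (series []string, warnings []error) {
	res, err := fetch()
	if err != nil {
		s.once.Do(func() {
			s.failed = true
			s.warn = err
		})
	}
	if s.failed {
		// Return nothing from this querier, but surface the failure as a warning.
		return nil, []error{s.warn}
	}
	return res, nil
}

func main() {
	s := &secondary{}

	ok := func() ([]string, error) { return []string{"up"}, nil }
	bad := func() ([]string, error) { return nil, errors.New("remote store unreachable") }

	fmt.Println(s.Select(ok))  // [up] []
	fmt.Println(s.Select(bad)) // [] [remote store unreachable]
	fmt.Println(s.Select(ok))  // [] [remote store unreachable] -- all-or-nothing
}
```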

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Utilize errWithWarnings

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix recently introduced expansion issue

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Add tests for secondary querier error handling

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Implement lazy merge

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Add name to test cases

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Reorganize

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review comments

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review comments

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Remove redundant warnings

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix rebase mistake

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-06-09 17:57:31 +01:00
Ganesh Vernekar 1c99adb9fd
Callbacks for lifecycle of series in TSDB (#7159)
* Callbacks for lifecycle of series in TSDB

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add more comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-20 18:52:08 +05:30
Ganesh Vernekar d4b9fe801f
M-map full chunks of Head from disk (#6679)
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.

Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look up its m-mapped chunks in the map created before and add them to that series.

If a head chunk is corrupted, the corrupted chunk and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in the m-mapped files results in NO data loss.

[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed from the index, which includes the offsets of the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating over all m-mapped chunks as stated above, matching the series ID present in the chunk header with the offset of that chunk in that file.
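A toy sketch of the two replay stages described above; mmappedChunk and series here are illustrative stand-ins, not the actual tsdb structs.

```go
package main

import "fmt"

// mmappedChunk records where a flushed head chunk lives on disk.
// Field names are illustrative.
type mmappedChunk struct {
	fileNo           int
	offset           int64
	minTime, maxTime int64
}

// series is a stripped-down head series.
type series struct {
	ref    uint64
	chunks []mmappedChunk
}

func main() {
	// Stage 1: iterate the m-mapped chunk files and build
	// series reference -> slice of m-mapped chunks.
	byRef := map[uint64][]mmappedChunk{
		1: {{fileNo: 1, offset: 8, minTime: 0, maxTime: 1000}},
		2: {{fileNo: 1, offset: 4096, minTime: 0, maxTime: 900}},
	}

	// Stage 2: replay the WAL as usual; whenever a series record creates a
	// new series, attach the chunks discovered in stage 1.
	walSeriesRefs := []uint64{1, 2}
	head := map[uint64]*series{}
	for _, ref := range walSeriesRefs {
		s := &series{ref: ref}
		if chks, ok := byRef[ref]; ok {
			s.chunks = chks
		}
		head[ref] = s
	}

	for ref, s := range head {
		fmt.Printf("series %d has %d m-mapped chunk(s)\n", ref, len(s.chunks))
	}
}
```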

**Prombench results**

_WAL Replay_

1h WAL replay time
30% less WAL replay time - 4m31 vs 3m36
2h WAL replay time
20% less WAL replay time - 8m16 vs 7m

_Memory During WAL Replay_

High churn:
10-15% less RAM - 32GB vs 28GB
20% less RAM after compaction - 34GB vs 27GB
No churn:
20-30% less RAM - 23GB vs 18GB
40% less RAM after compaction - 32.5GB vs 20GB

Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)


Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-06 21:00:00 +05:30
ZouYu 2b7437d60e
Fix some warnings: 'redundant type from array, slice, or map composite literal' (#7109)
Signed-off-by: ZouYu <zouy.fnst@cn.fujitsu.com>
2020-04-15 11:17:41 +01:00
Bartlomiej Plotka fe802f29c9 storage: Removed SelectSorted method; Simplified interface; Added requirement for remote read to sort response.
This is technically a BREAKING CHANGE, but it has been like this from the beginning: I just noticed that we rely in
Prometheus on remote read responses being sorted. This is because we use the data selected from remote reads in MergeSeriesSet,
which relies on sorting.

I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.

Also, I don't know what TestSelectSorted was testing, but now it's testing sorting.
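To illustrate why sorted inputs matter, here is a small sketch of a two-way merge over sorted series names, roughly what MergeSeriesSet does with full label sets; mergeSorted is a made-up helper, not the real implementation.

```go
package main

import "fmt"

// mergeSorted merges two ascending series streams into one; if the inputs
// are not sorted, duplicates are not collapsed and the output order breaks,
// which is why remote read responses must arrive sorted.
func mergeSorted(a, b []string) []string {
	out := make([]string, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			out = append(out, a[i])
			i++
		case a[i] > b[j]:
			out = append(out, b[j])
			j++
		default: // same series from both sources: emit once
			out = append(out, a[i])
			i++
			j++
		}
	}
	out = append(out, a[i:]...)
	out = append(out, b[j:]...)
	return out
}

func main() {
	local := []string{`up{job="a"}`, `up{job="b"}`}
	remote := []string{`up{job="b"}`, `up{job="c"}`} // must arrive sorted
	fmt.Println(mergeSorted(local, remote))
}
```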

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-13 13:06:25 +00:00
Bartlomiej Plotka 34426766d8 Unify Iterator interfaces. All point to storage now.
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All the TODOs I added will be fixed in follow-up PRs.

* The querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces were merged
into the storage package's interface.go. Everything imports that now.
* querier.SeriesIterator was replaced by chunkenc.Iterator.
* Added the chunkenc.Iterator.Seek method and tests for the xor implementation (?) - see the sketch below.
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help performance for queries with functions like offset.
* Added Seek to deletedIterator and a test.
* storage/tsdb was removed, as it was only unnecessary glue with incompatible structs.

No logic was changed, only the source of the abstractions, so there is no need for benchmarks.
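A simplified stand-in showing the shape of a sample iterator with Seek, assuming the semantics "advance to the first sample with timestamp >= t"; the sample and iterator types are illustrative, not chunkenc itself.

```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// iterator mirrors the shape of a chunk iterator that supports Seek.
type iterator struct {
	samples []sample
	i       int
}

func (it *iterator) Next() bool { it.i++; return it.i <= len(it.samples) }

func (it *iterator) At() (int64, float64) {
	s := it.samples[it.i-1]
	return s.t, s.v
}

// Seek advances the iterator to the first sample with timestamp >= t and
// reports whether such a sample exists.
func (it *iterator) Seek(t int64) bool {
	if it.i == 0 {
		if !it.Next() {
			return false
		}
	}
	for {
		if ts, _ := it.At(); ts >= t {
			return true
		}
		if !it.Next() {
			return false
		}
	}
}

func main() {
	it := &iterator{samples: []sample{{10, 1}, {20, 2}, {30, 3}}}
	if it.Seek(15) {
		fmt.Println(it.At()) // 20 2
	}
}
```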

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:54 +00:00
Thor 17d8c49919
made stripe size configurable (#6644)
Signed-off-by: Thor <thansen@digitalocean.com>
2020-01-30 12:42:43 +05:30
Brian Brazil 000e306f2f
Handle V1 indexes, some of which have unsorted posting offset tables. (#6564)
Fixes #6535

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-06 14:06:11 +00:00
Brian Brazil 536d416299
Fix tsdb code and tests to work on Windows. (#6547)
Add back Windows CI; we lost it when tsdb was merged into the prometheus
repo. There are many tests failing outside tsdb, so only tsdb is tested for
now.

Fixes #6513

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-04 14:55:02 +00:00
Brian Brazil 767fa704b6 Load only some symbol-table offsets into memory.
Rather than keeping the entire symbol table in memory, keep every nth
offset and walk from there to the entry we need. This ends up slightly
slower, ~360ms per 1M series returned from PostingsForMatchers, which is
not much considering the rest of the CPU such a query would go on to
use.

Make LabelValues use the postings tables, rather than having
to do symbol lookups. Use yoloString, as PostingsForMatchers
doesn't need the strings to stick around and adjust the API
call to keep the Querier open until it's all marshalled.

Remove the allocatedSymbols memory optimisation; we no longer keep all the
symbol strings in heap memory. Remove LabelValuesFor and LabelIndices,
as they're dead code. Ensure we still have tests for label indices,
and add a missing test that we can work with old V1 format index files.
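A sketch of the every-nth-offset idea: binary-search the sampled offsets, then walk forward to the entry we need. symbolTable and its fields are hypothetical, and the real code walks the on-disk table rather than an in-memory slice.

```go
package main

import (
	"fmt"
	"sort"
)

// symbolTable keeps only every nth offset of a sorted symbol list in memory
// and walks forward from the nearest kept offset on lookup.
type symbolTable struct {
	symbols []string // stands in for the on-disk table
	sampled []int    // indices of every nth symbol kept in memory
}

func newSymbolTable(symbols []string, n int) *symbolTable {
	t := &symbolTable{symbols: symbols}
	for i := 0; i < len(symbols); i += n {
		t.sampled = append(t.sampled, i)
	}
	return t
}

// Lookup returns the position of sym: a binary search over the sampled
// offsets followed by a short linear walk to the exact entry.
func (t *symbolTable) Lookup(sym string) (int, bool) {
	// Find the first sampled entry whose symbol is > sym, then start from
	// the sampled entry just before it.
	j := sort.Search(len(t.sampled), func(i int) bool {
		return t.symbols[t.sampled[i]] > sym
	})
	if j == 0 {
		j = 1
	}
	start := t.sampled[j-1]
	end := len(t.symbols)
	if j < len(t.sampled) {
		end = t.sampled[j]
	}
	for i := start; i < end; i++ {
		if t.symbols[i] == sym {
			return i, true
		}
	}
	return 0, false
}

func main() {
	t := newSymbolTable([]string{"alertname", "instance", "job", "le", "quantile"}, 2)
	fmt.Println(t.Lookup("job")) // 2 true
	fmt.Println(t.Lookup("zzz")) // 0 false
}
```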

PostingsForMatchers performance is slightly better, with a big drop in
allocation counts due to using yoloString for LabelValues:

benchmark                                                               old ns/op     new ns/op     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              36698         36681         -0.05%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      522786        560887        +7.29%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      511652        537680        +5.09%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     522102        564239        +8.07%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113689911     111795919     -1.67%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            135825572     132871085     -2.18%
BenchmarkPostingsForMatchers/Block/i=~""-4                              40782628      38038181      -6.73%
BenchmarkPostingsForMatchers/Block/i!=""-4                              31267869      29194327      -6.63%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112733329     111568823     -1.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       112868153     111232029     -1.45%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        31338257      29349446      -6.35%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                32054482      29972436      -6.50%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              136504654     133968442     -1.86%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             27960350      27264997      -2.49%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       136765564     133860724     -2.12%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     163714583     159453668     -2.60%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              6              6              +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     17             15             -11.76%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            100012         12             -99.99%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            200040         100040         -49.99%
BenchmarkPostingsForMatchers/Block/i=~""-4                              200045         100045         -49.99%
BenchmarkPostingsForMatchers/Block/i!=""-4                              200041         100041         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              100017         17             -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       100023         23             -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        200046         100046         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                200050         100050         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              200049         100049         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             111150         11150          -89.97%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       200055         100055         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     311238         111238         -64.26%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              296           296           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     552           1544          +179.71%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1600482       1606125       +0.35%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            17259065      17264709      +0.03%
BenchmarkPostingsForMatchers/Block/i=~""-4                              17259150      17264780      +0.03%
BenchmarkPostingsForMatchers/Block/i!=""-4                              17259048      17264680      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1600610       1606242       +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1600813       1606434       +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        17259176      17264808      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                17259304      17264936      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              17259333      17264965      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3142628       3148262       +0.18%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       17259509      17265141      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     20405680      20416944      +0.06%

However, overall Select performance is down and involves more allocations, because
resolving a symbol now requires more than a simple map lookup and all the strings
returned are allocated:

benchmark                                           old ns/op     new ns/op      delta
BenchmarkQuerierSelect/Block/1of1000000-4           506092636     862678244      +70.46%
BenchmarkQuerierSelect/Block/10of1000000-4          505638968     860917636      +70.26%
BenchmarkQuerierSelect/Block/100of1000000-4         505229450     882150048      +74.60%
BenchmarkQuerierSelect/Block/1000of1000000-4        515905414     862241115      +67.13%
BenchmarkQuerierSelect/Block/10000of1000000-4       516785354     874841110      +69.29%
BenchmarkQuerierSelect/Block/100000of1000000-4      540742808     907030187      +67.74%
BenchmarkQuerierSelect/Block/1000000of1000000-4     815224288     1181236903     +44.90%

benchmark                                           old allocs     new allocs     delta
BenchmarkQuerierSelect/Block/1of1000000-4           4000020        6000020        +50.00%
BenchmarkQuerierSelect/Block/10of1000000-4          4000038        6000038        +50.00%
BenchmarkQuerierSelect/Block/100of1000000-4         4000218        6000218        +50.00%
BenchmarkQuerierSelect/Block/1000of1000000-4        4002018        6002018        +49.97%
BenchmarkQuerierSelect/Block/10000of1000000-4       4020018        6020018        +49.75%
BenchmarkQuerierSelect/Block/100000of1000000-4      4200018        6200018        +47.62%
BenchmarkQuerierSelect/Block/1000000of1000000-4     6000018        8000019        +33.33%

benchmark                                           old bytes     new bytes     delta
BenchmarkQuerierSelect/Block/1of1000000-4           176001468     227201476     +29.09%
BenchmarkQuerierSelect/Block/10of1000000-4          176002620     227202628     +29.09%
BenchmarkQuerierSelect/Block/100of1000000-4         176014140     227214148     +29.09%
BenchmarkQuerierSelect/Block/1000of1000000-4        176129340     227329348     +29.07%
BenchmarkQuerierSelect/Block/10000of1000000-4       177281340     228481348     +28.88%
BenchmarkQuerierSelect/Block/100000of1000000-4      188801340     240001348     +27.12%
BenchmarkQuerierSelect/Block/1000000of1000000-4     304001340     355201616     +16.84%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 18:56:58 +00:00
Brian Brazil aff9f7a9e8 Extend PostingsForMatchers benchmark to cover Blocks too.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 19:59:31 +00:00
Krasimir Georgiev 549164f252
return err instead of panic for corrupted chunk (#6040)
* Fix tsdb panic when querying corrupted chunks:
check that the chunk segment has enough data to read all chunk pieces (see the sketch below).
* Refactor, simplify, and add tests.
* Simplify the WriteChunks implementation.
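A minimal sketch of the kind of bounds check involved, assuming a made-up <uvarint length><data><4-byte CRC> chunk layout; readChunk is illustrative, not the actual tsdb segment reader.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readChunk parses one chunk laid out as <uvarint length><data><4-byte CRC>
// starting at off, returning an error instead of panicking when the segment
// is truncated.
func readChunk(segment []byte, off int) ([]byte, error) {
	if off >= len(segment) {
		return nil, fmt.Errorf("chunk offset %d beyond segment size %d", off, len(segment))
	}
	length, n := binary.Uvarint(segment[off:])
	if n <= 0 {
		return nil, fmt.Errorf("reading chunk length at offset %d failed", off)
	}
	end := off + n + int(length) + 4 // data plus trailing CRC32
	if end > len(segment) {
		return nil, fmt.Errorf("segment doesn't include enough bytes to read the chunk - required: %d, available: %d", end, len(segment))
	}
	return segment[off+n : off+n+int(length)], nil
}

func main() {
	seg := []byte{3, 'a', 'b', 'c', 0, 0, 0, 0} // length=3, data, CRC placeholder
	fmt.Println(readChunk(seg, 0))              // [97 98 99] <nil>
	_, err := readChunk(seg[:5], 0)             // truncated: CRC missing
	fmt.Println(err)
}
```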

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
2019-12-04 09:37:49 +02:00
Tom Wilkie de0a772b8e Port tsdb to use pkg/labels. (#6326)
* Port tsdb to use pkg/labels.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Get tests passing.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Remove useless cast.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Appease linters.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-11-18 11:53:33 -08:00
Dipack P Panjabi ce7bab04dd Compute WAL size and use it during retention size checks (#5886)
* Compute WAL size and account for it when applying the retention settings.
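A small sketch of the idea in the bullet above, assuming WAL size is simply the sum of file sizes under the WAL directory; dirSize and exceedsRetention are illustrative helpers, not the actual tsdb code.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// dirSize walks dir and sums the sizes of regular files, which is roughly
// how a WAL directory's on-disk size can be computed.
func dirSize(dir string) (int64, error) {
	var size int64
	err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}

// exceedsRetention reports whether blocks plus WAL exceed the configured
// size-based retention limit - the point of accounting for the WAL at all.
func exceedsRetention(blocksBytes, walBytes, maxBytes int64) bool {
	return maxBytes > 0 && blocksBytes+walBytes > maxBytes
}

func main() {
	walBytes, err := dirSize(os.TempDir()) // stand-in for <data-dir>/wal
	if err != nil {
		fmt.Println("walk failed:", err)
		return
	}
	fmt.Println("WAL bytes:", walBytes)
	fmt.Println("over limit:", exceedsRetention(512<<20, walBytes, 1<<30))
}
```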

Signed-off-by: Dipack P Panjabi <dpanjabi@hudson-trading.com>
2019-11-12 09:40:16 +07:00
Bartek Plotka f0863a604e Removed extra tsdb/testutil after merge.
Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
2019-08-14 10:12:32 +01:00
Ganesh Vernekar 5ecef3542d
Cleanup after merging tsdb into prometheus
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 14:04:14 +05:30
Ganesh Vernekar 7cf09b0395
Moving tsdb into its own subdirectory
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 13:58:49 +05:30
Renamed from block_test.go