Commit graph

7 commits

Ganesh Vernekar 7cf09b0395
Moving tsdb into its own subdirectory
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 13:58:49 +05:30
Brian Brazil 89a90fe96c
Simplify mergedPostings.Seek (#595)
The current implementation leads to very slow behaviour when there are
many lists; this is now no worse than n log k, where n is the total
number of postings and k is the number of posting lists.
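A minimal, self-contained Go sketch of the heap-based Seek (illustrative names such as cursor and postingsHeap, not the actual tsdb types): only the list at the top of the heap is re-seeked and the heap re-fixed, rather than seeking every list on every call.

```go
package main

import (
	"container/heap"
	"fmt"
)

// cursor is one sorted postings list plus the current offset into it
// (an illustrative stand-in for a tsdb Postings iterator).
type cursor struct {
	ids []uint64
	pos int
}

func (c *cursor) at() uint64 { return c.ids[c.pos] }

// seek advances this single list to its first ID >= id.
func (c *cursor) seek(id uint64) bool {
	for c.pos < len(c.ids) && c.ids[c.pos] < id {
		c.pos++
	}
	return c.pos < len(c.ids)
}

// postingsHeap keeps the k lists ordered by their current ID.
type postingsHeap []*cursor

func (h postingsHeap) Len() int            { return len(h) }
func (h postingsHeap) Less(i, j int) bool  { return h[i].at() < h[j].at() }
func (h postingsHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *postingsHeap) Push(x interface{}) { *h = append(*h, x.(*cursor)) }
func (h *postingsHeap) Pop() interface{} {
	old := *h
	c := old[len(old)-1]
	*h = old[:len(old)-1]
	return c
}

// mergedPostings is a merged view over the lists in the heap.
type mergedPostings struct {
	h   postingsHeap
	cur uint64
}

// Seek advances the merged view to the first ID >= id. Only the list at
// the top of the heap is re-seeked before the heap is re-fixed, so a
// full scan stays within roughly n log k comparisons.
func (m *mergedPostings) Seek(id uint64) bool {
	for len(m.h) > 0 && m.h[0].at() < id {
		top := m.h[0]
		if top.seek(id) {
			heap.Fix(&m.h, 0) // the root's value changed; restore heap order
		} else {
			heap.Pop(&m.h) // this list has no ID >= id; drop it
		}
	}
	if len(m.h) == 0 {
		return false
	}
	m.cur = m.h[0].at()
	return true
}

func main() {
	h := postingsHeap{
		{ids: []uint64{1, 5, 9}},
		{ids: []uint64{2, 6, 20}},
	}
	heap.Init(&h)
	m := &mergedPostings{h: h}
	fmt.Println(m.Seek(7), m.cur) // true 9
}
```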

Adjust benchmark to catch this.

Remove flattening of without lists, not needed anymore.

Benchmark versus 0.4.0 (used in Prometheus 2.7):
```
benchmark                                                            old ns/op      new ns/op      delta
BenchmarkHeadPostingForMatchers/n="1"-8                              189907976      188863880      -0.55%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      113950106      110791414      -2.77%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      104965646      102388760      -2.45%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     138743592      104424250      -24.74%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            5279594954     5206096267     -1.39%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            8004610589     6184527719     -22.74%
BenchmarkHeadPostingForMatchers/i=~""-8                              2476042646     1003920432     -59.45%
BenchmarkHeadPostingForMatchers/i!=""-8                              7178244655     6059725323     -15.58%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              199342649      166642946      -16.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       215774683      167515095      -22.37%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        2214714769     392943663      -82.26%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                2148727410     322289262      -85.00%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              2170658009     338458171      -84.41%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             235720135      70597905       -70.05%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       2190570590     343034307      -84.34%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     2373784439     387297908      -83.68%

benchmark                                                            old allocs     new allocs     delta
BenchmarkHeadPostingForMatchers/n="1"-8                              33             33             +0.00%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      33             33             +0.00%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      33             33             +0.00%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     41             39             -4.88%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            56             56             +0.00%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            251577         100115         -60.21%
BenchmarkHeadPostingForMatchers/i=~""-8                              251123         100077         -60.15%
BenchmarkHeadPostingForMatchers/i!=""-8                              251525         100112         -60.20%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              42             39             -7.14%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       52             42             -19.23%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        251069         100101         -60.13%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                251473         100101         -60.19%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              250914         100102         -60.11%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             30038          11181          -62.78%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       250813         100105         -60.09%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     281503         111260         -60.48%

benchmark                                                            old bytes     new bytes     delta
BenchmarkHeadPostingForMatchers/n="1"-8                              10887600      10887600      +0.00%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      5456416       5456416       +0.00%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      5456416       5456416       +0.00%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     5456640       5456544       -0.00%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            258254504     258254472     -0.00%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            520126192     281554792     -45.87%
BenchmarkHeadPostingForMatchers/i=~""-8                              263446640     24908456      -90.55%
BenchmarkHeadPostingForMatchers/i!=""-8                              520121144     281553664     -45.87%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              7062448       7062272       -0.00%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       7063473       7062384       -0.02%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        274325656     35793776      -86.95%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                268926824     30362624      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              268882992     30363000      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             33193401      4269304       -87.14%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       268875024     30363096      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     300589656     33099784      -88.99%
```

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-05-13 10:51:07 +01:00
Brian Brazil 259847a6b1
Be smarter in how we look at matchers. (#572)
* Add unittests for PostingsForMatcher.

* Selector methods are all stateless, don't need a reference.

* Be smarter in how we look at matchers.

Look at all matchers to see if a label can be empty.

Optimise Not handling, so i!="2" is a simple lookup
rather than an inverse postings list.

Add all the Withouts together, rather than
having to subtract each from all postings.

Change the pre-expand-the-postings logic so that expansion always happens
before doing a Without, and only then. Don't pre-expand if it's already a list.

The initial goal here was that the oft-seen pattern
i=~"something.+",i!="foo",i!="bar" becomes more efficient.

```
benchmark                                                            old ns/op     new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"-4                              5888          6160          +4.62%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-4                      7190          6640          -7.65%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-4                      6038          5923          -1.90%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-4                     6030884       4850525       -19.57%
BenchmarkHeadPostingForMatchers/i=~".*"-4                            887377940     230329137     -74.04%
BenchmarkHeadPostingForMatchers/i=~".+"-4                            490316101     319931758     -34.75%
BenchmarkHeadPostingForMatchers/i=~""-4                              594961991     130279313     -78.10%
BenchmarkHeadPostingForMatchers/i!=""-4                              537542388     318751015     -40.70%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-4              10460243      8565195       -18.12%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-4       44964267      8561546       -80.96%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-4                42244885      29137737      -31.03%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-4              35285834      32774584      -7.12%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-4             8951047       8379024       -6.39%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-4       63813335      30672688      -51.93%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-4     45381112      44924397      -1.01%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-04-09 11:59:45 +01:00
Brian Brazil 62b652fbd0
Improve Merge performance (#531)
Use a heap for Next in merges, and
pre-compute when there are many postings on the
unset path.
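A compact Go sketch of the heap-based Next, using the same toy cursor/heap shape as the Seek sketch further up (illustrative types, not the tsdb ones): each emitted ID costs one advance of the heap's root plus a re-fix, i.e. O(log k) per ID instead of scanning all k lists.

```go
package main

import (
	"container/heap"
	"fmt"
)

// cursor tracks one sorted postings list and the current offset in it.
type cursor struct {
	ids []uint64
	pos int
}

// minHeap orders the cursors by their current ID, so Next on the merge
// is "read the root, advance it, re-fix the heap".
type minHeap []*cursor

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].ids[h[i].pos] < h[j].ids[h[j].pos] }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(*cursor)) }
func (h *minHeap) Pop() interface{} {
	old := *h
	c := old[len(old)-1]
	*h = old[:len(old)-1]
	return c
}

// mergeSorted merges k sorted postings lists into one, dropping duplicates.
func mergeSorted(lists ...[]uint64) []uint64 {
	h := minHeap{}
	for _, l := range lists {
		if len(l) > 0 {
			h = append(h, &cursor{ids: l})
		}
	}
	heap.Init(&h)

	var out []uint64
	for len(h) > 0 {
		c := h[0]
		id := c.ids[c.pos]
		if len(out) == 0 || out[len(out)-1] != id { // skip IDs present in several lists
			out = append(out, id)
		}
		c.pos++
		if c.pos == len(c.ids) {
			heap.Pop(&h) // this list is exhausted
		} else {
			heap.Fix(&h, 0) // the root advanced; restore heap order
		}
	}
	return out
}

func main() {
	fmt.Println(mergeSorted(
		[]uint64{1, 4, 9},
		[]uint64{2, 4, 7},
		[]uint64{3, 9, 12},
	)) // [1 2 3 4 7 9 12]
}
```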

Add posting lookup benchmarks

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-02-28 17:23:55 +00:00
Brian Brazil b2d7bbd6b1
Move series fetches out of inner loop of SortedPostings. (#485)
With 1M series:

```
Before:
BenchmarkHeadPostingForMatchers-8              1        3501996117 ns/op 61311520 B/op         78 allocs/op

After:
BenchmarkHeadPostingForMatchers-8              1        1403072952 ns/op 69261568 B/op         72 allocs/op
```

This works out as roughly 3X faster for the part that changed, since the time above also includes other work.
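A rough Go sketch of the change (toy types; the real code compares label sets read from the head): fetch every series once into a slice, then sort the prefetched slice, so the sort comparator never goes back to the head.

```go
package main

import (
	"fmt"
	"sort"
)

// series is a toy stand-in for a head series with its label set
// flattened to a string.
type series struct {
	id     uint64
	labels string
}

// lookup stands in for fetching a series by ID from the head; in the
// real code this is the expensive step, so calling it inside the sort
// comparator means O(n log n) fetches.
func lookup(head map[uint64]series, id uint64) series { return head[id] }

// sortedPostings fetches every series exactly once up front, then sorts
// the prefetched slice, so the comparator only touches in-memory fields.
func sortedPostings(head map[uint64]series, ids []uint64) []uint64 {
	fetched := make([]series, 0, len(ids))
	for _, id := range ids {
		fetched = append(fetched, lookup(head, id)) // one fetch per series, outside the sort
	}
	sort.Slice(fetched, func(i, j int) bool {
		return fetched[i].labels < fetched[j].labels
	})
	out := make([]uint64, len(fetched))
	for i, s := range fetched {
		out[i] = s.id
	}
	return out
}

func main() {
	head := map[uint64]series{
		1: {1, `{job="b"}`},
		2: {2, `{job="a"}`},
		3: {3, `{job="c"}`},
	}
	fmt.Println(sortedPostings(head, []uint64{1, 2, 3})) // [2 1 3]
}
```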

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-01-03 10:35:10 +00:00
Ganesh Vernekar a95323c021 Add license headers to missing files (#447)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2018-11-06 20:19:42 +02:00
Thomas Jackson b4132df5f7 Reduce allocations for queries on HEAD (#417)
Add some benchmarks for HEAD, and allocate the correct slice size in LabelValues, since we already know what it will be.

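A small Go sketch of the LabelValues pre-sizing (toy map-backed index, not the head's actual structures): the value count is known up front, so the result slice is allocated once at its final capacity instead of growing via append.

```go
package main

import (
	"fmt"
	"sort"
)

// labelValues returns all values seen for a label name. The number of
// values is already known from the per-name set, so the result slice is
// allocated once at the right size rather than repeatedly reallocated.
func labelValues(valueSets map[string]map[string]struct{}, name string) []string {
	set := valueSets[name]
	vals := make([]string, 0, len(set)) // right-sized up front
	for v := range set {
		vals = append(vals, v)
	}
	sort.Strings(vals)
	return vals
}

func main() {
	idx := map[string]map[string]struct{}{
		"job": {"prometheus": {}, "node": {}},
	}
	fmt.Println(labelValues(idx, "job")) // [node prometheus]
}
```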
This is a ~15% time improvement and a ~25% allocation improvement:


```
benchmark                             old ns/op     new ns/op     delta
BenchmarkHeadPostingForMatchers-4     74452         63514         -14.69%

benchmark                             old allocs     new allocs     delta
BenchmarkHeadPostingForMatchers-4     20             13             -35.00%

benchmark                             old bytes     new bytes     delta
BenchmarkHeadPostingForMatchers-4     5425          3137          -42.18%
```

Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
2018-10-22 13:52:01 +03:00