Commit graph

391 commits

Author SHA1 Message Date
Peter Štibraný 57daf79192 More review feedback. 2021-09-28 10:29:54 +02:00
Peter Štibraný db7fa7621c Use <ix+1>_of_<shardCount> formatting for better readability. 2021-09-28 10:13:48 +02:00
Peter Štibraný 861f9083d8 Fix directory cleanup in case of compaction failure. 2021-09-27 17:05:14 +02:00
Peter Štibraný ffd281ab9d Address feedback. 2021-09-27 16:33:43 +02:00
Peter Štibraný 58dab3de8a Source blocks are deletable only if they are ALL empty. 2021-09-27 16:24:46 +02:00
Peter Štibraný 336f4260db Removed commented code. 2021-09-27 14:34:04 +02:00
Peter Štibraný 006c2d7d55 All output blocks will have the same timestamp.
Minor updates.
2021-09-27 14:22:51 +02:00
Peter Štibraný 63dbb1c69a Make lint happy. 2021-09-27 12:49:08 +02:00
Peter Štibraný 78396b67dd Compactor with support for splitting input blocks into multiple output blocks. 2021-09-27 12:40:11 +02:00
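The splitting compactor above names each output block after its shard. A minimal Go sketch of that naming plus a hypothetical hash-based routing helper (the real compactor's routing logic is not shown in these commits):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardLabel renders the "<ix+1>_of_<shardCount>" form from the commit
// message, e.g. "3_of_8" for shard index 2 out of 8.
func shardLabel(ix, shardCount int) string {
	return fmt.Sprintf("%d_of_%d", ix+1, shardCount)
}

// shardForSeries is a hypothetical helper: route a series to one of
// shardCount output blocks by hashing its label set.
func shardForSeries(labels string, shardCount int) int {
	h := fnv.New32a()
	h.Write([]byte(labels))
	return int(h.Sum32()) % shardCount
}

func main() {
	series := []string{`{job="api",instance="a"}`, `{job="api",instance="b"}`, `{job="db"}`}
	const shardCount = 4
	for _, s := range series {
		ix := shardForSeries(s, shardCount)
		fmt.Printf("%s -> output block %s\n", s, shardLabel(ix, shardCount))
	}
}
```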
Ganesh Vernekar 84f8307aa1
Merge remote-tracking branch 'upstream/main' into codesome/syncprom
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-09-17 15:58:45 +05:30
Darshan Chaudhary 1f688657bf
Call delete on head if interval overlaps (#9151)
* Call delete on head if interval overlaps

Signed-off-by: darshanime <deathbullet@gmail.com>

* Garbage collect tombstones during head gc

Signed-off-by: darshanime <deathbullet@gmail.com>

* Truncate tombstones before min time during head gc

Signed-off-by: darshanime <deathbullet@gmail.com>

* Lock less by deleting all keys in a single pass

Signed-off-by: darshanime <deathbullet@gmail.com>

* Pass map to DeleteTombstones

Signed-off-by: darshanime <deathbullet@gmail.com>

* Create new slice to replace old one

Signed-off-by: darshanime <deathbullet@gmail.com>
2021-09-16 12:20:03 +05:30
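A minimal Go sketch of the "truncate tombstones before min time" step described above, using illustrative types rather than the actual tsdb/tombstones API:

```go
package main

import "fmt"

// Interval is a closed [Mint, Maxt] deletion range; the names are
// illustrative, not the actual tombstones types.
type Interval struct{ Mint, Maxt int64 }

// truncateBefore drops intervals entirely below mint and clips ones that
// straddle it, mirroring the idea of truncating tombstones during head GC.
func truncateBefore(ivs []Interval, mint int64) []Interval {
	kept := ivs[:0]
	for _, iv := range ivs {
		if iv.Maxt < mint {
			continue // fully covered by truncated data: drop it
		}
		if iv.Mint < mint {
			iv.Mint = mint // clip the part referring to truncated data
		}
		kept = append(kept, iv)
	}
	return kept
}

func main() {
	ivs := []Interval{{0, 50}, {40, 120}, {200, 300}}
	fmt.Println(truncateBefore(ivs, 100)) // [{100 120} {200 300}]
}
```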
Marco Pracucci 8708b2d086
Update upstream Prometheus
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-09-14 10:28:19 +02:00
Ganesh Vernekar 30534e99d9
Take snapshot only after closing the WAL (#9328)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-09-13 18:30:41 +05:30
Ganesh Vernekar 8944520ccc
Fix deletion of old snapshots (#9314)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-09-08 19:53:44 +05:30
Bryan Boreham 2327236bb5
Decrement active_appenders metric when no samples added (#9230)
* Decrement active_appenders metric when no samples added

Also add a test that the metric is incremented and decremented as
expected with and without samples.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Fix comment

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-09-08 14:49:58 +05:30
Bryan Boreham 87d909df4a
Remove symbols map from TSDB head (#9301)
This saves memory, effort and locking.

Since every symbol is also added to postings, `Symbols()` can be
implemented there instead. This now has to build a map for
deduplication, but `Symbols()` is only called for compaction, and `gc()`
used to rebuild the symbols map after every compaction, so this is not an
additional cost.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-09-08 14:48:48 +05:30
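A sketch of deriving `Symbols()` from postings with a deduplication map, as the commit describes; the postings layout here is illustrative, not the real MemPostings structure:

```go
package main

import (
	"fmt"
	"sort"
)

// symbolsFromPostings gathers every label name and value seen in an
// in-memory postings map and returns them deduplicated and sorted, which is
// the idea behind implementing Symbols() on top of postings instead of
// keeping a separate symbols map.
func symbolsFromPostings(postings map[string]map[string][]uint64) []string {
	seen := map[string]struct{}{}
	for name, values := range postings {
		seen[name] = struct{}{}
		for value := range values {
			seen[value] = struct{}{}
		}
	}
	out := make([]string, 0, len(seen))
	for s := range seen {
		out = append(out, s)
	}
	sort.Strings(out) // compaction expects symbols in sorted order
	return out
}

func main() {
	p := map[string]map[string][]uint64{
		"job":      {"api": {1, 2}, "db": {3}},
		"instance": {"api": {1}}, // "api" appears under two label names
	}
	fmt.Println(symbolsFromPostings(p)) // [api db instance job]
}
```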
Callum Styan 93886d8417
Fix div by 0 panic in resize function. (#9286)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2021-09-02 11:08:05 -07:00
Ganesh Vernekar 35b1a82594
Exemplars in snapshot (#9255)
* Exemplars in snapshot

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix lint

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Add docs

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix lint

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-30 19:34:38 +05:30
Levi Harrison 06afe6162c
Also ignore func1
Signed-off-by: Levi Harrison <git@leviharrison.dev>
2021-08-28 22:42:22 -04:00
Julien Pivotto d5676fb9e0
Merge pull request #9254 from prometheus/superq/go1.17
Build with Go 1.17 / npm 7 / node 16
2021-08-28 18:36:42 +02:00
SuperQ e167a45c65
Add new Go build tags.
Add new go:build comments based on 1.17 formatting[0].

[0]: https://golang.org/doc/go1.17#gofmt

Signed-off-by: SuperQ <superq@gmail.com>
2021-08-27 10:24:14 +02:00
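For reference, the paired constraint form that Go 1.17's gofmt produces and this commit adds across the tree; the constraint chosen here is just an example:

```go
//go:build linux && amd64
// +build linux,amd64

// This file only builds on linux/amd64. gofmt from Go 1.17 emits the new
// //go:build expression and keeps the legacy // +build line in sync for
// older toolchains.
package main

import "fmt"

func main() {
	fmt.Println("built with the linux && amd64 constraint")
}
```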
Callum Styan cc55e57c1b
Fix a data race in the loadWAL function caused by reusing the same error var in multiple goroutines (#9259)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2021-08-27 11:49:34 +05:30
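A sketch of the race class this commit fixes: several goroutines assigning to one shared error variable. The fix pattern shown below (one error slot per goroutine) is illustrative, not the actual loadWAL change:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// processAll gives each goroutine its own error slot instead of a shared
// err variable, then combines the results after Wait(). Assigning to a
// single shared error from multiple goroutines is a data race.
func processAll(parts []string) error {
	var wg sync.WaitGroup
	errs := make([]error, len(parts)) // one slot per goroutine, no sharing
	for i, p := range parts {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			if p == "" {
				errs[i] = errors.New("empty part")
			}
		}(i, p)
	}
	wg.Wait()
	return errors.Join(errs...)
}

func main() {
	fmt.Println(processAll([]string{"a", "", "c"}))
}
```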
Ganesh Vernekar 8a5d8c15e3
Do not replay checkpoint if it is covered by snapshot (#9226)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-25 21:48:55 +05:30
Marco Pracucci 6c38feef8e
Fixed BenchmarkLoadWAL
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-25 16:44:53 +02:00
Marco Pracucci 9537ca8b84
Merged upstream
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-25 16:30:05 +02:00
Marco Pracucci 175efada22
Avoid deadlock when processing duplicate series record (#9170) (#8)
* Avoid deadlock when processing duplicate series record

`processWALSamples()` needs to be able to send on its output channel
before it can read the input channel, so the caller now reads from the
output channel to allow this in case it is full.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* processWALSamples: update comment

Previous text seems to relate to an earlier implementation.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Co-authored-by: Bryan Boreham <bjboreham@gmail.com>
2021-08-24 09:01:04 +02:00
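A reduced Go sketch of the deadlock scenario and the drain-while-sending fix described above; channel names and buffer sizes are illustrative only:

```go
package main

import "fmt"

// The worker must be able to send on its output channel before it reads the
// next input, so the coordinator cannot block forever on sending input: it
// has to drain the output channel whenever that send would block.
func main() {
	input := make(chan int)     // unbuffered: worker takes one item at a time
	output := make(chan int, 1) // small buffer that fills quickly

	go func() {
		for v := range input {
			output <- v * v // must complete before the next input is read
		}
		close(output)
	}()

	results := make([]int, 0, 5)
	for i := 0; i < 5; i++ {
		sent := false
		for !sent {
			select {
			case input <- i:
				sent = true
			case r := <-output: // worker blocked on a full output: drain it
				results = append(results, r)
			}
		}
	}
	close(input)
	for r := range output { // collect whatever is still in flight
		results = append(results, r)
	}
	fmt.Println(results)
}
```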
Marco Pracucci fbe211d3a8
Do not break exported functions signatures (#6)
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-20 18:37:47 +02:00
Bryan Boreham 9dfdc3eb36
Speed up BenchmarkPostings_Stats (#9213)
The previous code re-used series IDs 1-1000 many times over, so a lot
of time was spent ensuring the lists of series were in ascending order.
The intended use of `MemPostings.Add()` is that all series IDs are
unique, and changing the benchmark to do this lets it finish ten times
faster.

(It doesn't affect the benchmark result, just the setup code)

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-08-18 10:27:21 +01:00
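A sketch of why unique, ascending series IDs keep postings maintenance cheap; the type below is illustrative, not the real MemPostings:

```go
package main

import "fmt"

// postingsList keeps IDs in ascending order. With unique, strictly
// increasing series IDs every Add is a cheap append; re-used or
// out-of-order IDs hit the slow insert path instead.
type postingsList struct {
	ids []uint64
}

func (p *postingsList) Add(id uint64) {
	n := len(p.ids)
	if n == 0 || id >= p.ids[n-1] {
		p.ids = append(p.ids, id) // fast path: ascending IDs just append
		return
	}
	// Slow path: find the insertion point and shift (linear for brevity).
	i := 0
	for i < n && p.ids[i] < id {
		i++
	}
	p.ids = append(p.ids, 0)
	copy(p.ids[i+1:], p.ids[i:])
	p.ids[i] = id
}

func main() {
	p := &postingsList{}
	for id := uint64(1); id <= 5; id++ {
		p.Add(id) // unique ascending IDs: pure appends, no shifting
	}
	fmt.Println(p.ids)
}
```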
Ganesh Vernekar 328a74ca36
Fix bugs and add enhancements to the chunk snapshot (#9185)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-17 18:08:16 +01:00
Marco Pracucci 481299f4a5
Added series hash cache support to TSDB (#5)
* Added series hash cache support to TSDB

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed imports grouping

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-17 13:31:08 +00:00
Marco Pracucci a3ad212658
Improve sharding tests and comments (#4)
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-13 15:16:03 +02:00
Jupiter 84ab705318
32 should be replaced by "symbolFactor" (#9203)
Signed-off-by: tanghengjian <1040104807@qq.com>
2021-08-13 16:38:53 +05:30
Marco Pracucci 84e786ebc1
Fixed Decoder.Series() error checking (#9201)
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-13 16:11:41 +05:30
Marco Pracucci 7b0d798332
Added sharding support to Querier.Select() (#1)
* Added sharding support to Querier.Select()

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Addressed review comments

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2021-08-13 12:37:41 +02:00
Julien Pivotto cab96a06ef
Merge release 2.29 in main (#9196)
* PromQL: Fix start and end keywords masking label and metric names

This commit fixes an issue with the "at modifier" that introduced two
new keywords: `start` and `end`. In grouping options and in metric
names, these keywords took precedence over metric or label names, so
that those metrics and labels could no longer be referenced.

Signed-off-by: Clayton Peters <clayton.peters@man.com>

* Add in additional tests for metrics and/or labels called start/end.

Signed-off-by: Clayton Peters <clayton.peters@man.com>

* *: Cut 2.29.0-rc.0

Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>

* VERSION: bump to 2.29.0-rc.0

Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>

* Remove experimental wording on size-based retention

Followup of #9004

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>

* Fix PR reference in changelog

Signed-off-by: George Brighton <george@gebn.co.uk>

* Describe EC2 availability zone IDs at most once per refresh (#9142)

Signed-off-by: George Brighton <george@gebn.co.uk>

* Describe EC2 availability zones at most once per SD load

Closes #9142.

Signed-off-by: George Brighton <george@gebn.co.uk>

* Incorporate feedback

Signed-off-by: George Brighton <george@gebn.co.uk>

* Integrate feedback

Signed-off-by: George Brighton <george@gebn.co.uk>

* Add a compatibility note for macOS users.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>

* *: Cut v2.29.0-rc.1

Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>

* Fix `kuma_sd` targetgroup reporting (#9157)

* Bundle all xDS targets into a single group

Signed-off-by: austin ce <austin.cawley@gmail.com>

* *: cut v2.29.0-rc.2

Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>

* Rename links

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* bump codemirror-promql to 0.17.0

Signed-off-by: Augustin Husson <husson.augustin@gmail.com>

* *: cut v2.29.0

Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>

* tsdb: align atomically accessed int64 (#9192)

This prevents a panic in 32-bit archs:
https://pkg.go.dev/sync/atomic#pkg-note-BUG

Fixed #9190

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>

* Release 2.29.1 (#9193)

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>

Co-authored-by: Clayton Peters <clayton.peters@man.com>
Co-authored-by: Frederic Branczyk <fbranczyk@gmail.com>
Co-authored-by: George Brighton <george@gebn.co.uk>
Co-authored-by: Austin Cawley-Edwards <austin.cawley@gmail.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Augustin Husson <husson.augustin@gmail.com>
2021-08-12 18:38:06 +02:00
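For the atomic-alignment fix in the merge above, the standard workaround looks like this; the struct and field names are hypothetical:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// On 32-bit platforms, 64-bit atomic operations panic unless the value is
// 64-bit aligned (see the BUG note in sync/atomic). The usual fix is to put
// the atomically accessed int64 first in the struct, since the first word
// of an allocated struct is guaranteed to be 64-bit aligned.
type counters struct {
	samplesAppended int64 // accessed with atomic.*Int64: keep it first
	name            string
}

func main() {
	c := &counters{name: "demo"}
	atomic.AddInt64(&c.samplesAppended, 3)
	fmt.Println(atomic.LoadInt64(&c.samplesAppended))
}
```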
Bryan Boreham 040ef175eb
Optimise WAL loading by removing extra map and caching min-time (#9160)
* BenchmarkLoadWAL: close WAL after use

So that goroutines are stopped and resources released

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* BenchmarkLoadWAL: make series IDs co-prime with #workers

Series are distributed across workers by taking the modulus of the
ID with the number of workers, so multiples of 100 are a poor choice.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* BenchmarkLoadWAL: simulate mmapped chunks

Real Prometheus cuts chunks every 120 samples, then skips those samples
when re-reading the WAL. Simulate this by creating a single mapped chunk
for each series, since the max time is all the reader looks at.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Fix comment

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Remove series map from processWALSamples()

The locks that the comment says reduce contention are now sharded
32,000 ways, so they won't be contended. Removing the map saves memory and
goes just as fast.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* loadWAL: Cache the last mmapped chunk time

So we can skip calling append() for samples it will reject.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Improvements from code review

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Full stops and capitals on comments

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Cache max time in both places mmappedChunks is updated

Including refactor to extract function `setMMappedChunks`, to reduce
code duplication.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Update head min/max time when mmapped chunks added

This ensures we have the correct values if no WAL samples are added for
that series.

Note that `mSeries.maxTime()` was always `math.MinInt64` before, since
that function doesn't consider mmapped chunks.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-08-10 14:53:31 +05:30
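A sketch combining two ideas from this commit: routing samples to workers by series ID modulo the worker count (hence benchmark IDs being made co-prime with it), and skipping WAL samples already covered by an mmapped chunk via a cached per-series max time. Types are illustrative:

```go
package main

import "fmt"

type sample struct {
	ref uint64 // series ID
	t   int64  // timestamp
}

func main() {
	const workers = 8
	// Cached max time of the last mmapped chunk for each series.
	mmappedMaxTime := map[uint64]int64{1: 1000, 2: 500}

	wal := []sample{{1, 900}, {1, 1100}, {2, 400}, {2, 600}, {3, 10}}
	perWorker := make([][]sample, workers)
	for _, s := range wal {
		if s.t <= mmappedMaxTime[s.ref] {
			continue // already covered by an mmapped chunk: skip the append
		}
		w := s.ref % workers // same modulus routing the loader uses
		perWorker[w] = append(perWorker[w], s)
	}
	for w, ss := range perWorker {
		if len(ss) > 0 {
			fmt.Printf("worker %d gets %v\n", w, ss)
		}
	}
}
```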
Bryan Boreham 7407457243
Avoid deadlock when processing duplicate series record (#9170)
* Avoid deadlock when processing duplicate series record

`processWALSamples()` needs to be able to send on its output channel
before it can read the input channel, so the caller now reads from the
output channel to allow this in case it is full.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* processWALSamples: update comment

Previous text seems to relate to an earlier implementation.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-08-10 13:32:42 +05:30
Ganesh Vernekar ee7e0071d1
Snapshot in-memory chunks on shutdown for faster restarts (#7229)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-06 17:51:01 +01:00
Ganesh Vernekar 848cb5a6d6
Enhanced WAL replay for duplicate series record (#7438)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-03 20:03:54 +05:30
Ganesh Vernekar 8002a3ab80
Breakdown tsdb/head.go into multiple files (#9147)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-03 14:14:26 +02:00
jinglina 1a430e5f89
remove redundant parentheses (#9134)
Signed-off-by: jinglina <jinglinax@163.com>
2021-07-29 18:26:57 +05:30
Darshan Chaudhary c4f2e9eec5
Add present_over_time (#9097)
* Add present_over_time

Signed-off-by: darshanime <deathbullet@gmail.com>

* Add tests for present_over_time

Signed-off-by: darshanime <deathbullet@gmail.com>

* Address PR comments

Signed-off-by: darshanime <deathbullet@gmail.com>

* Add documentation for present_over_time

Signed-off-by: darshanime <deathbullet@gmail.com>

* Update documentation

Signed-off-by: darshanime <deathbullet@gmail.com>

* Update documentation comment

Signed-off-by: darshanime <deathbullet@gmail.com>
2021-07-29 12:38:11 +02:00
Oleg Zaytsev f9482c5bf6
Clarify computeChunkEndTime's purpose (#9049)
I was struggling to understand the purpose of this method until I
tweaked the tests, so I decided to write down my observations.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
2021-07-28 18:39:05 +05:30
Bryan Boreham 60804c5a09
remote_write: reduce blocking from garbage-collect of series (#9109)
* Refactor: pass segment-reading function as param

To allow a different implementation to be used when garbage-collecting.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* remote_write: reduce blocking from GC of series

Add a method `UpdateSeriesSegment()` which is used together with
`SeriesReset()` to garbage-collect old series. This allows us to
split the lock around queueManager series data and avoid blocking
`Append()` while reading series from the last checkpoint.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Cosmetic: review feedback on comments

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* remote-write benchmark: include GC of series

Reduce the total number of samples per iteration from 5000*5000
(25 million) which is too big for my laptop, to 1*10000.

Extend `createTimeseries()` to add additional labels, so that the
queue manager is doing more realistic work.

Move the Append() call to a background goroutine - this works because
TestWriteClient uses a WaitGroup to signal completion.

Call `StoreSeries()` and `SeriesReset()` while adding samples, to
simulate the garbage-collection that wal.Watcher does.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Change BenchmarkSampleDelivery to call UpdateSeriesSegment

This matches what Watcher.garbageCollectSeries() is doing now

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-07-27 13:21:48 -07:00
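A sketch of the mark-and-sweep idea behind `UpdateSeriesSegment()` and `SeriesReset()`; the type below is illustrative, not the actual queue manager:

```go
package main

import (
	"fmt"
	"sync"
)

// seriesCache records the WAL segment each series was last seen in, then
// drops everything last seen before a given segment. Keeping this under its
// own mutex is what lets Append() avoid blocking while a checkpoint is read.
type seriesCache struct {
	mtx     sync.Mutex
	segment map[uint64]int // series ref -> last segment it was seen in
}

// updateSeriesSegment marks refs as seen in segment (the "mark" phase).
func (c *seriesCache) updateSeriesSegment(refs []uint64, segment int) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	for _, r := range refs {
		c.segment[r] = segment
	}
}

// seriesReset drops series last seen before segment (the "sweep" phase).
func (c *seriesCache) seriesReset(segment int) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	for r, s := range c.segment {
		if s < segment {
			delete(c.segment, r)
		}
	}
}

func main() {
	c := &seriesCache{segment: map[uint64]int{}}
	c.updateSeriesSegment([]uint64{1, 2, 3}, 4)
	c.updateSeriesSegment([]uint64{2, 3}, 5) // series 1 not seen in segment 5
	c.seriesReset(5)
	fmt.Println(c.segment) // map[2:5 3:5]
}
```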
Bryan Boreham dea37853d9
tsdb: use dennwc/varint to speed up WAL decoding (#9106)
* tsdb: use dennwc/varint to speed up decoding

This is a tiny library, MIT-licensed, which unrolls the loop to go
about twice as fast.

Needed to copy the sign-inverting logic inline, previously provided by
the `binary` package.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* More comments to explain varint decoding

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-07-27 10:02:57 +05:30
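The sign-inverting (zigzag) logic the commit copies inline is the same one the `binary` package uses internally; a sketch with the standard library, where dennwc/varint serves as a faster drop-in for the unsigned decode step:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readVarint reads an unsigned varint and folds it back into a signed
// value: even numbers map to positives, odd numbers to negatives.
func readVarint(b []byte) (int64, int) {
	ux, n := binary.Uvarint(b) // unsigned decode (the hot, unrollable part)
	x := int64(ux >> 1)        // zigzag: drop the sign bit
	if ux&1 != 0 {
		x = ^x // odd means the original value was negative
	}
	return x, n
}

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, -12345) // zigzag-encoded signed varint
	v, _ := readVarint(buf[:n])
	fmt.Println(v) // -12345
}
```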
Bryan Boreham 6788760efa
Reduce memory allocation in benchmarkIterator() (#5983)
Previously it was allocating millions of chunks, all containing the
same 250 samples.  Above some ratio of CPU performance to available
memory, the benchmark cannot run.

Make 250 a const and just allocate one chunk which we iterate
repeatedly till we reach the benchmark count.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-07-26 19:36:54 +05:30
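A sketch of the reduced-allocation pattern: one chunk of `samplesPerChunk` samples iterated repeatedly until `b.N` values have been consumed. The benchmark body is illustrative, not the actual benchmarkIterator:

```go
package main

import (
	"fmt"
	"testing"
)

// samplesPerChunk mirrors the "make 250 a const" idea from the commit.
const samplesPerChunk = 250

func benchmarkIterator(b *testing.B) {
	chunk := make([]float64, samplesPerChunk) // single reusable "chunk"
	for i := range chunk {
		chunk[i] = float64(i)
	}
	b.ResetTimer()
	var sum float64
	for consumed := 0; consumed < b.N; {
		for _, v := range chunk { // re-iterate the same chunk
			sum += v
			consumed++
			if consumed >= b.N {
				break
			}
		}
	}
	_ = sum
}

func main() {
	fmt.Println(testing.Benchmark(benchmarkIterator))
}
```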
Oleg Zaytsev b1ed4a0a66
LabelNames API with matchers (#9083)
* Push the matchers for LabelNames all the way into the index.

NB This doesn't actually implement it in the index, just plumbs it through for now...

Signed-off-by: Tom Wilkie <tom@grafana.com>

* Hack it up.  Does not work.

Signed-off-by: Tom Wilkie <tom@grafana.com>

* Revert changes I don't understand

Can't see why we need to hold a mutex on symbols, or what the purpose of
the LabelNamesFor method.

Maybe I'll need to re-add this later.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Implement LabelNamesFor

This method provides the label names that appear in the postings
provided. We do that deeper than the label values because we know
beforehand that most of the label names will be the same across
different postings, and we don't want to go down and up looking up the
same symbols for all the different series.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Mutex on symbols should be unlocked

However, I still don't understand why we need a mutex here.

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Fix head.LabelNamesFor

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Implement mockIndex LabelNames with matchers

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Nitpick on slice initialisation

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Add tests for LabelNamesWithMatchers

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Fix the mutex mess on head.LabelValues/LabelNames

I still don't see why we need to grab that unrelated mutex, but at least
now we're grabbing it consistently

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Check error after iterating postings

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Use the error from postings when there was an error in postings

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Update storage/interface.go comment

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

* Update tsdb/index/index.go comment

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

* Update tsdb/index/index.go wrapped error msg

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

* Update tsdb/index/index.go wrapped error msg

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

* Update tsdb/index/index.go wrapped error msg

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>

* Remove unneeded comment

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Add testcases for LabelNames w/matchers in api.go

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Use t.Cleanup() instead of defer in tests

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

Co-authored-by: Tom Wilkie <tom@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
2021-07-20 18:08:08 +05:30
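A sketch of the `LabelNamesFor` idea: collect the distinct label names used by the series a set of postings selects, relying on the fact that most series share the same handful of names. The series representation is illustrative:

```go
package main

import (
	"fmt"
	"sort"
)

// labelNamesFor returns the sorted, distinct label names of the series
// referenced by postings. A set keeps this cheap compared to resolving
// every label value for every series.
func labelNamesFor(series map[uint64]map[string]string, postings []uint64) []string {
	seen := map[string]struct{}{}
	for _, ref := range postings {
		for name := range series[ref] {
			seen[name] = struct{}{}
		}
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	series := map[uint64]map[string]string{
		1: {"__name__": "up", "job": "api"},
		2: {"__name__": "up", "job": "db", "instance": "db-1"},
	}
	fmt.Println(labelNamesFor(series, []uint64{1, 2})) // [__name__ instance job]
}
```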
Jupiter 7337ecf0d3
Log when total symbol size exceeds 2^32 bytes. (#9104)
* Compaction fails when total symbol size exceeds 2^32 bytes.
Signed-off-by: tanghengjian <1040104807@qq.com>

* Compaction fails when total symbol size exceeds 2^32 bytes.

Signed-off-by: tanghengjian <1040104807@qq.com>

* Compaction fails when total symbol size exceeds 2^32 bytes.

Signed-off-by: root <tanghengjian@oppo.com>

Co-authored-by: root <tanghengjian@oppo.com>
2021-07-20 15:51:36 +05:30
Ganesh Vernekar 59d02b5ef0
tsdb: Block Head GC till pending readers are done reading (#9081)
* tsdb: Block Head GC till pending readers are done reading

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix review comments 2

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix the exclusiveness of maxt in WaitForPendingReadersInTimeRange

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-07-20 14:17:20 +05:30
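A heavily reduced sketch of waiting for pending readers that overlap a time range before garbage-collecting it; the names mirror the commit message but the implementation is illustrative only (readers are never pruned here, and maxt is treated as inclusive):

```go
package main

import (
	"fmt"
	"sync"
)

type head struct {
	mtx     sync.Mutex
	pending []*reader
}

type reader struct {
	mint, maxt int64
	done       chan struct{}
}

// beginRead registers the time range a reader is about to touch.
func (h *head) beginRead(mint, maxt int64) *reader {
	r := &reader{mint: mint, maxt: maxt, done: make(chan struct{})}
	h.mtx.Lock()
	h.pending = append(h.pending, r)
	h.mtx.Unlock()
	return r
}

func (h *head) endRead(r *reader) { close(r.done) }

// waitForPendingReadersInTimeRange blocks until every reader overlapping
// [mint, maxt] has finished.
func (h *head) waitForPendingReadersInTimeRange(mint, maxt int64) {
	h.mtx.Lock()
	var overlapping []*reader
	for _, r := range h.pending {
		if r.mint <= maxt && mint <= r.maxt {
			overlapping = append(overlapping, r)
		}
	}
	h.mtx.Unlock()
	for _, r := range overlapping {
		<-r.done // block GC until this reader finishes
	}
}

func main() {
	h := &head{}
	r := h.beginRead(0, 100)
	go func() {
		fmt.Println("reader finishing")
		h.endRead(r)
	}()
	h.waitForPendingReadersInTimeRange(50, 150)
	fmt.Println("safe to garbage-collect the range")
}
```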
Martin Disibio 1bcd13d6b5
Exemplar resize (#8974)
* Create experimental circular buffer resize method, benchmarks

Signed-off-by: Martin Disibio <mdisibio@gmail.com>

* Optimize exemplar resize to only replay as many exemplars as needed

Signed-off-by: Martin Disibio <mdisibio@gmail.com>

* More comments, benchmark AddExemplar

Signed-off-by: Martin Disibio <mdisibio@gmail.com>

* optimizations

Signed-off-by: Martin Disibio <mdisibio@gmail.com>

* comment

Signed-off-by: Martin Disibio <mdisibio@gmail.com>

* Slight refactor of resize benchmark + make use of resize via runtime
reloadable storage config.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Some more config related changes.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Address some review comments.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Address more review comments.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Refactor to remove usage of noopExemplarStorage and avoid race condition
when resizing from Head code.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix or add comments to clarify some of the new behaviour.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* fix potential panics related to negative exemplar buffer lengths

Signed-off-by: Callum Styan <callumstyan@gmail.com>

Co-authored-by: Callum Styan <callumstyan@gmail.com>
2021-07-20 10:22:57 +05:30
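A sketch of resizing a circular exemplar buffer while replaying only the newest entries that fit the new size, as described above; types and field names are illustrative, not the actual exemplar storage:

```go
package main

import "fmt"

// ring is a fixed-size circular buffer of the most recent values.
type ring struct {
	buf   []int
	next  int // index the next write goes to
	count int // how many valid entries the buffer holds
}

func (r *ring) add(v int) {
	r.buf[r.next] = v
	r.next = (r.next + 1) % len(r.buf)
	if r.count < len(r.buf) {
		r.count++
	}
}

// resize replays only as many of the newest entries as fit the new size,
// oldest-to-newest, into a fresh buffer.
func (r *ring) resize(newSize int) {
	keep := r.count
	if keep > newSize {
		keep = newSize
	}
	newBuf := make([]int, newSize)
	for i := 0; i < keep; i++ {
		// Walk the last `keep` entries in their original order.
		idx := (r.next - keep + i + len(r.buf)) % len(r.buf)
		newBuf[i] = r.buf[idx]
	}
	r.buf, r.next, r.count = newBuf, keep%newSize, keep
}

func main() {
	r := &ring{buf: make([]int, 5)}
	for v := 1; v <= 7; v++ { // wraps around: buffer ends up holding 3..7
		r.add(v)
	}
	r.resize(3)
	fmt.Println(r.buf) // [5 6 7]: only the newest three were replayed
}
```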