Two issues are fixed here that lead to the same problem:
1. If `newSampleRing` is called with an unknown ValueType, including
   ValueNone, we have initialized the interface buffer (`iBuf`).
   However, we would still use a specialized buffer for the first
   sample, opportunistically assuming that we might not encounter
   mixed samples after all and should go down the more efficient
   road.
2. If the `sampleRing` is `reset`, we leave all buffers alone, which
   is generally fine, but not for `iBuf`; see below.
In both cases, `iBuf` already contains values, but we will fill one of
the specialized buffers first. Once we then actually encounter mixed
samples, the content of the specialized buffer is copied into `iBuf`
using `append`. That's by itself the right idea because `iBuf` might
be `nil`, and even if not, it might or might not have the right
capacity. However, this approach assumes that `iBuf` is empty or,
more precisely, has a length of zero.
This commit makes sure that `iBuf` does not get needlessly initialized
in `newSampleRing` and that it is emptied upon `reset`.
A test case is added to demonstrate both issues above.
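A minimal Go sketch of the `reset` side of the fix, assuming a heavily
simplified `sampleRing` (the specialized buffers and the real sample
types are elided):

    package main

    import "fmt"

    type sample interface{} // stand-in for the tsdb sample types

    type sampleRing struct {
        iBuf []sample // interface buffer, used once samples are mixed
        // specialized buffers (floats, histograms, ...) elided
    }

    // reset empties iBuf while keeping its capacity, restoring the
    // invariant that iBuf has length zero before the contents of a
    // specialized buffer are appended to it.
    func (r *sampleRing) reset() {
        r.iBuf = r.iBuf[:0]
    }

    func main() {
        r := &sampleRing{iBuf: []sample{1.0, 2.0}}
        r.reset()
        fmt.Println(len(r.iBuf), cap(r.iBuf)) // 0 2: empty, capacity kept
    }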
Signed-off-by: beorn7 <beorn@grafana.com>
* Remove NewPossibleNonCounterInfo until it can be made more efficient, and avoid creating empty annotations as much as possible
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
build(deps): bump github.com/prometheus/prometheus from 0.45.0 to 0.47.2 in /documentation/examples/remote_storage
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Improve promtool tsdb analyze
- Make it more suitable for variable-size float chunks.
- Add support for histogram chunks.
---------
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Additionally wrap WBL replay error
Although WBL replay is already wrapped with errLoadWbl,
there are other errors that can happen during a WBL replay.
We should not try to repair the WAL in those cases.
This commit additionally wraps the final error in Head.Init again
with errLoadWbl so that WBL replay errors can be identified properly.
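A hedged sketch of the wrapping pattern, with `errLoadWbl` and
`initHead` as simplified stand-ins for the real tsdb types:

    package main

    import (
        "errors"
        "fmt"
    )

    // errLoadWbl mirrors the wrapper type in tsdb (simplified here).
    type errLoadWbl struct{ err error }

    func (e *errLoadWbl) Error() string { return "load WBL: " + e.err.Error() }
    func (e *errLoadWbl) Unwrap() error { return e.err }

    // initHead stands in for Head.Init: any error escaping the WBL
    // replay is wrapped, so callers can tell it apart from WAL
    // corruption and skip the WAL repair path.
    func initHead(replayWBL func() error) error {
        if err := replayWBL(); err != nil {
            return &errLoadWbl{err}
        }
        return nil
    }

    func main() {
        err := initHead(func() error { return errors.New("replay failed") })
        var w *errLoadWbl
        fmt.Println(errors.As(err, &w)) // true: recognized as a WBL error
    }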
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvzpg@gmail.com>
Signed-off-by: Levi Harrison <git@leviharrison.dev>
We don't need the buffer to read the response until the scrape HTTP
call returns; creating it earlier means more buffers are in use at
once, which makes the buffer pool larger.
I split `scrape()` into `scrape()`, which returns once the HTTP
response arrives, and `readResponse()`, which decompresses and copies
the data into the supplied buffer. This design was chosen to minimize
impact on the logic.
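A sketch of the resulting shape, with simplified signatures; `scraper`
here is a hypothetical stand-in for the real target scraper, and
decompression is elided:

    package scrape

    import (
        "bytes"
        "io"
        "net/http"
    )

    type scraper struct {
        client *http.Client
        req    *http.Request
    }

    // scrape performs only the HTTP call; no read buffer is needed yet.
    func (s *scraper) scrape() (*http.Response, error) {
        return s.client.Do(s.req)
    }

    // readResponse copies (and, in the real code, also decompresses)
    // the body into buf, which the caller takes from the pool only
    // after scrape() has returned.
    func (s *scraper) readResponse(resp *http.Response, buf *bytes.Buffer) (int64, error) {
        defer resp.Body.Close()
        return io.Copy(buf, resp.Body)
    }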
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Reverts the change from https://github.com/prometheus/prometheus/pull/12906
The benchmarks show that it's slower when intersecting, which is a
common usage for ListPostings (when intersecting matchers from the
Head). Old is before #12906, new is with #12906:
                           │     old     │                new                 │
                           │   sec/op    │    sec/op     vs base              │
Intersect/LongPostings1-16   20.54µ ± 1%   21.11µ ± 1%    +2.76% (p=0.000 n=20)
Intersect/LongPostings2-16   51.03m ± 1%   52.40m ± 2%    +2.69% (p=0.000 n=20)
Intersect/ManyPostings-16    194.2m ± 3%   332.1m ± 1%   +71.00% (p=0.000 n=20)
geomean                       5.882m        7.161m       +21.74%
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
This was implicit, but should be explicit: it is invalid to call
`At()` after a failed call to `Next()` or `Seek()`.
Following up on https://github.com/prometheus/prometheus/pull/12906
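A sketch of the contract, with `Postings` trimmed down from the real
index.Postings interface (Seek() omitted; the same rule applies to it):

    package main

    import "fmt"

    // Postings is a trimmed-down version of the index.Postings
    // interface.
    type Postings interface {
        Next() bool
        At() uint64 // series reference
        Err() error
    }

    type listPostings struct {
        list []uint64
        cur  int
    }

    func (l *listPostings) Next() bool {
        l.cur++
        return l.cur <= len(l.list)
    }
    func (l *listPostings) At() uint64 { return l.list[l.cur-1] }
    func (l *listPostings) Err() error { return nil }

    // consume shows the contract the commit makes explicit: At() may
    // only be called while the most recent Next() (or Seek()) call
    // returned true.
    func consume(p Postings) ([]uint64, error) {
        var refs []uint64
        for p.Next() {
            refs = append(refs, p.At()) // valid: Next() just returned true
        }
        // Calling p.At() here would be invalid: the last Next()
        // returned false. Check the error instead.
        return refs, p.Err()
    }

    func main() {
        refs, _ := consume(&listPostings{list: []uint64{4, 8, 15}})
        fmt.Println(refs) // [4 8 15]
    }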
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Show group interval in Rules display
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
* Humanise interval
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
---------
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
Instead of setting it to nil and allocating a new slice every time
the merge is advanced, re-use the previous slice.
This is safe because the `currentSets` member is only used inside member
functions, and explicitly copied in `At()`, the only place it leaves the
struct.
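A minimal sketch of the pattern, with the element type and the merge
logic itself simplified away:

    package main

    import "fmt"

    type mergeIterator struct {
        currentSets []int // stand-in element type; the real member holds sets
    }

    // advance refills currentSets for the next step. Truncating to
    // length zero instead of assigning nil lets append reuse the
    // backing array, avoiding an allocation on every advance.
    func (m *mergeIterator) advance(next []int) {
        m.currentSets = m.currentSets[:0]
        m.currentSets = append(m.currentSets, next...)
    }

    // At returns a copy, so the reused backing array never leaves the
    // struct; that is what makes the reuse safe.
    func (m *mergeIterator) At() []int {
        out := make([]int, len(m.currentSets))
        copy(out, m.currentSets)
        return out
    }

    func main() {
        var m mergeIterator
        m.advance([]int{1, 2, 3})
        fmt.Println(m.At()) // [1 2 3]
    }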
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
`lset` escapes to the heap because it is passed through the
text-parser interface, so we can reduce garbage by hoisting it out of
the loop; that way a single allocation serves every series in a
scrape.
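A minimal sketch of the hoisting, with hypothetical stand-ins for the
parser and label types (the real code fills a labels.Labels value
through the textparse interface):

    package main

    import "fmt"

    // Label stands in for labels.Label.
    type Label struct{ Name, Value string }

    // parseMetric fills dst in place instead of returning a new slice,
    // so the caller controls the allocation.
    func parseMetric(line string, dst *[]Label) {
        *dst = append((*dst)[:0], Label{Name: "__name__", Value: line})
    }

    func main() {
        lines := []string{"up", "go_goroutines"}

        // Hoisted out of the loop: lset is allocated once and refilled
        // for each series instead of escaping to the heap on every
        // iteration.
        var lset []Label
        for _, line := range lines {
            parseMetric(line, &lset)
            fmt.Println(lset)
        }
    }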
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>