While empty buckets can make sense in the internal representation (by
joining spans that would otherwise need more overhead for separate
representation), there are no spans in the JSON rendering. Therefore,
the JSON should not contain any empty buckets, since any bucket not
included in the output counts as empty anyway.
This changes both the inefficient MarshalJSON implementation and the
jsoniter implementation.
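As a rough illustration of the intended rendering, here is a minimal sketch
with simplified local types (not the actual MarshalJSON or jsoniter code):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// bucket is a simplified stand-in for a histogram bucket as rendered in JSON.
type bucket struct {
	Boundaries   int
	Lower, Upper float64
	Count        float64
}

// marshalBuckets emits only non-empty buckets: a bucket that is absent from
// the JSON output is implicitly empty, so zero-count buckets add nothing.
func marshalBuckets(buckets []bucket) ([]byte, error) {
	out := make([][4]interface{}, 0, len(buckets))
	for _, b := range buckets {
		if b.Count == 0 {
			continue
		}
		out = append(out, [4]interface{}{b.Boundaries, b.Lower, b.Upper, b.Count})
	}
	return json.Marshal(out)
}

func main() {
	j, _ := marshalBuckets([]bucket{{0, 1, 2, 0}, {0, 2, 4, 3}})
	fmt.Println(string(j)) // [[0,2,4,3]]
}
```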
Signed-off-by: beorn7 <beorn@grafana.com>
If the zero threshold overlaps with the highest negative bucket and/or
the lowest positive bucket, that bucket's upper or lower boundary,
respectively, has to be adjusted. In valid histograms, only the highest
negative bucket and/or the lowest positive bucket can overlap with the
zero bucket. This is assumed in this code to simplify the checks.
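The adjustment, roughly, assuming the clamping happens while iterating the
buckets for rendering (simplified local types, not the actual histogram code):
```go
package histogramsketch

// bucketRange is a simplified stand-in for a rendered bucket's boundaries.
type bucketRange struct {
	Lower, Upper float64
}

// clampToZeroBucket adjusts a bucket that overlaps the zero bucket
// [-zeroThreshold, +zeroThreshold]: the highest negative bucket gets its
// upper boundary clamped, the lowest positive bucket its lower boundary.
func clampToZeroBucket(b bucketRange, zeroThreshold float64) bucketRange {
	if b.Upper <= 0 && b.Upper > -zeroThreshold {
		b.Upper = -zeroThreshold // highest negative bucket reaches into the zero bucket
	}
	if b.Lower >= 0 && b.Lower < zeroThreshold {
		b.Lower = zeroThreshold // lowest positive bucket reaches into the zero bucket
	}
	return b
}
```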
Signed-off-by: beorn7 <beorn@grafana.com>
* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir (typical replacements are sketched below)
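For reference, a small before/after sketch of the usual substitutions (not the full diff):
```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// ioutil.ReadAll(r)     -> io.ReadAll(r)
	data, _ := io.ReadAll(strings.NewReader("hello"))
	fmt.Println(len(data))

	// ioutil.ReadFile(name) -> os.ReadFile(name)
	// ioutil.WriteFile(...) -> os.WriteFile(...)
	// ioutil.TempDir(...)   -> os.MkdirTemp(...)

	// ioutil.ReadDir(dir) returned []os.FileInfo; os.ReadDir returns the
	// lighter []fs.DirEntry, which stats a file only when Info() is called.
	entries, _ := os.ReadDir(".")
	for _, e := range entries {
		fmt.Println(e.Name(), e.IsDir())
	}
}
```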
Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
This now even enables jsoniter marshaling of Points in an instant
query (which previously used the traditional JSON marshaling).
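The mechanism is jsoniter's type-encoder registration; a minimal sketch with a
simplified local Point type (the real code registers an encoder for
promql.Point and formats timestamps more carefully):
```go
package main

import (
	"fmt"
	"strconv"
	"unsafe"

	jsoniter "github.com/json-iterator/go"
)

// Point is a simplified stand-in for promql.Point: a millisecond timestamp
// and a float sample value.
type Point struct {
	T int64
	V float64
}

// marshalPointJSON streams a Point as [<unix seconds>, "<value>"], matching
// the shape of the traditional JSON rendering.
func marshalPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
	p := *(*Point)(ptr)
	stream.WriteArrayStart()
	stream.WriteFloat64(float64(p.T) / 1000)
	stream.WriteMore()
	stream.WriteString(strconv.FormatFloat(p.V, 'f', -1, 64))
	stream.WriteArrayEnd()
}

func main() {
	jsoniter.RegisterTypeEncoderFunc("main.Point", marshalPointJSON,
		func(unsafe.Pointer) bool { return false })
	out, _ := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(Point{T: 1500, V: 2.5})
	fmt.Println(string(out)) // [1.5,"2.5"]
}
```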
Signed-off-by: beorn7 <beorn@grafana.com>
If FlushAndShutdown is called with a full batchQueue, and Batch is then
called rather than the normal path of reading from the queue, a deadlock
can be encountered. Rather than have FlushAndShutdown block while holding
a lock, retry sending the batch every second.
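A sketch of that retry loop with simplified, hypothetical field names (not the
actual queue implementation): each attempt is non-blocking and the lock is
released between retries, so a concurrent Batch() call can still make progress.
```go
package remote

import (
	"sync"
	"time"
)

// queue is a simplified stand-in for the remote-write batch queue.
type queue struct {
	mtx        sync.Mutex
	batch      []interface{}
	batchQueue chan []interface{}
}

// FlushAndShutdown hands the pending batch to the consumer without holding
// the lock across a blocking channel send.
func (q *queue) FlushAndShutdown(done <-chan struct{}) {
	for q.tryEnqueueBatch(done) {
		time.Sleep(time.Second) // batchQueue full; retry in a second
	}
	close(q.batchQueue)
}

// tryEnqueueBatch reports whether there is still a batch left to send.
func (q *queue) tryEnqueueBatch(done <-chan struct{}) bool {
	q.mtx.Lock()
	defer q.mtx.Unlock()
	if len(q.batch) == 0 {
		return false
	}
	select {
	case q.batchQueue <- q.batch:
		q.batch = nil
		return false
	case <-done:
		return false
	default:
		return true // batchQueue is full right now
	}
}
```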
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
* Add a test appending samples at a variable rate
This test overflows the chunk created in memSeries: the total number of
samples in the (only) mmapped chunk is 29 instead of the 65565 appended
ones (65565 wraps a uint16 sample counter around to 29).
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Cut new chunk when rate prediction was wrong
When appending samples at a slow rate and then appending at a higher
rate, the prediction we made about when to cut a new chunk is no longer
valid. Sometimes this can even overflow the chunk, if more samples than a
uint16 can hold are appended.
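The resulting guard, roughly (samplesPerChunk is the 120-sample target per
head chunk, so the hard cap lands at 240):
```go
package tsdbsketch

// samplesPerChunk is the targeted number of samples per head chunk.
const samplesPerChunk = 120

// shouldCutNewChunk sketches the cut condition: start a new chunk either when
// the predicted chunk end time (nextAt) is reached, or when the prediction
// turned out wrong and the chunk already holds twice the target sample count,
// which also keeps the uint16 sample counter from ever overflowing.
func shouldCutNewChunk(t, nextAt int64, numSamples uint16) bool {
	return t >= nextAt || numSamples >= 2*samplesPerChunk
}
```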
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Improve comment on 2*samplesPerChunk
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Assert that all chunks have fewer than 240 samples
Also, trigger a new chunk at 240 samples, not at more than 240
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* tsdb/agent: Ignore duplicate exemplars
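Roughly, the appender now remembers the last exemplar per series and drops
exact repeats; a sketch with simplified local types (not the actual agent code):
```go
package agentsketch

// exemplar is a simplified local stand-in for Prometheus' exemplar type.
type exemplar struct {
	Labels string
	Value  float64
	Ts     int64
}

// appender sketches only the duplicate-exemplar check of the tsdb/agent appender.
type appender struct {
	lastExemplar map[uint64]exemplar // most recent exemplar per series ref
	pending      []exemplar
}

// appendExemplar ignores an exemplar identical to the one most recently
// appended for the same series, so duplicates never reach the WAL.
func (a *appender) appendExemplar(ref uint64, e exemplar) {
	if last, ok := a.lastExemplar[ref]; ok && last == e {
		return // duplicate of the previous exemplar for this series
	}
	a.lastExemplar[ref] = e
	a.pending = append(a.pending, e)
}
```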
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Make each exemplar unique in TestCommit
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Re-Trigger CI for Windows and UI-related steps
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Change test comment to properly re-trigger pipeline
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
* Defer Close() calls for test agent and segment reader
Signed-off-by: Paschalis Tsilias <paschalist0@gmail.com>
OTel Collector prints the following error when a target disappears:
```
2022-04-13T14:20:24.932-0400 warn scrape/scrape.go:1408 Stale append failed {"kind": "receiver", "name": "prometheus", "scrape_pool": "beep-boop", "target": "http://localhost:9090/metrics", "error": "transaction aborted"}
```
This `transaction aborted` error is returned by the custom appender that is
used by the collector when the context of the appender is cancelled:
b7bf11174e/receiver/prometheusreceiver/internal/otlp_transaction.go (L81-L82)
We call `endOfRunStaleness` after `sl.stop()`, which cancels `sl.ctx`.
The other `.Appender()` calls use `parentCtx` for the same reason.
This hasn't come up so far because Prometheus' own Appender implementation
simply ignores the context it is passed.
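A generic, runnable illustration of the pitfall (not the scrape-loop code
itself): once the loop context is cancelled by stop(), anything built from it
is rejected by a context-aware appender, while the parent context stays usable.
```go
package main

import (
	"context"
	"fmt"
)

func main() {
	parentCtx := context.Background()
	loopCtx, stop := context.WithCancel(parentCtx)

	stop() // analogous to sl.stop() cancelling sl.ctx before end-of-run staleness runs

	fmt.Println("loop ctx:", loopCtx.Err())     // context.Canceled -> "transaction aborted" in OTel's appender
	fmt.Println("parent ctx:", parentCtx.Err()) // <nil>, safe to use for the staleness append
}
```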
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
Storing the scrape cache and the target (which also contains that cache)
is apparently causing a huge memory increase. I think we might not control
the lifespan of the context enough, so old objects keep living in memory
for longer than needed.
Let's unblock the release and look for an alternative so that downstream
consumers can get access to that data.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* tsdb/agent: port grafana/agent#676
grafana/agent#676 fixed an issue where loading a WAL with multiple
segments may result in ref ID collisions. The starting ref ID for new
series should be set to the highest ref ID across all series records
from all WAL segments.
This fixes an issue where the starting ref ID was incorrectly set to the
highest ref ID found in the newest segment, which may not have any ref
IDs at all if no series records have been appended to it yet.
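A sketch of the corrected scan with simplified local types (the real code
walks the series records of every WAL segment during replay):
```go
package agentsketch

// seriesRecord is a simplified stand-in for a WAL series record.
type seriesRecord struct {
	Ref uint64
}

// highestRef returns the largest series ref found across all segments, so
// nextRef can start above every existing ref rather than only those present
// in the newest segment.
func highestRef(segments [][]seriesRecord) uint64 {
	var max uint64
	for _, segment := range segments {
		for _, rec := range segment {
			if rec.Ref > max {
				max = rec.Ref
			}
		}
	}
	return max
}
```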
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* tsdb/agent: update terminology (s/ref ID/nextRef)
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
This will make UI static files part of the release artefacts, for
consumption by downstream distributions.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>