I was bored on a train and spent some time trying to shave
some nanoseconds off Labels.Compare when running with stringlabels.
I would be ashamed to admit the real amount of time I spent on it.
The worst thing is, I can't really explain why this performs so
much better, and someone should re-run the benchmarks on their machine
to confirm that it's not something related to general relativity because
the train is moving. I also added some extra real-life benchmark cases
with longer labelsets (these aren't the longest we have in production,
but Kubernetes labelsets are fairly common in Prometheus, so I thought it
would be nice to have them).
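For illustration, one of the longer Kubernetes-style benchmark cases could look roughly like this; the label names and values below are made up for the example, not the exact ones added here:

```go
package labels_test

import (
	"testing"

	"github.com/prometheus/prometheus/model/labels"
)

// BenchmarkLabels_Compare_kubernetesStyle compares two longer, Kubernetes-like
// labelsets that differ only near the end, which is the expensive case.
func BenchmarkLabels_Compare_kubernetesStyle(b *testing.B) {
	a := labels.FromStrings(
		"__name__", "container_cpu_usage_seconds_total",
		"cluster", "prod-eu-west-1",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kubernetes-pods",
		"namespace", "default",
		"pod", "my-app-5d7f8c9b6d-x2m7q",
	)
	c := labels.FromStrings(
		"__name__", "container_cpu_usage_seconds_total",
		"cluster", "prod-eu-west-1",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kubernetes-pods",
		"namespace", "default",
		"pod", "my-app-5d7f8c9b6d-z9k4w", // differs only at the very end
	)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = labels.Compare(a, c)
	}
}
```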
My benchmarks show this diff:
goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/model/labels
                                         │     old     │                 new                 │
                                         │   sec/op    │    sec/op     vs base               │
Labels_Compare/equal                      5.898n ± 0%    5.875n ± 1%    -0.40% (p=0.037 n=10)
Labels_Compare/not_equal                  11.78n ± 2%    11.01n ± 1%    -6.54% (p=0.000 n=10)
Labels_Compare/different_sizes            4.959n ± 1%    4.906n ± 2%    -1.05% (p=0.050 n=10)
Labels_Compare/lots                       21.32n ± 0%    17.54n ± 5%   -17.75% (p=0.000 n=10)
Labels_Compare/real_long_equal            15.06n ± 1%    14.92n ± 0%    -0.93% (p=0.000 n=10)
Labels_Compare/real_long_different_end    25.20n ± 0%    24.43n ± 0%    -3.04% (p=0.000 n=10)
geomean                                   11.86n         11.25n         -5.16%
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Also remove myself :)
Arve has been doing a lot of maintenance on the upstream component
and also reviewing PRs on Prometheus for this OTLP ingest. I continue
to have less and less time for this, so I'd like to make Arve a maintainer
for OTLP Ingestion.
Signed-off-by: Goutham <gouthamve@gmail.com>
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently ignores that silently. This change adds a test for that and fixes the scrape loop so that:
* Only the first sample for each unique time series is appended
* Duplicated samples increment the prometheus_target_scrapes_sample_duplicate_timestamp_total metric
This allows one to identify such scrape jobs and targets.
Also fix some tests and benchmarks.
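A minimal sketch of the appending behaviour described above, using illustrative types rather than the actual scrape-loop code:

```go
package main

import "fmt"

// sample stands in for a scraped sample; in Prometheus the key would be the
// series' label set. These types are made up for the example.
type sample struct {
	seriesKey string
	ts        int64
	value     float64
}

// appendScrape appends only the first sample per unique series in a scrape and
// counts the rest as duplicates.
func appendScrape(samples []sample) (appended []sample, duplicates int) {
	seen := make(map[string]struct{}, len(samples))
	for _, s := range samples {
		if _, ok := seen[s.seriesKey]; ok {
			// This is where prometheus_target_scrapes_sample_duplicate_timestamp_total
			// would be incremented.
			duplicates++
			continue
		}
		seen[s.seriesKey] = struct{}{}
		appended = append(appended, s)
	}
	return appended, duplicates
}

func main() {
	got, dups := appendScrape([]sample{
		{seriesKey: `up{job="node"}`, ts: 1, value: 1},
		{seriesKey: `up{job="node"}`, ts: 1, value: 1}, // same series repeated in one scrape
	})
	fmt.Println(len(got), dups) // 1 1
}
```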
I have seen Prometheus instances misbehaving because of broken chunked remote
read requests.
In order to avoid OOMs when this happens, I propose to close the
queries used by the streamed remote read requests earlier.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Add the container_description.yml workflow to the repo file sync script.
* Skip sync if there is no Dockerfile.
* Fixup minor typo in container_description.yml.
Signed-off-by: SuperQ <superq@gmail.com>
Use a stack buffer to reduce memory allocations.
`Write(AppendQuote(AvailableBuffer()))` does not allocate or copy when
the buffer has sufficient space.
Also add a benchmark, with some refactoring.
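A small self-contained sketch of the standard-library pattern referred to above (not the changed Prometheus code itself):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

func main() {
	var b bytes.Buffer
	b.Grow(64) // ensure spare capacity so the append below stays in place

	s := `some "label" value`
	// AppendQuote appends the quoted string into the buffer's unused capacity
	// returned by AvailableBuffer (Go 1.21+); Write then only advances the
	// buffer's length, so no temporary string or extra copy is made.
	b.Write(strconv.AppendQuote(b.AvailableBuffer(), s))

	fmt.Println(b.String()) // "some \"label\" value"
}
```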
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The purpose of running with a previous Go version is to spot usage of
new language features; we don't need to intensively look for bugs.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Avoids possible false sharing between loops.
Plausibly there is no problem in the current code, but it's easy enough to write it more safely.
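For context, a generic illustration of avoiding false sharing via padding; this is not the code touched here, and the struct layout and sizes are made up for the example:

```go
package main

import (
	"fmt"
	"sync"
)

// paddedCounter keeps each counter on its own (typical) 64-byte cache line so
// goroutines writing neighbouring elements don't invalidate each other's lines.
type paddedCounter struct {
	n int64
	_ [56]byte // 8 bytes of counter + 56 bytes of padding = 64 bytes
}

func main() {
	const workers = 4
	counters := make([]paddedCounter, workers)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < 1_000_000; i++ {
				counters[w].n++ // each goroutine touches only its own cache line
			}
		}(w)
	}
	wg.Wait()

	var total int64
	for i := range counters {
		total += counters[i].n
	}
	fmt.Println(total) // 4000000
}
```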
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>