* add context cancellation check at get series iteration
* add warnings and closer on error
* add test
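A minimal sketch of the idea behind the first two bullets, using hypothetical names (seriesSet, streamSeries) rather than the real Prometheus remote-read types: check ctx.Err() on each iteration, log a warning, and close the underlying resource when bailing out.

```go
package example

import (
	"context"
	"io"
	"log"
)

// seriesSet is a hypothetical stand-in for a series iterator; it is not the
// real Prometheus storage.SeriesSet API.
type seriesSet interface {
	Next() bool
	Err() error
}

// streamSeries iterates the series set and checks for context cancellation on
// every step so a dropped client stops the work early. On cancellation it
// logs a warning and closes the supplied closer (e.g. the querier).
func streamSeries(ctx context.Context, ss seriesSet, closer io.Closer) error {
	for ss.Next() {
		if err := ctx.Err(); err != nil {
			log.Printf("warning: aborting series iteration: %v", err)
			if cerr := closer.Close(); cerr != nil {
				log.Printf("warning: closing querier: %v", cerr)
			}
			return err
		}
		// ... encode and send the current series here ...
	}
	return ss.Err()
}
```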
---------
Signed-off-by: Erlan Zholdubai uulu <erlanz@amazon.com>
After #13771, the list for specific parts of the codebase looks like
it is part of the "general maintainers" list. This commit makes things
clearer.
Signed-off-by: beorn7 <beorn@grafana.com>
Use `docker-repo-name` as the make target so that a repo Makefile can
override the common target implementation.
Signed-off-by: SuperQ <superq@gmail.com>
Since we skip repos that don't have a Dockerfile, we can force sync the
`.github/workflows/container_description.yml` config.
Signed-off-by: SuperQ <superq@gmail.com>
I was bored on a train and I spent some amount of time trying to shave
some nanoseconds off Labels.Compare when running with stringlabels.
I would be ashamed to admit the real amount of time I spent on it.
The worst thing is, I can't really explain why this is performing so
much better, and someone should re-run the benchmarks on their machine
to confirm that it's not something related to general relativity because
the train is moving. I also added some extra real-life benchmark cases
with longer labelsets (these aren't the longest we have in production,
but kubernetes labelsets are fairly common in Prometheus so I thought it
would be nice to have them).
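For illustration, a longer kubernetes-style case could look roughly like the sketch below; the label names and values are made up and are not the ones added in this change, but labels.FromStrings and labels.Compare are the real model/labels helpers.

```go
package labels_test

import (
	"testing"

	"github.com/prometheus/prometheus/model/labels"
)

// BenchmarkLabelsCompareKubernetesStyle is an illustrative case with a longer,
// kubernetes-style labelset; the values here are invented for the sketch.
func BenchmarkLabelsCompareKubernetesStyle(b *testing.B) {
	a := labels.FromStrings(
		"__name__", "container_cpu_usage_seconds_total",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kubernetes-cadvisor",
		"namespace", "default",
		"pod", "app-7d4b9c8f6d-x2x9z",
	)
	c := labels.FromStrings(
		"__name__", "container_cpu_usage_seconds_total",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kubernetes-cadvisor",
		"namespace", "default",
		"pod", "app-7d4b9c8f6d-x2x9a", // differs only at the very end
	)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = labels.Compare(a, c)
	}
}
```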
My benchmarks show this diff:
goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/model/labels
                                        │     old     │                new                │
                                        │   sec/op    │    sec/op     vs base             │
Labels_Compare/equal                      5.898n ± 0%   5.875n ± 1%    -0.40% (p=0.037 n=10)
Labels_Compare/not_equal                  11.78n ± 2%   11.01n ± 1%    -6.54% (p=0.000 n=10)
Labels_Compare/different_sizes            4.959n ± 1%   4.906n ± 2%    -1.05% (p=0.050 n=10)
Labels_Compare/lots                       21.32n ± 0%   17.54n ± 5%   -17.75% (p=0.000 n=10)
Labels_Compare/real_long_equal            15.06n ± 1%   14.92n ± 0%    -0.93% (p=0.000 n=10)
Labels_Compare/real_long_different_end    25.20n ± 0%   24.43n ± 0%    -3.04% (p=0.000 n=10)
geomean                                   11.86n        11.25n         -5.16%
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Also remove myself :)
Arve has been doing a lot of maintenance on the upstream component
and also reviewing PRs on Prometheus for this OTLP ingest. I continue
to have less and less time for this, so I'd like to make Arve a maintainer
for OTLP Ingestion.
Signed-off-by: Goutham <gouthamve@gmail.com>
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently silently ignores that. This change adds a test for that and fixes the scrape loop so that:
* Only first sample for each unique time series is appended
* Duplicated samples increment the prometheus_target_scrapes_sample_duplicate_timestamp_total metric
This allows one to identify such scrape jobs and targets; a rough sketch of the dedup idea follows below.
Also fix some tests and benchmarks.
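The sketch uses made-up names (sample, appendScraped) rather than the actual scrape-loop types: remember which series have already been appended in the current scrape and count repeats instead of appending them again.

```go
package example

// sample is a hypothetical, simplified scraped sample; labelsKey is a
// canonical string form of its label set.
type sample struct {
	labelsKey string
	ts        int64
	value     float64
}

// appendScraped appends only the first sample seen for each unique series in
// a single scrape; later occurrences are dropped and counted via onDuplicate,
// which would increment a counter such as
// prometheus_target_scrapes_sample_duplicate_timestamp_total.
func appendScraped(samples []sample, appendFn func(sample) error, onDuplicate func()) error {
	seen := make(map[string]struct{}, len(samples))
	for _, s := range samples {
		if _, ok := seen[s.labelsKey]; ok {
			onDuplicate()
			continue
		}
		seen[s.labelsKey] = struct{}{}
		if err := appendFn(s); err != nil {
			return err
		}
	}
	return nil
}
```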
I have seen Prometheus instances misbehaving because of broken chunked remote
read requests.
To avoid OOMs when this happens, I propose closing the
queries used by streamed remote read requests earlier.
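A minimal sketch of the intent, assuming a hypothetical streamAndClose helper rather than the real remote read handler: close each query's querier as soon as its streaming finishes or fails, instead of holding it until the whole request is done.

```go
package example

import "io"

// streamFn is a placeholder for the code that streams one query's results
// back to the client as chunked remote read frames.
type streamFn func() error

// streamAndClose streams a single query and closes its querier immediately
// afterwards (also on error), so the memory held by the querier is released
// before the remaining queries of the request are processed.
func streamAndClose(querier io.Closer, stream streamFn) (err error) {
	defer func() {
		if cerr := querier.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()
	return stream()
}
```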
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Add the container_description.yml workflow to the repo file sync script.
* Skip sync if there is no Dockerfile.
* Fixup minor typo in container_description.yml.
Signed-off-by: SuperQ <superq@gmail.com>