This adds support for the `filter` parameter of Consul's Catalog
[List Services][^1] API, introduced in Consul 1.14.x. The parameter
lets operators filter the Catalog server-side, before Prometheus
subscribes for updates. Operators can use this both to improve the
performance of Prometheus's Consul SD and to reduce the impact of
enumerating large catalogs.
[^1]: https://developer.hashicorp.com/consul/api-docs/v1.14.x/catalog
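For illustration, here is a minimal sketch of the same kind of server-side filtering done directly with Consul's Go API client; the filter expression is just an example:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// With a filter, Consul evaluates the expression server-side and
	// returns only matching services, so the client never has to
	// enumerate and discard the rest of a large catalog.
	services, _, err := client.Catalog().Services(&api.QueryOptions{
		Filter: `"metrics" in ServiceTags`, // example expression
	})
	if err != nil {
		log.Fatal(err)
	}
	for name, tags := range services {
		fmt.Println(name, tags)
	}
}
```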
Signed-off-by: Daniel Kimsey <dekimsey@protonmail.com>
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently ignores the duplicates silently. This change adds a test for that and fixes the scrape loop so that:
* Only the first sample for each unique time series is appended
* Duplicated samples increment the `prometheus_target_scrapes_sample_duplicate_timestamp_total` metric
This allows one to identify such scrape jobs and targets.
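A minimal sketch of the appending rule, with simplified stand-ins for the real scrape-loop types and metric wiring:

```go
package main

import "fmt"

// sample is a simplified stand-in for a scraped sample; hash
// identifies the sample's label set.
type sample struct {
	hash uint64
	t    int64
	v    float64
}

// appendDeduplicated appends only the first sample per unique time
// series and counts every duplicate, which the scrape loop reports via
// prometheus_target_scrapes_sample_duplicate_timestamp_total.
func appendDeduplicated(samples []sample, appendFn func(sample)) (duplicates int) {
	seen := make(map[uint64]struct{}, len(samples))
	for _, s := range samples {
		if _, dup := seen[s.hash]; dup {
			duplicates++ // the real code increments the counter metric here
			continue
		}
		seen[s.hash] = struct{}{}
		appendFn(s)
	}
	return duplicates
}

func main() {
	in := []sample{{hash: 1, t: 10, v: 1}, {hash: 1, t: 10, v: 2}, {hash: 2, t: 10, v: 3}}
	dups := appendDeduplicated(in, func(s sample) { fmt.Println("appended:", s) })
	fmt.Println("duplicates:", dups)
}
```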
Also fix some tests and a benchmark.
I have seen Prometheus instances misbehaving because of broken chunked
remote read requests.
To avoid OOMs when this happens, I propose closing the queries used by
streamed remote read requests earlier.
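A simplified sketch of the idea, with minimal stand-ins for the real storage interfaces:

```go
package remote

// Minimal stand-ins for the real storage interfaces; the actual code
// works with Prometheus's storage.Queryable and streams chunked
// remote read responses.
type querier interface{ Close() error }

type queryable interface {
	Querier(mint, maxt int64) (querier, error)
}

type query struct{ mint, maxt int64 }

// handleQueries closes each query's querier as soon as its streamed
// response is finished, instead of keeping every querier open until
// the whole remote read request is done. If a client breaks the
// chunked connection, at most one querier still holds resources.
func handleQueries(q queryable, queries []query, stream func(querier, query) error) error {
	for _, rq := range queries {
		if err := func() error {
			qr, err := q.Querier(rq.mint, rq.maxt)
			if err != nil {
				return err
			}
			defer qr.Close() // runs before the next query starts
			return stream(qr, rq)
		}(); err != nil {
			return err
		}
	}
	return nil
}
```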
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Add the container_description.yml workflow to the repo file sync script.
* Skip sync if there is no Dockerfile.
* Fixup minor typo in container_description.yml.
Signed-off-by: SuperQ <superq@gmail.com>
Use a stack buffer to reduce memory allocations.
`Write(AppendQuote(AvailableBuffer()))` does not allocate or copy when
the buffer has sufficient space.
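For example, with a `bufio.Writer` (a small standalone illustration of the pattern, not the code from this change):

```go
package main

import (
	"bufio"
	"os"
	"strconv"
)

func main() {
	w := bufio.NewWriter(os.Stdout)
	defer w.Flush()
	for _, s := range []string{"up", "scrape_duration_seconds"} {
		// AvailableBuffer returns an empty slice backed by the
		// writer's spare internal capacity; AppendQuote appends to
		// it, and the immediately following Write recognizes the
		// alias, so no allocation or copy happens when it fits.
		b := w.AvailableBuffer()
		b = strconv.AppendQuote(b, s)
		b = append(b, '\n')
		w.Write(b)
	}
}
```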
Also add a benchmark, with some refactoring.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The purpose of running with a previous Go version is to spot usage of
new language features; we don't need to look intensively for bugs.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Avoids possible false sharing between loops.
Plausibly there is no problem in the current code, but it's easy enough to write it more safely.
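For illustration, the usual defensive pattern, sketched standalone rather than taken from this change: give each goroutine its own cache-line-padded slot so neighboring writes cannot contend.

```go
package main

import (
	"fmt"
	"sync"
)

// paddedCounter occupies a full 64-byte cache line so that adjacent
// workers' counters never share one.
type paddedCounter struct {
	n uint64
	_ [56]byte // 8 bytes of counter + 56 bytes of padding = 64
}

func main() {
	const workers, iters = 4, 1_000_000
	counters := make([]paddedCounter, workers)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for j := 0; j < iters; j++ {
				counters[i].n++ // each goroutine touches only its own line
			}
		}(i)
	}
	wg.Wait()
	var total uint64
	for i := range counters {
		total += counters[i].n
	}
	fmt.Println(total)
}
```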
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This is a bit tough to explain, but I'll try:
`rate` & friends have a sophisticated extrapolation algorithm.
Usually, we extrapolate the result to the total interval specified in
the range selector. However, if the first sample within the range is
too far away from the beginning of the interval, or if the last sample
within the range is too far away from the end of the interval, we
assume the series has just started half a sampling interval before the
first sample or after the last sample, respectively, and shorten the
extrapolation interval correspondingly. We calculate the sampling
interval by looking at the average time between samples within the
range, and we define "too far away" as "more than 110% of that
sampling interval".
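For example, with a 5m range and samples arriving on average every
10s, a first sample more than 11s after the start of the range counts
as "too far away", and we then extrapolate only 5s (half the average
sampling interval) back from that first sample.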
However, if this algorithm leads to an extrapolated starting value
that is negative, we limit the start of the extrapolation interval to
the point where the extrapolated starting value is zero.
At least that was the intention.
What we actually implemented is the following: If extrapolating all
the way to the beginning of the total interval would lead to an
extrapolated negative value, we would only extrapolate to the zero
point as above, even if the algorithm above would have selected a
starting point that is just half a sampling interval before the first
sample and that starting point would not have an extrapolated negative
value. In other words: What was meant as a _limitation_ of the
extrapolation interval yielded a _longer_ extrapolation interval in
this case.
There is an exception to the case just described: If the increase of
the extrapolation interval exceeds 110% of the sampling interval, we
suddenly drop back to extrapolating by only half a sampling interval.
This behavior can be nicely seen in the `testcounter_zero_cutoff` test,
where the rate goes up all the way to 0.7 and then jumps back to 0.6.
This commit changes the behavior to what was (presumably) intended
from the beginning: The extension of the extrapolation interval is
only limited if actually needed to prevent extrapolation to negative
values, but the "limitation" never leads to _more_ extrapolation
anymore.
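Roughly, the corrected order of operations looks like this (a simplified sketch of the extrapolation logic, not the exact diff; names are illustrative):

```go
package promql

// extrapolationStart sketches the fixed logic for deciding how far
// before the first sample we extrapolate.
func extrapolationStart(
	durationToStart, averageDurationBetweenSamples, sampledInterval float64,
	isCounter bool, firstSampleValue, resultValue float64,
) float64 {
	extrapolationThreshold := averageDurationBetweenSamples * 1.1
	// Cap the extension first: if the first sample is too far from
	// the start of the range, extend only half an average sampling
	// interval back from it.
	if durationToStart >= extrapolationThreshold {
		durationToStart = averageDurationBetweenSamples / 2
	}
	// Only then apply the zero cutoff, which can now only shorten the
	// extrapolation interval, never lengthen it.
	if isCounter && resultValue > 0 && firstSampleValue >= 0 {
		durationToZero := sampledInterval * (firstSampleValue / resultValue)
		if durationToZero < durationToStart {
			durationToStart = durationToZero
		}
	}
	return durationToStart
}
```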
The difference is subtle, and probably it never bothered anyone.
However, if you calculate the rate of a classic histogram, the old
behavior might create non-monotonic histograms as a result (because of
the jumps you can see nicely in the old version of the
`testcounter_zero_cutoff` test). With this fix, that doesn't happen
anymore.
Signed-off-by: beorn7 <beorn@grafana.com>