This moves prometheus_ready to the web package and links it to the ready variable that decides whether HTTP requests should return 200 or 503.
This is a follow-up change from #10682.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
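A minimal sketch of the idea, with illustrative names rather than the exact code in the web package:

```go
package web

import (
	"net/http"
	"sync/atomic"

	"github.com/prometheus/client_golang/prometheus"
)

// Handler is a stand-in for the web handler; field and method names here are
// assumptions for illustration only.
type Handler struct {
	ready       atomic.Uint32 // 1 once the server can serve traffic
	readyMetric prometheus.Gauge
}

func NewHandler(reg prometheus.Registerer) *Handler {
	h := &Handler{
		readyMetric: prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "prometheus_ready",
			Help: "Whether Prometheus startup was fully completed and the server is ready for normal operation.",
		}),
	}
	reg.MustRegister(h.readyMetric)
	return h
}

// SetReady flips the readiness state and the exported metric together, so the
// readiness endpoint and prometheus_ready can never disagree.
func (h *Handler) SetReady(ready bool) {
	if ready {
		h.ready.Store(1)
		h.readyMetric.Set(1)
		return
	}
	h.ready.Store(0)
	h.readyMetric.Set(0)
}

func (h *Handler) readyHandler(w http.ResponseWriter, _ *http.Request) {
	if h.ready.Load() == 1 {
		w.WriteHeader(http.StatusOK)
		return
	}
	w.WriteHeader(http.StatusServiceUnavailable)
}
```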
When Prometheus starts it can take a long time before the WAL is replayed and it can do anything useful. While it's starting it still exposes metrics, so other Prometheus servers can scrape it.
We do have alerts that fire if any Prometheus server is not ingesting samples, and so far we've been dealing with instances that take a long time to start by adding a check on Prometheus process uptime. Relying on uptime isn't ideal because the time needed to start depends on the number of metrics scraped, and so on the amount of data in the WAL.
To help write better alerts it would be great if Prometheus exposed a metric that tells us it's fully started; that way any alert that is supposed to notify us about a runtime issue can filter out starting instances.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* promtool: support matchers when querying label values
Signed-off-by: Ben Ye <ben.ye@bytedance.com>
* address review comment
Signed-off-by: Ben Ye <ben.ye@bytedance.com>
During shutdown TSDB is stopped before the rule manager is stopped. Since TSDB shutdown can take a long time (minutes or tens of minutes) this keeps the rule manager running while parts of Prometheus are already stopped (most notably the scrape manager). This can cause false positive alerts to fire, mostly those that rely on absent() calls, since new sample appends stop while alert queries are still being evaluated.
Stop the rule manager before stopping TSDB and the scrape manager to avoid this problem.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
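Roughly the intended ordering; the real coordination happens via the actors in Prometheus' main program, so this is only a sketch:

```go
package example

import (
	"github.com/prometheus/prometheus/rules"
	"github.com/prometheus/prometheus/scrape"
	"github.com/prometheus/prometheus/tsdb"
)

// shutdown is an illustrative helper, not the actual main.go code.
func shutdown(ruleManager *rules.Manager, scrapeManager *scrape.Manager, db *tsdb.DB) error {
	// Stop rule evaluation first, so no alert queries run against a
	// storage that is no longer receiving new samples.
	ruleManager.Stop()

	// Then stop ingestion and finally close the TSDB, which may take
	// minutes while the head block is flushed.
	scrapeManager.Stop()
	return db.Close()
}
```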
Since Prometheus documentation is versioned, do not write down that a
specific function was added in Prom 2.0, for consistency.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
We know the maximum size of our map, so we can create it with that capacity and avoid extra allocations.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
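For illustration, the pattern is simply passing a capacity hint to make:

```go
package example

// labelSet is an illustrative example of the pattern: when the final size of
// a map is known up front, pass it as a capacity hint so the map does not
// have to grow and rehash while it is being filled.
func labelSet(names []string) map[string]struct{} {
	m := make(map[string]struct{}, len(names))
	for _, n := range names {
		m[n] = struct{}{}
	}
	return m
}
```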
This commit introduces a new metric to count the number of failed
requests to Linode's API when using Linode SD. Resolves #10672, inspired
by #10476.
_Note_: this doesn't count failures when polling the `/account/events`
endpoint, as a `401` there is how we determine whether the supplied token has
the API scopes needed to do event polling versus full refreshes each
interval.
Signed-off-by: TJ Hoplock <t.hoplock@gmail.com>
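A sketch of what such a counter looks like; the metric name and help text here are assumptions, not necessarily the ones added by this commit:

```go
package linode

import "github.com/prometheus/client_golang/prometheus"

// failuresCount tracks failed requests against the Linode API during service
// discovery refreshes.
var failuresCount = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "prometheus_sd_linode_failures_total",
	Help: "Number of Linode service discovery refresh failures.",
})

func init() {
	prometheus.MustRegister(failuresCount)
}

// recordFailure would be called wherever an API request (other than the
// /account/events poll) returns an error.
func recordFailure(err error) error {
	if err != nil {
		failuresCount.Inc()
	}
	return err
}
```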
On macOS, the TestTombstoneCleanRetentionLimitsRace test performs very
poorly. It takes more than a second to write out one block, and as it
writes 400 of them, we run into the 10-minute test timeout frequently.
While this doesn't fix the actual performance issue, breaking each
iteration into a subtest makes the test pass reliably (because each
iteration comfortably finishes in under a minute).
Related report: https://groups.google.com/g/prometheus-developers/c/jxQ6Ayg6VJ4/m/03H_DS9PDAAJ
Signed-off-by: Matthias Rampke <matthias@prometheus.io>
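The shape of the fix, heavily simplified:

```go
package tsdb

import (
	"fmt"
	"testing"
)

// Sketch only: the real test writes out blocks and checks for races; the
// point here is just that each iteration now runs as its own subtest.
func TestTombstoneCleanRetentionLimitsRace(t *testing.T) {
	const blocks = 400
	for i := 0; i < blocks; i++ {
		t.Run(fmt.Sprintf("block-%d", i), func(t *testing.T) {
			// ... write one block and exercise the retention/tombstone race ...
		})
	}
}
```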
Relax indentation to allow non-indented sequences. This helps with
generated YAML like the snmp_exporter config output.
Signed-off-by: SuperQ <superq@gmail.com>
"Labels is a sorted set of labels. Order has to be guaranteed upon
instantiation." says the comment, so fix all the tests that break this
rule.
For `BenchmarkLabelValuesWithMatchers()` and
`BenchmarkHeadLabelValuesWithMatchers()` the amount of work done changes
significantly if you put the labels in order, because all series refs
get neatly partitioned by the `tens` label, so I renamed the labels
to maintain the previous behaviour.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
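As an illustration of the invariant (assuming the model/labels package layout as of this change), a literal like the first one below breaks the ordering contract, while FromStrings sorts for you:

```go
package example

import "github.com/prometheus/prometheus/model/labels"

// Wrong: entries are not sorted by label name, violating the Labels contract.
var unsorted = labels.Labels{
	{Name: "job", Value: "node"},
	{Name: "instance", Value: "host:9100"},
}

// Right: sort the literal by hand, or build it with FromStrings, which sorts.
var sorted = labels.FromStrings("instance", "host:9100", "job", "node")
```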
In order to make the synced version of the golangci-lint workflow handle the
CGO dependency in snmp_exporter/generator, add some package dependencies.
Signed-off-by: SuperQ <superq@gmail.com>
* Send target and metadata cache in context (again)
The previous attempt was rolled back in #10590 due to memory issues.
`sl.parentCtx` and `sl.ctx` both had a copy of the cache and target info
in the previous attempt, and it was hard to pinpoint where the context
was being retained, causing the memory increase.
I've experimented a bunch in #10627 and figured out that this approach doesn't
cause a memory increase. Beyond that, just using this info in _any_ other context
causes a memory increase.
The change fixed a bunch of long-standing issues in the OTel Collector that the
community was waiting on, and a release is blocked on a few downstream distributions
of the OTel Collector waiting on a fix. I propose to merge this change in while
I investigate what is happening.
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* Gate the change behind a manager option
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
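Conceptually the gating looks something like this; the option and key names are illustrative, not the exact scrape-manager API:

```go
package scrape

import "context"

// Options is a stand-in for the scrape manager options.
type Options struct {
	// PassMetadataInContext injects the target and its metadata cache into
	// the scrape context; off by default so existing behaviour is unchanged.
	PassMetadataInContext bool
}

type ctxKey int

const targetMetadataKey ctxKey = iota

type targetWithMetadata struct {
	target   interface{} // the scraped target
	metadata interface{} // its metric metadata cache
}

// contextWithTargetMetadata only enriches the context when the option is set,
// so carrying this data in the context is strictly opt-in.
func contextWithTargetMetadata(ctx context.Context, opts Options, target, cache interface{}) context.Context {
	if !opts.PassMetadataInContext {
		return ctx
	}
	return context.WithValue(ctx, targetMetadataKey, targetWithMetadata{target: target, metadata: cache})
}
```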
* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir
Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
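The general pattern of the migration looks like this (illustrative, not a specific call site in this change):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func listDir(dir string) error {
	// os.ReadDir replaces ioutil.ReadDir and returns []fs.DirEntry instead
	// of []os.FileInfo.
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		// Name and IsDir are available directly on fs.DirEntry; call Info()
		// only when the full FileInfo (size, mod time, ...) is needed.
		info, err := e.Info()
		if err != nil {
			return err
		}
		fmt.Println(e.Name(), e.IsDir(), info.Size())
	}
	return nil
}

func main() {
	if err := listDir("."); err != nil {
		log.Fatal(err)
	}
}
```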
If FlushAndShutdown is called with a full batchQueue, and then Batch is
called rather than the normal path of reading from the queue, a deadlock
might be encountered. Rather than having FlushAndShutdown block while
holding a lock, retry sending the batch every second.
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
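The gist of the fix, sketched with simplified types rather than the actual remote-write queue (the real code holds a lock around this path, which is why blocking on the channel could deadlock):

```go
package remote

import "time"

// queue is a simplified stand-in for the remote-write batch queue.
type queue struct {
	batchQueue chan []interface{}
	batch      []interface{}
}

// FlushAndShutdown tries to enqueue the final batch, but instead of blocking
// indefinitely on a full channel it retries the send every second and also
// honours shutdown via done.
func (q *queue) FlushAndShutdown(done <-chan struct{}) {
	for len(q.batch) > 0 {
		select {
		case q.batchQueue <- q.batch:
			q.batch = nil // enqueued; nothing left to flush
		case <-time.After(time.Second):
			// Queue still full; loop and retry rather than blocking forever.
		case <-done:
			return
		}
	}
	close(q.batchQueue)
}
```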