Commit graph

29 commits

Author SHA1 Message Date
gotjosh c10186eeea
BUGFIX: Always mark the rule's restoration process as completed (#14048)
* BUGFIX: Always mark the rule's restoration process as completed

In https://github.com/prometheus/prometheus/pull/13980 I introduced a change to reduce the number of queries executed when we restore alert statuses.

With this, the querying semantics changed: we now need to go through all series before we enter the alert restoration loop. I missed the fact that exiting early when there are no rules to restore would lead to an incomplete restoration.

An alert being restored is used as a proxy for "we're now ready to write the `ALERTS`/`ALERTS_FOR_STATE` metrics", so as a result we weren't writing those series if we didn't restore anything the first time around.
---------

Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-05-03 14:23:46 +01:00
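A minimal sketch of the fix described in the commit above. The types and names (alertRule, restoreForState) are illustrative, not Prometheus' actual code; the point is that the rule is marked as restored even when the early-exit path is taken.

```
package rules

// Illustrative minimal types, not Prometheus' real ones.
type activeAlert struct{}

type alertRule struct {
	active   map[string]*activeAlert
	restored bool
}

func (r *alertRule) SetRestored(v bool) { r.restored = v }

// restoreForState sketches the fix: the restoration process is marked as
// completed even when there is nothing to restore, so the ALERTS /
// ALERTS_FOR_STATE series are still written on subsequent evaluations.
func restoreForState(r *alertRule, restored map[string]*activeAlert) {
	// Always mark the restoration process as completed, including on early return.
	defer r.SetRestored(true)

	if len(restored) == 0 {
		return // previously this early exit skipped SetRestored(true)
	}
	if r.active == nil {
		r.active = make(map[string]*activeAlert, len(restored))
	}
	for fp, a := range restored {
		r.active[fp] = a
	}
}
```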
gotjosh 379dec9d36
querier.Select cannot return a nil series set.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-30 13:09:30 +01:00
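As an illustration of the point above: storage.Querier.Select always returns a non-nil storage.SeriesSet, so callers iterate it and check Err() afterwards instead of nil-checking the result. A sketch of the calling pattern, not the actual call site:

```
package example

import (
	"context"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

// collectSeries drains a SeriesSet returned by querier.Select: iterate with
// Next/At and surface errors via Err afterwards; no nil check is needed.
func collectSeries(ctx context.Context, q storage.Querier, matchers ...*labels.Matcher) ([]storage.Series, error) {
	set := q.Select(ctx, false, nil, matchers...) // never nil
	var out []storage.Series
	for set.Next() {
		out = append(out, set.At())
	}
	return out, set.Err()
}
```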
gotjosh ccfafae36d
Rename QueryforStateSeries to QueryForStateSeries
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-30 12:19:18 +01:00
gotjosh 2de2fee035
Pre-allocate the result map for the series set beforehand with a size hint.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 19:10:34 +01:00
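A rough sketch of the idea in this commit and the "string representation of the labels" commit below: the series set is drained into a map keyed by each series' label string, and the map is pre-allocated using a size hint. seriesSetToMap is a hypothetical helper, not the renamed Prometheus function.

```
package example

import "github.com/prometheus/prometheus/storage"

// seriesSetToMap drains a SeriesSet into a map keyed by the string
// representation of each series' labels, pre-allocated with a size hint.
func seriesSetToMap(set storage.SeriesSet, sizeHint int) (map[string]storage.Series, error) {
	result := make(map[string]storage.Series, sizeHint) // pre-allocate using the hint
	for set.Next() {
		s := set.At()
		result[s.Labels().String()] = s
	}
	return result, set.Err()
}
```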
gotjosh 6cfc584308
- Add a changelog entry
- Improve variable name of the map produced by the series set

Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 19:02:47 +01:00
gotjosh fa75985c1c
Use the string representation of the labels instead of the hash
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 18:46:05 +01:00
gotjosh 276201598c
Fix tests and a bug with the series lookup logic.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 18:46:05 +01:00
gotjosh e6dcbd2e26
bug: nil-check the series set, not the error
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 18:46:05 +01:00
gotjosh 4daaa59c08
Rule Manager: Only query once per alert rule when restoring alert state
Prometheus restores alert state between restarts and updates. For each rule, it looks at the alerts that are meant to be active and then queries the `ALERTS_FOR_STATE` series for _each_ alert within the rules.

If the alert rule has 120 instances (or series), it'll execute the same query with slightly different labels.

This PR changes the approach so that we only query once per alert rule and then match the corresponding alert that we're about to restore against the series set. While the approach might use a bit more memory at start-up (if it even does), the restore process is only run once per restart, so I'd consider this a big win.

This builds on top of #13974

Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 18:46:05 +01:00
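A simplified sketch of the "query once per rule" approach described above, with hypothetical types and helpers; the real implementation lives in the rules package and differs in detail (e.g. how the extra ALERTS_FOR_STATE labels are handled).

```
package example

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

// activeAlert is a stand-in for an alert instance tracked by an alerting rule.
type activeAlert struct {
	labels labels.Labels
}

// restoreRule matches each active alert against a single ALERTS_FOR_STATE
// series set fetched once for the whole rule, instead of issuing one query
// per alert instance.
func restoreRule(set storage.SeriesSet, alerts []*activeAlert) error {
	// One pass over the series set, keyed by the labels' string form.
	seriesByLabels := make(map[string]storage.Series, len(alerts))
	for set.Next() {
		s := set.At()
		seriesByLabels[s.Labels().String()] = s
	}
	if err := set.Err(); err != nil {
		return err
	}

	for _, a := range alerts {
		// Label matching is simplified here for illustration.
		if s, ok := seriesByLabels[a.labels.String()]; ok {
			_ = s // restore the alert's "for" state from this series' last sample
		}
	}
	return nil
}
```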
gotjosh 5beb2fe005
Improve the metric description
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 15:24:35 +01:00
gotjosh 381a77ac1e
Rename the variable from `now` to `restoreStartTime` and introduce a log line to record the total time
Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-24 14:21:11 +01:00
gotjosh e7219e3d36
Rule Manager: Add rule_group_last_restore_duration_seconds to measure restore time per rule group
When a rule group changes or Prometheus is restarted, we need to ensure we restore the active alerts that were firing for the corresponding rules. To do that, Prometheus uses the `ALERTS_FOR_STATE` series to query the previous state and restore it. If a given rule has high cardinality (think 100s of 1000s of series), this process can take a bit of time. This is the first of a series of PRs to improve this problem, and I'd like to start with exposing the time it takes to restore a rule group as a gauge.

Signed-off-by: gotjosh <josue.abreu@gmail.com>
2024-04-23 09:57:08 +01:00
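A hedged sketch of how such a gauge could be registered and set with client_golang. The fully-qualified metric name and the rule_group label follow Prometheus' usual prometheus_rule_group_* naming, but the help text and surrounding code are illustrative.

```
package example

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// lastRestoreDuration mirrors the metric named in the commit title.
var lastRestoreDuration = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "prometheus_rule_group_last_restore_duration_seconds",
		Help: "Time it took to restore the alert state of a rule group (illustrative help text).",
	},
	[]string{"rule_group"},
)

// observeRestore times a restore and records the duration on the gauge.
func observeRestore(group string, restore func()) {
	restoreStartTime := time.Now()
	restore()
	lastRestoreDuration.WithLabelValues(group).Set(time.Since(restoreStartTime).Seconds())
}
```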
Łukasz Mierzwa 3bb27c33e9 Use consistent keys for logs
Rule warnings are logged with numDropped=N while every other component uses num_dropped=N:

```
notifier/notifier.go:		level.Warn(n.logger).Log("msg", "Alert batch larger than queue capacity, dropping alerts", "num_dropped", d)
notifier/notifier.go:		level.Warn(n.logger).Log("msg", "Alert notification queue full, dropping alerts", "num_dropped", d)
storage/remote/write_handler.go:		_ = level.Warn(h.logger).Log("msg", "Error on ingesting out-of-order exemplars", "num_dropped", outOfOrderExemplarErrs)
rules/group.go:				level.Warn(logger).Log("msg", "Error on ingesting out-of-order result from rule evaluation", "num_dropped", numOutOfOrder)
rules/group.go:				level.Warn(logger).Log("msg", "Error on ingesting too old result from rule evaluation", "num_dropped", numTooOld)
rules/group.go:				level.Warn(logger).Log("msg", "Error on ingesting results from rule evaluation with different value but same timestamp", "num_dropped", numDuplicates)
scrape/scrape.go:		level.Warn(sl.l).Log("msg", "Error on ingesting out-of-order samples", "num_dropped", appErrs.numOutOfOrder)
scrape/scrape.go:		level.Warn(sl.l).Log("msg", "Error on ingesting samples with different value but same timestamp", "num_dropped", appErrs.numDuplicates)
scrape/scrape.go:		level.Warn(sl.l).Log("msg", "Error on ingesting samples that are too old or are too far into the future", "num_dropped", appErrs.numOutOfBounds)
scrape/scrape.go:		level.Warn(sl.l).Log("msg", "Error on ingesting out-of-order exemplars", "num_dropped", appErrs.numExemplarOutOfOrder)
```

Rename numDropped to num_dropped for consistency.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-03-21 15:59:20 +00:00
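For illustration, the rename boils down to changing the log key at the rules call sites from numDropped to num_dropped; the message text and variable names below are placeholders, not the actual Prometheus log lines.

```
package example

import (
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

// warnDropped shows the key rename; message and count are placeholders.
func warnDropped(logger log.Logger, n int) {
	// Before: level.Warn(logger).Log("msg", "...", "numDropped", n)
	// After, consistent with the call sites quoted above:
	level.Warn(logger).Log("msg", "Rule evaluation results dropped", "num_dropped", n)
}
```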
machine424 f477e0539a
Move from golang.org/x/exp/slices to the standard library slices now that we only support Go >= 1.21
Prevent adding back golang.org/x/exp/slices.

Signed-off-by: machine424 <ayoubmrini424@gmail.com>
2024-02-28 14:54:53 +01:00
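This is essentially an import swap: the functionality used from golang.org/x/exp/slices is available in the standard library slices package as of Go 1.21. A trivial example:

```
package example

import "slices" // standard library since Go 1.21, replacing golang.org/x/exp/slices

// sortedCopy returns a sorted copy of xs using the standard library package.
func sortedCopy(xs []string) []string {
	out := slices.Clone(xs)
	slices.Sort(out)
	return out
}
```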
Bryan Boreham c0e36e6bb3 Standardise exemplar label as "trace_id"
This is consistent with the OpenTelemetry standard and with an example in OpenMetrics.

https://github.com/open-telemetry/opentelemetry-specification/blob/89aa01348139/specification/metrics/data-model.md#exemplars
https://github.com/OpenObservability/OpenMetrics/blob/138654493130/specification/OpenMetrics.md#exemplars-1

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-15 14:20:08 +00:00
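An example of the standardised label when attaching an exemplar with client_golang; the counter and trace ID handling here are illustrative, not the code touched by the commit.

```
package example

import "github.com/prometheus/client_golang/prometheus"

var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "example_requests_total",
	Help: "Illustrative counter for exemplar labelling.",
})

// recordWithExemplar attaches an exemplar using the standardised "trace_id"
// label rather than variants such as "traceID".
func recordWithExemplar(traceID string) {
	if adder, ok := requestsTotal.(prometheus.ExemplarAdder); ok {
		adder.AddWithExemplar(1, prometheus.Labels{"trace_id": traceID})
		return
	}
	requestsTotal.Inc()
}
```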
Marco Pracucci 5ee3fbe825
Decouple ruler dependency controller from concurrency controller
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-02-02 10:06:37 +01:00
Marco Pracucci cbbbd6e70a
Remove superfluous nil check in Group.metrics
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-01-29 10:21:57 +01:00
Marco Pracucci 046cd7599f
Introduced sequentialRuleEvalController
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-01-29 10:19:18 +01:00
Marco Pracucci 21a03dc018
Simplify the design to update the concurrency controller once the rule evaluation has completed
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2024-01-29 10:16:31 +01:00
Danny Kopping 7aa3b10c3f
Block until all rules, both sync & async, have completed evaluating
- Updated & added tests
- Review feedback nits
- Return empty map if not indeterminate
- Use highWatermark to track inflight requests counter
- Appease the linter
- Clarify feature flag

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:08:41 +01:00
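A minimal sketch, with made-up helper names, of the "block until all rules have completed" behaviour: concurrent evaluations are tracked with a sync.WaitGroup, and the group only finishes its iteration once both the synchronous and asynchronous paths are done.

```
package example

import "sync"

// evalRules runs rules that may be evaluated concurrently in goroutines
// tracked by a WaitGroup and everything else inline, then blocks until all
// evaluations, sync and async, have completed.
func evalRules(rules []func(), mayRunConcurrently func(i int) bool) {
	var wg sync.WaitGroup
	for i, eval := range rules {
		if mayRunConcurrently(i) {
			wg.Add(1)
			go func(eval func()) {
				defer wg.Done()
				eval()
			}(eval)
			continue
		}
		eval() // synchronous path
	}
	wg.Wait() // block until the asynchronous evaluations have finished too
}
```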
Danny Kopping f922534c4d
Refactoring for performance, and to allow controller to be overridden
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:08:41 +01:00
Danny Kopping 94cdfa30cd
Refactoring
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:08:41 +01:00
Danny Kopping 0dc7036db3
Optimising dependencies/dependents funcs to not produce new slices each request
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:08:41 +01:00
Danny Kopping e7758d187e
Refactor concurrency control
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:08:39 +01:00
Danny Kopping 940f83a540
Implementation
NOTE:
Rebased from main after refactor in #13014

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2024-01-29 10:07:15 +01:00
Charles Korn 9a8dbf06bc
Address PR feedback
Co-authored-by: Julien Pivotto <roidelapluie@o11y.eu>
Signed-off-by: Charles Korn <charleskorn@users.noreply.github.com>
2023-10-31 09:56:05 +11:00
Charles Korn 667a1efb04
Add trace ID to log lines emitted during rule evaluation
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-10-26 16:14:54 +11:00
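A sketch of the idea, assuming OpenTelemetry context propagation and a go-kit logger; Prometheus' actual helper may differ.

```
package example

import (
	"context"

	"github.com/go-kit/log"
	"go.opentelemetry.io/otel/trace"
)

// withTraceID decorates a logger with the trace ID carried in the context,
// if any, so that log lines emitted during rule evaluation can be correlated
// with traces.
func withTraceID(ctx context.Context, l log.Logger) log.Logger {
	if sc := trace.SpanContextFromContext(ctx); sc.IsValid() {
		return log.With(l, "trace_id", sc.TraceID().String())
	}
	return l
}
```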
Charles Korn fc132a4557
Use common logger instance to reduce duplication in Group.Eval()
Signed-off-by: Charles Korn <charles.korn@grafana.com>
2023-10-26 16:14:12 +11:00
Danny Kopping 498b836654
Refactoring manager.go into separate concerns
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
2023-10-21 11:11:11 +02:00