Up until this point, if a scrape was done with the protobuf format, Prometheus would always try to ingest native histograms, even with the feature flag disabled. This caused problems with other feature flags that depend on the protobuf format, like 'created-timestamp-zero-ingestion'. This commit decouples native histogram parsing from ingestion, making sure ingestion only happens when the 'native-histograms' feature flag is enabled.
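For illustration, a minimal sketch of the resulting behaviour, using hypothetical names rather than the actual scrape-loop code:

```go
package example

// decodedHistogram stands in for a native histogram decoded from the
// protobuf exposition format.
type decodedHistogram struct{}

// maybeIngestHistogram illustrates the decoupling: parsing has already
// happened upstream (so protobuf-dependent features keep working), and this
// gate only decides whether the parsed histogram is handed to storage.
func maybeIngestHistogram(enableNativeHistograms bool, h *decodedHistogram, appendFn func(*decodedHistogram) error) error {
	if h == nil || !enableNativeHistograms {
		return nil // parsed, but intentionally not ingested
	}
	return appendFn(h)
}
```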
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
Prometheus restores alert state between restarts and updates. For each rule, it looks at the alerts that are meant to be active and then queries the `ALERTS_FOR_STATE` series for _each_ alert within the rule.
If an alert rule has 120 instances (or series), it executes essentially the same query 120 times, each with slightly different labels.
This PR changes the approach so that we query only once per alert rule and then match each alert we're about to restore against the returned series set. While this approach might use a bit more memory at start-up (if at all), the restore process only runs once per restart, so I'd consider this a big win.
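As a rough sketch of the matching step (hypothetical types, not the actual rules code), the result of the single `ALERTS_FOR_STATE` query can be indexed by label set once, and each alert is then looked up in that index:

```go
package example

import (
	"fmt"
	"sort"
	"strings"
)

// storedSeries stands in for one series returned by the single
// ALERTS_FOR_STATE query for a rule.
type storedSeries struct {
	Labels map[string]string
	Value  float64 // e.g. the "active since" timestamp stored in the series
}

// indexByLabels builds a lookup table keyed by the full label set, so that
// restoring N alerts costs one query plus N map lookups instead of N queries.
func indexByLabels(set []storedSeries) map[string]storedSeries {
	idx := make(map[string]storedSeries, len(set))
	for _, s := range set {
		idx[labelKey(s.Labels)] = s
	}
	return idx
}

// labelKey produces a deterministic key from a label set.
func labelKey(lbls map[string]string) string {
	names := make([]string, 0, len(lbls))
	for name := range lbls {
		names = append(names, name)
	}
	sort.Strings(names)
	var b strings.Builder
	for _, name := range names {
		fmt.Fprintf(&b, "%s=%q,", name, lbls[name])
	}
	return b.String()
}
```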
This builds on top of #13974
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Currently, running promtool tsdb analyze with the --extended flag
causes an 'index out of range' error when run against a block
that does not contain any native histogram chunks.
This change ensures that promtool won't try to display data that doesn't exist.
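A simplified sketch of the kind of guard this implies (illustrative only, not the actual promtool code):

```go
package example

import "fmt"

// printNativeHistogramStats prints the --extended section for native
// histogram chunks only when there is data to show, so an empty slice is
// never indexed (which is what caused the 'index out of range' panic).
func printNativeHistogramStats(chunkSizes []int) {
	if len(chunkSizes) == 0 {
		return // block has no native histogram chunks; nothing to display
	}
	// Indexing like this is what would panic on an empty slice.
	fmt.Printf("native histogram chunks: %d, last chunk size: %d bytes\n",
		len(chunkSizes), chunkSizes[len(chunkSizes)-1])
}
```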
Signed-off-by: Will Hegedus <whegedus@linode.com>
When a rule group changes or Prometheus is restarted, we need to restore the active alerts that were firing for the corresponding rules. For that, Prometheus queries the `ALERTS_FOR_STATE` series to retrieve the previous state and restore it. If a given rule has high cardinality (think hundreds of thousands of series), this process can take a while. This is the first of a series of PRs to improve the situation, and I'd like to start by exposing the time it takes to restore a rule group as a gauge.
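A minimal sketch of how such a gauge could be exposed with client_golang; the metric name and label here are illustrative, not necessarily what the PR adds:

```go
package example

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// groupRestoreDuration records how long the last alert-state restore took
// for each rule group. The metric name is illustrative only.
var groupRestoreDuration = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "rule_group_last_restore_duration_seconds",
		Help: "Time it took to restore alert state for a rule group.",
	},
	[]string{"rule_group"},
)

// timedRestore wraps the restore of a single rule group and sets the gauge.
func timedRestore(group string, restore func()) {
	start := time.Now()
	restore()
	groupRestoreDuration.WithLabelValues(group).Set(time.Since(start).Seconds())
}
```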
Signed-off-by: gotjosh <josue.abreu@gmail.com>
This commit adds 2 new metadata labels for the endpointslice role:
* `__meta_kubernetes_endpointslice_endpoint_node_name`
* `__meta_kubernetes_endpointslice_endpoint_zone`
The latter is only present when the `discovery.k8s.io/v1` API group is
available.
I also updated the configuration documentation and added an entry for the
`__meta_kubernetes_endpointslice_endpoint_hostname` label, which was
missing.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
Use it in loadDataAsQueryable to make sure the RO Head doesn't truncate or cut new chunks in data/chunks_head/.
Add a -sandbox-dir-root flag to "promtool tsdb dump/dump-openmetrics" to control the root of that sandbox directory.
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
In a previous PR, the generated parser was created using an old version of goyacc.
This also adds -l to disable line directives, which fixes debug processing and reduces diffs, at the expense of making the generated output harder to reason about.
Signed-off-by: Owen Williams <owen.williams@grafana.com>
Includes Inf and NaN as numbers in the histogram.
---------
Signed-off-by: Neeraj Gartia <neerajgartia211002@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
Thanos can create and destroy TSDBs dynamically, and once a TSDB
disappears, its files are deleted. Calculating the size of the
WAL then fails with errors like:
```
msg: "Failed to calculate size of "wal" dir", "err": "lstat
/tsdbdir/wal: no such file or directory", "caller": "wlog.go:271"
```
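A sketch of the kind of handling this suggests (illustrative only, not the exact change): treat a directory that has vanished as having size zero instead of surfacing the lstat error.

```go
package example

import (
	"errors"
	"io/fs"
	"os"
	"path/filepath"
)

// dirSize returns the total size of files under dir, treating a directory
// that disappeared (e.g. a dynamically deleted TSDB) as empty instead of
// returning an error.
func dirSize(dir string) (int64, error) {
	var size int64
	err := filepath.Walk(dir, func(_ string, info fs.FileInfo, err error) error {
		if err != nil {
			if errors.Is(err, os.ErrNotExist) {
				return nil // the file or directory vanished; skip it
			}
			return err
		}
		if !info.IsDir() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}
```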
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>