So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram-enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both representations are ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.
This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.
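For illustration, a minimal scrape config using the new option could look
like this (job name and target are placeholders):
```
scrape_configs:
  - job_name: "my-app"                # placeholder job name
    static_configs:
      - targets: ["localhost:8080"]   # placeholder target
    # Also ingest the classic _bucket/_sum/_count series for histograms
    # that expose both classic and native buckets.
    scrape_classic_histograms: true
```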
Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:
1. Series created from classic _gauge_ histograms didn't get the
_sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.
Signed-off-by: beorn7 <beorn@grafana.com>
Introduces support for a new query parameter in the `/rules` API endpoint that allows filtering by rule names.
If all the rules of a group are filtered out, the group is skipped entirely.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
In other words: Instead of having a “polymorphous” `Point` that can
either contain a float value or a histogram value, use an `FPoint` for
floats and an `HPoint` for histograms.
This seemingly small change has a _lot_ of repercussions throughout
the codebase.
The idea here is to avoid the increase in size of `Point` arrays that
happened after native histograms had been added.
The higher-level data structures (`Sample`, `Series`, etc.) are still
“polymorphous”. The same idea could be applied to them, but at each
step the trade-offs would need to be evaluated.
The idea with this change is to do the minimum necessary to get back
to pre-histogram performance for functions that do not touch
histograms. Here are comparisons for the `changes` function. The test
data doesn't include histograms yet. Ideally, there would be no change
in the benchmark result at all.
First, the runtime of v2.39 compared to directly prior to this commit:
```
name old time/op new time/op delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16 391µs ± 2% 542µs ± 1% +38.58% (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16 452µs ± 2% 617µs ± 2% +36.48% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16 1.12ms ± 1% 1.36ms ± 2% +21.58% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16 7.83ms ± 1% 8.94ms ± 1% +14.21% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16 2.98ms ± 0% 3.30ms ± 1% +10.67% (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16 3.66ms ± 1% 4.10ms ± 1% +11.82% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16 10.5ms ± 0% 11.8ms ± 1% +12.50% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16 77.6ms ± 1% 87.4ms ± 1% +12.63% (p=0.000 n=9+9)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16 30.4ms ± 2% 32.8ms ± 1% +8.01% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16 37.1ms ± 2% 40.6ms ± 2% +9.64% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16 105ms ± 1% 117ms ± 1% +11.69% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16 783ms ± 3% 876ms ± 1% +11.83% (p=0.000 n=9+10)
```
And then the runtime of v2.39 compared to after this commit:
```
name old time/op new time/op delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16 391µs ± 2% 547µs ± 1% +39.84% (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16 452µs ± 2% 616µs ± 2% +36.15% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16 1.12ms ± 1% 1.26ms ± 1% +12.20% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16 7.83ms ± 1% 7.95ms ± 1% +1.59% (p=0.000 n=10+8)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16 2.98ms ± 0% 3.38ms ± 2% +13.49% (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16 3.66ms ± 1% 4.02ms ± 1% +9.80% (p=0.000 n=10+9)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16 10.5ms ± 0% 10.8ms ± 1% +3.08% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16 77.6ms ± 1% 78.1ms ± 1% +0.58% (p=0.035 n=9+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16 30.4ms ± 2% 33.5ms ± 4% +10.18% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16 37.1ms ± 2% 40.0ms ± 1% +7.98% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16 105ms ± 1% 107ms ± 1% +1.92% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16 783ms ± 3% 775ms ± 1% -1.02% (p=0.019 n=9+9)
```
In summary, the runtime doesn't really improve with this change for
queries with just a few steps. For queries with many steps, this
commit essentially reinstates the old performance. This is good
because the many-step queries are the ones that matter most (longest
absolute runtime).
In terms of allocations, though, this commit doesn't make a dent at
all (numbers not shown). The reason is that most of the allocations
happen in the sampleRingIterator (in the storage package), which has
to be addressed in a separate commit.
Signed-off-by: beorn7 <beorn@grafana.com>
This makes it more consistent with other commands like import rules. We
don't have strict rules and uniformity across promtool unfortunately,
but I think it's better to only have the http config on relevant check
commands to avoid suggesting that Prometheus can e.g. check the config
over the wire.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Correct statement in docs about query results returning either floats or histograms but not both.
* Move documentation for range and instant vectors under their corresponding headings.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
This commit adds a new 'keep_firing_for' field to Prometheus alerting
rules. The 'keep_firing_for' field specifies the minimum amount of time
that an alert should remain firing, even if the expression does not
return any results.
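As an illustration, such a rule could look like the sketch below (alert
name, expression, and durations are placeholders; the YAML spelling
'keep_firing_for' is assumed here):
```
groups:
  - name: example
    rules:
      - alert: HighErrorRate                               # placeholder name
        expr: rate(http_requests_errors_total[5m]) > 0.05  # placeholder expr
        for: 10m
        # Keep the alert firing for at least 5 minutes after the expression
        # stops returning results, to avoid flapping resolved notifications.
        keep_firing_for: 5m
```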
This feature was discussed at a previous dev summit, and it was
determined that a feature like this would be useful in order to allow
the expression time to stabilize and prevent confusing resolved messages
from being propagated through Alertmanager.
This approach is simpler than having two PromQL queries, as was
sometimes discussed, and it should be easy to implement.
This commit does not include tests for the 'keep_firing_for' field. This
is intentional, as the purpose of this commit is to gather comments on
the proposed design of the 'keep_firing_for' field before implementing
tests. Once the design of the 'keep_firing_for' field has been finalized,
a follow-up commit will be submitted with tests.
See https://github.com/prometheus/prometheus/issues/11570
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Update docs example rules for default config
The Prometheus download includes a default config to scrape itself.
This self-scraping Prometheus doesn't expose any metric named
`http_inprogress_requests`, but it does expose one named
`prometheus_http_requests_total`.
Updating this example rule in the docs to one which works out of the box
with the default download would be a nice improvement.
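For illustration, a recording rule along these lines works against the
self-scraping default config (the rule name and aggregation are made up
here, not necessarily the exact wording used in the docs):
```
groups:
  - name: example
    rules:
      - record: job:prometheus_http_requests_total:rate5m   # hypothetical name
        expr: sum by (job) (rate(prometheus_http_requests_total[5m]))
```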
Signed-off-by: Sam Jewell <sam.jewell@grafana.com>
* Update syntax as per @LeviHarrison's review
Co-authored-by: Levi Harrison <levisamuelharrison@gmail.com>
Signed-off-by: Sam Jewell <2903904+samjewell@users.noreply.github.com>
Since the struct defines proxy_connect_header instead of proxy_connect_headers, all relevant occurrences of it were replaced with the correct configuration name as defined in the HTTPClientConfig struct.
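A minimal sketch of the corrected field in use, assuming the usual
placement of HTTP client config options inside a scrape config:
```
scrape_configs:
  - job_name: "proxied"                       # placeholder job name
    proxy_url: "http://proxy.example.com:3128"
    # Singular form, matching the HTTPClientConfig struct:
    proxy_connect_header:
      Proxy-Authorization:
        - "Basic dXNlcjpwYXNz"                # placeholder credentials
    static_configs:
      - targets: ["app.example.com:9100"]
```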
Signed-off-by: Robbe Haesendonck <googleit@inuits.eu>
* Add API endpoints for getting scrape pool names
This adds an api/v1/scrape_pools endpoint that returns the list of *names* of all the scrape pools configured.
Having it allows finding out which scrape pools are defined without having to list and parse all targets.
The second change adds scrapePool query parameter support to the api/v1/targets endpoint, which allows
filtering the returned targets to only those belonging to the given scrape pool.
Both changes make it possible to query data for a specific scrape pool, rather than getting all the targets for all possible scrape pools.
The problem with the api/v1/targets endpoint is that it returns a huge amount of data if you configure a lot of scrape pools.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Add a scrape pool selector on /targets page
The current targets page lists all possible targets. This works great if you only have a few scrape pools configured,
but for systems with a lot of scrape pools and targets this slows things down a lot.
Not only does the /targets page load very slowly in such a case (waiting for a huge API response), it also takes
a long time to render, due to the huge number of elements.
This change adds a dropdown selector so it's possible to view only the scrape pool of interest.
There's also a scrapePool query param that will open the selected pool automatically.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Add VM size label to azure service discovery (#11575)
Signed-off-by: davidifr <davidfr.mail@gmail.com>
* docs: Add link to best practices in "Defining Recording Rules" page
Signed-off-by: John Carlo Roberto <10111643+Irizwaririz@users.noreply.github.com>
* docs: Improve wording
Signed-off-by: John Carlo Roberto <10111643+Irizwaririz@users.noreply.github.com>
* Common client in EC2 and Lightsail
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Azure -> AWS
Signed-off-by: Levi Harrison <git@leviharrison.dev>
This PR updates the Serverset URL in the configuration.md documentation.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Illustrate the use of the named capturing group syntax available from https://github.com/google/re2/wiki/Syntax and its usage in the replacement field.
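A sketch of what such an example could look like in a relabel
configuration (the group and label names are made up for illustration):
```
relabel_configs:
  # Capture the host part of the address into a named group and
  # reference it in the replacement via ${...}.
  - source_labels: [__address__]
    regex: '(?P<host>[^:]+)(:\d+)?'
    replacement: '${host}'
    target_label: instance
```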
Signed-off-by: Guillaume Berche <guillaume.berche@orange.com>
Some frameworks issue HEAD requests to determine health.
This resolves prometheus/prometheus#11159
Signed-off-by: Nicolas Dumazet <nicdumz.commits@gmail.com>
Currently, it's not explicitly called out which permissions are needed
for service discovery of EC2 instances. It's not super hard to figure
out, but it did involve a bit of guesswork when I did it yesterday, and
I figure it's worth saving people that effort.
I've also seen examples around the internet where people are granting
much broader permissions than they need to, so maybe we can help avoid
that by explicitly saying what's needed.
Signed-off-by: Chris Sinjakli <chris@sinjakli.co.uk>
It's currently possible to use blackbox_exporter to probe MX records
themselves. However, it's not possible to do an end-to-end test, as is
possible with SRV records. This makes it possible to use MX records as a
source of hostnames in the same way as SRV records.
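A minimal dns_sd_configs sketch using MX records as the source of
hostnames (domain, port, and job name are illustrative):
```
scrape_configs:
  - job_name: "mail-servers"     # placeholder job name
    dns_sd_configs:
      - names: ["example.com"]
        type: MX
        port: 9115               # e.g. a blackbox_exporter port
```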
Signed-off-by: David Leadbeater <dgl@dgl.cx>
* Add /api/v1/format_query API endpoint for formatting queries
This uses the formatting functionality introduced in
https://github.com/prometheus/prometheus/pull/10544.
I've chosen "query" instead of "expr" in both the endpoint and parameter
names to stay consistent with the existing API endpoints. Otherwise, I
would have preferred to use the term "expr".
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Add docs for /api/v1/format_query endpoint
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Add note that formatting expressions removes comments
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Signed-off-by: Jonathan Stevens <jonathanstevens89@gmail.com>
Signed-off-by: Jonathan Stevens <jon.stevens@getweave.com>
Co-authored-by: Jonathan Stevens <jon.stevens@getweave.com>
- For now this is relatively simplistic, but at least acknowledges some
of the basics, and points out some parts that might not be obvious at first.
Relates to #7192
Signed-off-by: Harold Dost <h.dost@criteo.com>
* add description for __meta_kubernetes_endpoints_label_* and __meta_kubernetes_endpoints_labelpresent_*
Signed-off-by: renzheng.wang <wangrzneu@gmail.com>
This follows a simple function-based approach to access the count and
sum fields of a native Histogram. It might be more elegant to
implement “accessors” via the dot operator, as considered in the
brainstorming doc [1]. However, that would require the introduction of
a whole new concept in PromQL. For the PoC, we should be fine with the
function-based approach. Even the obvious inefficiencies (rate'ing a
whole histogram twice when we only want to rate the count and the
sum once each) could be optimized behind the scenes.
Note that the function-based approach elegantly solves the problem of
detecting counter resets in the sum of observations in the case of
negative observations. (Since the whole native Histogram is rate'd,
the counter reset is detected for the Histogram as a whole.)
We will decide later if an “accessor” approach is really needed. It
would change the example expression for average duration in
functions.md from
histogram_sum(rate(http_request_duration_seconds[10m]))
/
histogram_count(rate(http_request_duration_seconds[10m]))
to
rate(http_request_duration_seconds.sum[10m])
/
rate(http_request_duration_seconds.count[10m])
[1]: https://docs.google.com/document/d/1ch6ru8GKg03N02jRjYriurt-CZqUVY09evPg6yKTA1s/edit
Signed-off-by: beorn7 <beorn@grafana.com>
The Kubernetes service discovery can only add node labels to
targets from the pod role.
This commit extends this functionality to the endpoints and
endpointslices roles.
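A minimal sketch, assuming the endpoints role uses the same
attach_metadata mechanism as the pod role:
```
scrape_configs:
  - job_name: "k8s-endpoints"    # placeholder job name
    kubernetes_sd_configs:
      - role: endpoints
        # Attach node metadata (e.g. node labels) to discovered targets,
        # now also supported for the endpoints and endpointslice roles.
        attach_metadata:
          node: true
```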
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Since Prometheus documentation is versioned, do not write down that a
specific function was added in Prom 2.0, for consistency.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Provide a callout about the vector matching keywords and the group
modifiers.
Relates prometheus/docs#2106
Signed-off-by: Harold Dost <h.dost@criteo.com>
We always track total samples queried and add those to the standard set
of stats queries can report.
We also allow optionally tracking per-step samples queried. This must be
enabled both at the engine and query level to be tracked and rendered.
The engine flag is exposed via a Prometheus feature flag, while the
query flag is set when stats=all.
Co-authored-by: Alan Protasio <approtas@amazon.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Update the wording of the `labelmap` relabel action to make it more
clear that it acts on all the label names, rather than the list provided
by source_labels.
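As a small illustration of the clarified behaviour, labelmap matches
against every label name of the target; source_labels plays no role here:
```
relabel_configs:
  # Matches all label names starting with __meta_kubernetes_pod_label_
  # and copies their values to labels without that prefix; no
  # source_labels are involved in selecting which labels are matched.
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
```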
Signed-off-by: SuperQ <superq@gmail.com>
* Fix documentation for Docker API filters
Signed-off-by: Robert Jacob <xperimental@solidproject.de>
* Undo indentation change
Signed-off-by: Robert Jacob <xperimental@solidproject.de>
* Added pathPrefix function to the template reference documentation
Signed-off-by: David N Perkins <David.N.Perkins@ibm.com>
* PR suggestions
Signed-off-by: David N Perkins <David.N.Perkins@ibm.com>
* Switch wording back
Signed-off-by: David N Perkins <David.N.Perkins@ibm.com>
* Followup on OpenTelemetry migration
- tracing_config: Change with_insecure to insecure, default to false.
- tracing_config: Call SetDirectory to make TLS certificates relative to the Prometheus
configuration
- documentation: Change bool to boolean in the configuration
- documentation: document type float
- tracing: Always restart the tracing manager when TLS config is set to
reload certificates
- tracing: Always set TLS config, which could be used e.g. in case of
potential redirects.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Allow escaping a dollar sign when expanding external labels
There is currently no mechanism to natively escape a dollar sign
in the os.Expand function. As a workaround, this commit modifies
the external label expansion logic to treat a double dollar ($$)
as a mechanism for escaping the dollar character.
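For example, with the expand-external-labels feature flag enabled, a
label value could then contain a literal dollar sign (values below are
placeholders):
```
global:
  external_labels:
    datacenter: "${DC}"          # expanded from the environment
    cost_center: "team-a$$123"   # becomes "team-a$123" after expansion
```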
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
Split out the note about regex-matching into a separate paragraph to
make it more obvious. Move it up, closer to the definition.
Signed-off-by: SuperQ <superq@gmail.com>
This follows the line of argument that the invariant of not looking
ahead of the query time was merely emergent behavior and not a
documented stable feature. Any query that looks ahead of the query
time was simply invalid before the introduction of the negative offset
and the @ modifier.
Signed-off-by: beorn7 <beorn@grafana.com>
Following the argument that breaking the invariant that PromQL does
not look ahead of the evaluation time would be a breaking change, we
still need to keep the feature flag around, but at least we can
communicate that the feature is considered stable and that the
feature flag will be ignored from v3 on.
Signed-off-by: beorn7 <beorn@grafana.com>
Since `/api/v1/write` is a mutating endpoint, we should still activate
the remote-write-receiver explicitly. But we should do it in the same
way as the other mutating endpoints, i.e. via a flag
`--web.enable-remote-write-receiver`.
This commit marks the feature flag as deprecated, i.e. it still works
but logs a warning on startup. This enables users to seamlessly
migrate. With the next minor release, we can start ignoring the
feature flag (but still warn a user that is trying to use it).
Signed-off-by: beorn7 <beorn@grafana.com>
This commit adds support for discovering targets from the same
Kubernetes namespace as the Prometheus pod itself. Own-namespace
discovery can be indicated by using "." as the namespace.
Fixes #9782
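A minimal sketch of what this looks like in a Kubernetes SD config,
per the description above:
```
scrape_configs:
  - job_name: "same-namespace-pods"   # placeholder job name
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            # "." selects the namespace the Prometheus pod itself runs in.
            - "."
```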
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
When using Kubernetes on cloud providers, nodes will have the
spec.providerID field populated to contain the cloud provider specific
name of the EC2/GCE/... instance.
Let's expose this information as an additional label, so that it's
easier to annotate metrics and alerts to contain the cloud provider
specific name of the instance to which it pertains.
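For illustration, the new label could then be copied onto targets in a
relabel config like the sketch below (the meta label name
__meta_kubernetes_node_provider_id is an assumption here):
```
kubernetes_sd_configs:
  - role: node
relabel_configs:
  # Copy the cloud provider's instance identifier onto the target.
  - source_labels: [__meta_kubernetes_node_provider_id]   # assumed label name
    target_label: provider_id
```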
Signed-off-by: Ed Schouten <eschouten@apple.com>
This can be useful when generating rules: a query may use a duration,
and it may be useful to template that into a URL parameter. Therefore
this allows interfacing with systems that don't implement
Prometheus-style duration parsing.
Signed-off-by: David Leadbeater <dgl@dgl.cx>
* remote-write: slow down retries to avoid DDOS
Increase the default max retry time from 100ms to 5 seconds.
Remote write calls are retried after a recoverable error such as the
back-end returning 500. Prometheus waits the minimum time and retries,
then doubles the wait on each subsequent retry until the maximum is
reached.
If some data is still getting through, remote-write will also increase
shards, and the default maximum is 200. 200 shards sending every 100ms
is 20 calls per second, to a back-end that is already in trouble.
5 seconds was chosen to match the default BatchSendDeadline: if we can
afford to wait that long for no response, then we can wait the same time
to retry. We will reach 5 seconds after 9 successive failures.
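The corresponding knobs live under queue_config in remote_write; a
sketch of setting the new default explicitly (URL is a placeholder):
```
remote_write:
  - url: "https://remote-storage.example.com/api/v1/write"   # placeholder URL
    queue_config:
      min_backoff: 30ms   # initial retry wait, doubled on each failure
      max_backoff: 5s     # new default upper bound described above
```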
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Update config doc for max_backoff change
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add a feature flag to enable the new manager
This PR creates a copy of the legacy manager and uses it by default.
It is a companion PR to #9349. With this PR, users can enable the new
discovery manager and provide us with feedback on any side effects that
the new behaviour might have.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
We have been Puppet users for 10 years and we are users of
https://github.com/camptocamp/prometheus-puppetdb-sd
However, that file_sd implementation contains business logic and
assumptions around e.g. the modules which you are using.
This pull request adds a simple PuppetDB service discovery, which will
enable more use cases than the upstream sd.
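A minimal puppetdb_sd_configs sketch (the URL and PQL query are
illustrative only):
```
scrape_configs:
  - job_name: "puppetdb-nodes"                # placeholder job name
    puppetdb_sd_configs:
      - url: "https://puppetdb.example.com"
        # PQL query selecting the resources to turn into targets.
        query: 'resources { type = "Package" and title = "httpd" }'
        port: 9100
        refresh_interval: 60s
```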
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This adds a new metric exposing the per-target scrape sample_limit value. The metric is only exposed if the extra-scrape-metrics feature flag is enabled.
scrape_sample_limit will make it easy to monitor and alert on targets getting close to their configured sample_limit, which is important given that exceeding the sample_limit results in the entire scrape being rejected.
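For example, one could alert on targets approaching their limit with a
rule along these lines (group name, alert name, and threshold are
illustrative; requires --enable-feature=extra-scrape-metrics):
```
groups:
  - name: scrape-limits                       # placeholder group name
    rules:
      - alert: ApproachingSampleLimit         # placeholder alert name
        # Fires when a target uses more than 80% of its configured limit;
        # the second condition skips targets without a sample_limit.
        expr: (scrape_samples_post_metric_relabeling / scrape_sample_limit > 0.8) and (scrape_sample_limit > 0)
        for: 15m
```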
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
Add a new built-in metric `scrape_timeout_seconds` to allow monitoring
of the ratio of scrape duration to the scrape timeout. Hide behind a
feature flag to avoid additional cardinality by default.
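For instance, the ratio mentioned above could drive an alert like this
(names and threshold are illustrative):
```
groups:
  - name: scrape-timeouts                     # placeholder group name
    rules:
      - alert: ScrapeNearingTimeout           # placeholder alert name
        # Fires when a scrape regularly takes more than 80% of its timeout;
        # requires the feature flag that exposes scrape_timeout_seconds.
        expr: scrape_duration_seconds / scrape_timeout_seconds > 0.8
        for: 15m
```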
Signed-off-by: SuperQ <superq@gmail.com>
* PromQL: Fix start and end keywords masking label and metric names
This commit fixes an issue with the "at modifier" that introduced two
new keywords: `start` and `end`. In grouping options and in metric
names, these keywords took precedence over metric or label names, so
that those metrics and labels could no longer be referenced.
Signed-off-by: Clayton Peters <clayton.peters@man.com>
* Add in additional tests for metrics and/or labels called start/end.
Signed-off-by: Clayton Peters <clayton.peters@man.com>
* *: Cut 2.29.0-rc.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* VERSION: bump to 2.29.0-rc.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Remove experimental wording on size-based retention
Followup of #9004
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix PR reference in changelog
Signed-off-by: George Brighton <george@gebn.co.uk>
* Describe EC2 availability zone IDs at most once per refresh (#9142)
Signed-off-by: George Brighton <george@gebn.co.uk>
* Describe EC2 availability zones at most once per SD load
Closes #9142.
Signed-off-by: George Brighton <george@gebn.co.uk>
* Incorporate feedback
Signed-off-by: George Brighton <george@gebn.co.uk>
* Integrate feedback
Signed-off-by: George Brighton <george@gebn.co.uk>
* Add a compatibility note for macOS users.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* *: Cut v2.29.0-rc.1
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Fix `kuma_sd` targetgroup reporting (#9157)
* Bundle all xDS targets into a single group
Signed-off-by: austin ce <austin.cawley@gmail.com>
* *: cut v2.29.0-rc.2
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Rename links
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* bump codemirror-promql to 0.17.0
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* *: cut v2.29.0
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* tsdb: align atomically accessed int64 (#9192)
This prevents a panic in 32-bit archs:
https://pkg.go.dev/sync/atomic#pkg-note-BUG
Fixed #9190
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Release 2.29.1 (#9193)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Co-authored-by: Clayton Peters <clayton.peters@man.com>
Co-authored-by: Frederic Branczyk <fbranczyk@gmail.com>
Co-authored-by: George Brighton <george@gebn.co.uk>
Co-authored-by: Austin Cawley-Edwards <austin.cawley@gmail.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Augustin Husson <husson.augustin@gmail.com>