Currently Prometheus always requests gzip compression from the target when sending scrape requests.
HTTP compression reduces the number of bytes sent over the wire and so is often desirable.
The downside of compression is that it requires extra resources: CPU and memory.
This also affects resource usage on the target, since it has to compress the response
before sending it to Prometheus.
This change adds a new option to the scrape job configuration block: enable_compression.
The default is true, so the behaviour remains the same as in current Prometheus.
Setting this option to false allows users to disable compression between Prometheus
and the scraped target, which requires more bandwidth but lowers the resource
usage of both Prometheus and the target.
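A minimal sketch of how this could look in a scrape job (job name and target address are illustrative):

```yaml
scrape_configs:
  - job_name: uncompressed-target
    # Ask the target for an uncompressed response; the default (true)
    # keeps today's gzip behaviour.
    enable_compression: false
    static_configs:
      - targets: ["app.example.com:9090"]
```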
Fixes #12319.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Added ability to specify scrape protocols to accept during HTTP content type negotiation.
This is done via a new option in GlobalConfig and ScrapeConfig: `scrape_protocol`.
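A hedged sketch of how this might look at the global level; the protocol identifiers shown are assumptions, not necessarily the accepted values:

```yaml
global:
  # Ordered preference of content types to negotiate with targets.
  # Option name as introduced here; the protocol identifiers are
  # illustrative and may differ from the accepted values.
  scrape_protocol: [OpenMetricsText1.0.0, PrometheusText0.0.4]
```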
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Fixed readability and log message.
Signed-off-by: bwplotka <bwplotka@gmail.com>
Addresses: #12536
This commit adds support for configuring sigv4 for an
`alertmanager_config`. It is based heavily on the sigv4 support in the
remote-write client.
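A hedged sketch of such a configuration, with field names assumed to mirror the remote-write sigv4 block:

```yaml
alerting:
  alertmanagers:
    - sigv4:
        # Region of the Alertmanager endpoint.
        region: us-east-1
        # Static credentials are optional; when omitted, the default
        # AWS credential chain would presumably be used instead.
        access_key: AKIAXXXXXXXXXXXXXXXX
        secret_key: <secret>
      static_configs:
        - targets: ["alertmanager.example.com:9093"]
```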
Signed-off-by: TJ Hoplock <t.hoplock@gmail.com>
It's possible (and quite common on Kubernetes) for service discovery to
return thousands of targets and then drop most of them in relabel rules.
The main place this data is used is the web UI, where
you don't want thousands of lines of output.
The new limit is `keep_dropped_targets`, which defaults to 0 (no limit)
for backwards compatibility.
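A minimal sketch of the new limit in a scrape config (the surrounding job is illustrative):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    # Keep at most 100 relabel-dropped targets in memory for the UI;
    # the default of 0 keeps all of them, as before.
    keep_dropped_targets: 100
```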
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Adds a new label containing the ID of the image that an instance is
using. This can be used, for example, to restrict a job to instances
using a certain image, such as one that bundles a particular exporter.
Sometimes the image information isn't available, such as when the image
is private and the user doesn't have the roles required to see it. In
those cases we simply don't set the label, as the rest of the information
from the discovery provider can still be used.
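As a sketch, a job could be filtered on such a label roughly like this; the meta label name below is a placeholder, not the actual label added here:

```yaml
relabel_configs:
  # Keep only instances built from the image that ships our exporter.
  # __meta_example_image_id is a placeholder for the image-ID label
  # added by this change.
  - source_labels: [__meta_example_image_id]
    regex: img-0123456789abcdef0
    action: keep
```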
Signed-off-by: Taavi Väänänen <hi@taavi.wtf>
So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram-enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both versions will be ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.
This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.
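A minimal sketch of the option in a scrape config (job and target are illustrative):

```yaml
scrape_configs:
  - job_name: histogram-migration
    # Ingest the classic series (foo_sum, foo_count, foo_bucket)
    # alongside the native histogram foo during the transition.
    scrape_classic_histograms: true
    static_configs:
      - targets: ["app.example.com:9090"]
```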
Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:
1. Series created from classic _gauge_ histograms didn't get the
_sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.
Signed-off-by: beorn7 <beorn@grafana.com>
Since the struct defines proxy_connect_header instead of proxy_connect_headers, all relevant occurrences of it were replaced with the correct configuration name as defined in the HTTPClientConfig struct.
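For reference, a hedged sketch of the corrected option in a scrape config; header name and value are illustrative:

```yaml
scrape_configs:
  - job_name: behind-proxy
    proxy_url: http://proxy.example.com:3128
    # Singular form, matching the field in HTTPClientConfig.
    proxy_connect_header:
      Proxy-Authorization:
        - Basic c2NyYXBlOnNlY3JldA==
    static_configs:
      - targets: ["app.example.com:9090"]
```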
Signed-off-by: Robbe Haesendonck <googleit@inuits.eu>
* Add VM size label to azure service discovery (#11575)
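As a sketch, the new label could be used to filter targets roughly like this, assuming the label is named `__meta_azure_machine_size`:

```yaml
scrape_configs:
  - job_name: azure-large-vms
    azure_sd_configs:
      - subscription_id: 00000000-0000-0000-0000-000000000000
    relabel_configs:
      # Keep only VMs of one size; the label name is an assumption.
      - source_labels: [__meta_azure_machine_size]
        regex: Standard_D4s_v3
        action: keep
```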
Signed-off-by: davidifr <davidfr.mail@gmail.com>
* Common client in EC2 and Lightsail
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Azure -> AWS
Signed-off-by: Levi Harrison <git@leviharrison.dev>
This PR updates the Serverset URL in the configuration.md documentation.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Currently, it's not explicitly called out which permissions are needed
for service discovery of EC2 instances. It's not super hard to figure
out, but it did involve a bit of guesswork when I did it yesterday, and
I figure it's worth saving people that effort.
I've also seen examples around the internet where people grant
much broader permissions than they need, so maybe we can avoid that
by explicitly saying what's needed.
Signed-off-by: Chris Sinjakli <chris@sinjakli.co.uk>
It's currently possible to use blackbox_exporter to probe MX records
themselves. However, it's not possible to do an end-to-end test the way
it is with SRV records. This change makes it possible to use MX records
as a source of hostnames in the same way as SRV records.
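A hedged sketch of how MX records might be used in DNS service discovery, assuming the existing `type` field accepts `MX` the same way it accepts `SRV`:

```yaml
scrape_configs:
  - job_name: mail-servers
    dns_sd_configs:
      - names: ["example.com"]
        type: MX
        # MX records carry no port, so one has to be given explicitly,
        # as for A/AAAA lookups.
        port: 9115
```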
Signed-off-by: David Leadbeater <dgl@dgl.cx>
* Add descriptions for __meta_kubernetes_endpoints_label_* and __meta_kubernetes_endpoints_labelpresent_*
Signed-off-by: renzheng.wang <wangrzneu@gmail.com>
Kubernetes service discovery could previously add node labels only to
targets from the pod role.
This commit extends that functionality to the endpoints and
endpointslice roles.
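A hedged sketch, assuming this reuses the existing `attach_metadata` block:

```yaml
scrape_configs:
  - job_name: kubernetes-endpoints
    kubernetes_sd_configs:
      - role: endpoints
        # Attach node metadata (node labels) to endpoints targets,
        # as was previously possible only for the pod role.
        attach_metadata:
          node: true
```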
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Update the wording of the `labelmap` relabel action to make it clearer
that it acts on all of the label names, rather than on the list provided
by source_labels.
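For illustration, a typical `labelmap` rule; it matches against every label name on the target rather than against source_labels:

```yaml
relabel_configs:
  # Copies every __meta_kubernetes_pod_label_<name> label on the target
  # to <name>, regardless of source_labels.
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
```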
Signed-off-by: SuperQ <superq@gmail.com>
* Fix documentation for Docker API filters
Signed-off-by: Robert Jacob <xperimental@solidproject.de>
* Undo indentation change
Signed-off-by: Robert Jacob <xperimental@solidproject.de>
* Followup on OpenTelemetry migration
- tracing_config: Change with_insecure to insecure, defaulting to false.
- tracing_config: Call SetDirectory to make TLS certificate paths relative to the Prometheus
configuration file.
- documentation: Change bool to boolean in the configuration.
- documentation: Document the float type.
- tracing: Always restart the tracing manager when a TLS config is set, so that
certificates are reloaded.
- tracing: Always set the TLS config, which could be needed e.g. in case of
potential redirects.
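A hedged sketch of the resulting tracing block; apart from `insecure` and `tls_config`, the field names and values are assumptions:

```yaml
tracing:
  endpoint: "tempo.example.com:4317"
  # Plain-text connections now have to be opted into explicitly.
  insecure: false
  tls_config:
    # Certificate paths are resolved relative to the Prometheus
    # configuration file.
    ca_file: ca.pem
    cert_file: client.pem
    key_file: client-key.pem
```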
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This commit adds support for discovering targets from the same
Kubernetes namespace as the Prometheus pod itself. Own-namespace
discovery can be indicated by using "." as the namespace.
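A minimal sketch following the "." convention described above:

```yaml
scrape_configs:
  - job_name: same-namespace-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            # "." selects the namespace the Prometheus pod runs in.
            - "."
```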
Fixes #9782
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
When using Kubernetes on cloud providers, nodes have the
spec.providerID field populated with the cloud-provider-specific
name of the EC2/GCE/... instance.
Let's expose this information as an additional label, so that it's
easier to annotate metrics and alerts with the cloud-provider-specific
name of the instance to which they pertain.
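As a sketch, the new meta label could be copied onto targets via relabelling; the label name used here is an assumption:

```yaml
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Surface the cloud provider's instance name on ingested series;
      # the meta label name is an assumption.
      - source_labels: [__meta_kubernetes_node_provider_id]
        target_label: provider_id
```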
Signed-off-by: Ed Schouten <eschouten@apple.com>
* remote-write: slow down retries to avoid DDoS
Increase the default max retry time from 100ms to 5 seconds.
Remote write calls are retried after a recoverable error such as the
back-end returning 500. Prometheus waits the minimum time and retries,
then doubles the wait on each subsequent retry until the maximum is
reached.
If some data is still getting through, remote-write will also increase
shards, and the default maximum is 200. 200 shards retrying every 100ms
is 2,000 calls per second, to a back-end that is already in trouble.
5 seconds was chosen to match the default BatchSendDeadline: if we can
afford to wait that long for no response, then we can wait the same time
to retry. We will reach 5 seconds after 9 successive failures.
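For reference, a hedged sketch of the corresponding queue settings; the `min_backoff` value shown is the assumed default:

```yaml
remote_write:
  - url: https://remote.example.com/api/v1/write
    queue_config:
      # Initial retry delay; doubled after every failed attempt.
      min_backoff: 30ms
      # New default ceiling for the retry delay (previously 100ms).
      max_backoff: 5s
```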
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Update config doc for max_backoff change
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>