This checks whether a gauge histogram can be appended to the given chunk.
If not, it tells what changes to make to the chunk and the histogram,
if appending is possible at all.
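A heavily simplified sketch of the shape of such a check; the type and the bucket-count comparison below are made up for illustration and are not the actual chunk code, which compares full bucket layouts:

    package main

    import "fmt"

    // layout is a made-up stand-in for a chunk's or histogram's bucket layout.
    type layout struct{ bucketCount int }

    // appendableGauge reports what has to change before appending: if the
    // histogram has more buckets than the chunk, the chunk must be recoded;
    // if it has fewer, the histogram must be expanded with empty buckets.
    func appendableGauge(chunk, hist layout) (recodeChunk, expandHist bool) {
    	switch {
    	case hist.bucketCount > chunk.bucketCount:
    		return true, false
    	case hist.bucketCount < chunk.bucketCount:
    		return false, true
    	default:
    		return false, false
    	}
    }

    func main() {
    	fmt.Println(appendableGauge(layout{4}, layout{6})) // true false: recode the chunk
    	fmt.Println(appendableGauge(layout{6}, layout{4})) // false true: expand the histogram
    }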
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Update docs example rules for default config
The Prometheus download includes a default config to scrape itself.
This self-scraping Prometheus doesn't expose any metric named
`http_inprogress_requests`, but does expose one named
`prometheus_http_requests_total`.
Updating this example rule in the docs to one which can be used
out-of-the-box with the default download would be a nice improvement.
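For reference, the out-of-the-box metric can back a recording rule along these lines; the rule name and the by() label here are illustrative and may not match the docs verbatim:

    groups:
      - name: example
        rules:
          - record: code:prometheus_http_requests_total:sum
            expr: sum by (code) (prometheus_http_requests_total)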
Signed-off-by: Sam Jewell <sam.jewell@grafana.com>
* Update syntax as per @LeviHarrison's review
Co-authored-by: Levi Harrison <levisamuelharrison@gmail.com>
Signed-off-by: Sam Jewell <2903904+samjewell@users.noreply.github.com>
Since the struct defines proxy_connect_header instead of proxy_connect_headers, all relevant occurrences of it were replaced with the correct configuration name as defined in the HTTPClientConfig struct.
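For reference, a config fragment using the correct (singular) name; the proxy URL and header value are placeholders:

    scrape_configs:
      - job_name: "example"
        proxy_url: "http://proxy.example.com:3128"
        proxy_connect_header:
          X-Proxy-Token: ["secret-token"]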
Signed-off-by: Robbe Haesendonck <googleit@inuits.eu>
In the head and in v1 postings on disk, it makes no difference whether
postings are sorted. Only for v2 does the code step through in order.
So, move the sorting to where it is required, and thus skip it entirely
in the head.
Label values in on-disk blocks are already sorted, but `slices.Sort` is
very fast on already-sorted data so we don't bother checking.
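Illustrative sketch of the idea, not the actual index-writer code; the helper below is made up:

    package main

    import (
    	"fmt"
    	"slices"
    )

    // labelValuesFromHead is a stand-in for fetching label values from the
    // head, where they are no longer guaranteed to be sorted.
    func labelValuesFromHead(name string) []string {
    	return []string{"prometheus", "node", "alertmanager"}
    }

    func main() {
    	values := labelValuesFromHead("job")
    	// Sort only here, where the v2 on-disk format needs ordered iteration.
    	// slices.Sort is cheap on already-sorted input, so there is no pre-check.
    	slices.Sort(values)
    	fmt.Println(values)
    }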
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This allocates memory for all the returned values, which skews the
result. We aren't trying to benchmark `ExpandPostings`, so just consume
the values by stepping through them without storing them.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Previously all the postings constructed were consumed on the first
iteration, so subsequent iterations did no work.
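A rough sketch of the resulting benchmark shape for this and the previous fix (goes in a *_test.go file; the data and benchmark name are made up, this is not the actual benchmark from the repo):

    package index_test

    import (
    	"testing"

    	"github.com/prometheus/prometheus/storage"
    	"github.com/prometheus/prometheus/tsdb/index"
    )

    func BenchmarkConsumePostings(b *testing.B) {
    	refs := make([]storage.SeriesRef, 100000)
    	for i := range refs {
    		refs[i] = storage.SeriesRef(i)
    	}
    	b.ResetTimer()
    	for i := 0; i < b.N; i++ {
    		// Rebuild the postings inside the loop so every iteration does work.
    		p := index.NewListPostings(refs)
    		// Consume by stepping through, without storing the values, so the
    		// allocations of ExpandPostings don't skew the result.
    		for p.Next() {
    			_ = p.At()
    		}
    		if err := p.Err(); err != nil {
    			b.Fatal(err)
    		}
    	}
    }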
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In some cases, the Prometheus HTTP format parser was not returning the
right token in the error output, which made debugging impossible.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. TSDB now supports appending, querying, and WAL replay for this new type of histogram.
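A sketch of how the extended call is used; the helper name is mine and the exact signature is my reading of the change (the integer and float histogram arguments are mutually exclusive, pass nil for the unused one):

    package sketch

    import (
    	"context"

    	"github.com/prometheus/prometheus/model/histogram"
    	"github.com/prometheus/prometheus/model/labels"
    	"github.com/prometheus/prometheus/tsdb"
    )

    // appendFloatHistogram appends a float histogram sample via the extended
    // Appender.AppendHistogram: nil *histogram.Histogram, non-nil *FloatHistogram.
    func appendFloatHistogram(db *tsdb.DB, lbls labels.Labels, ts int64, fh *histogram.FloatHistogram) error {
    	app := db.Appender(context.Background())
    	if _, err := app.AppendHistogram(0, lbls, ts, nil, fh); err != nil {
    		_ = app.Rollback()
    		return err
    	}
    	return app.Commit()
    }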
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
We have a LabelBuilder in EvalNodeHelper; use it instead of creating a new one at every step.
We need to take some care that different uses of enh.lb do not overlap.
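A sketch of the reuse pattern, not the actual evaluator code; evalNodeHelper and resetBuilder below stand in for the real promql helpers:

    package sketch

    import "github.com/prometheus/prometheus/model/labels"

    // evalNodeHelper stands in for promql's EvalNodeHelper.
    type evalNodeHelper struct {
    	lb *labels.Builder
    }

    // resetBuilder hands out the shared builder, reset to the given base
    // labels, instead of allocating a new labels.Builder for every series at
    // every step. Callers must not use it in two overlapping places at once.
    func (enh *evalNodeHelper) resetBuilder(m labels.Labels) *labels.Builder {
    	if enh.lb == nil {
    		enh.lb = labels.NewBuilder(m)
    	} else {
    		enh.lb.Reset(m)
    	}
    	return enh.lb
    }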
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add API endpoints for getting scrape pool names
This adds an api/v1/scrape_pools endpoint that returns the list of *names* of all configured scrape pools.
Having it makes it possible to find out which scrape pools are defined without having to list and parse all targets.
The second change adds support for a scrapePool query parameter on the api/v1/targets endpoint, which filters
the returned targets to only those belonging to the given scrape pool.
Both changes make it possible to query data for a specific scrape pool, rather than getting all the targets for all possible scrape pools.
The problem with the api/v1/targets endpoint is that it returns a huge amount of data if you configure a lot of scrape pools.
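Client-side illustration of the two additions; the address and the pool name are placeholders:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func get(url string) string {
    	resp, err := http.Get(url)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return string(body)
    }

    func main() {
    	// List only the names of the configured scrape pools.
    	fmt.Println(get("http://localhost:9090/api/v1/scrape_pools"))
    	// Fetch targets for a single pool instead of all of them.
    	fmt.Println(get("http://localhost:9090/api/v1/targets?scrapePool=prometheus"))
    }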
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Add a scrape pool selector on /targets page
The current targets page lists all possible targets. This works great if you only have a few scrape pools configured,
but for systems with a lot of scrape pools and targets it slows things down a lot.
Not only does the /targets page load very slowly in such a case (waiting for a huge API response), but it also takes
a long time to render, due to the huge number of elements.
This change adds a dropdown selector so it's possible to view only the scrape pool of interest.
There's also a scrapePool query param that will open the selected pool automatically.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
While the resync period originally also forced a refresh from the Kubernetes API server, that behavior was removed some years ago because watching the API server became more stable [1]. Today, the resync just results in all entities being sent to the service discovery again, which is valid from a general Prometheus perspective, but causes unnecessary CPU load and also breaks service discovery metrics. In particular, it makes it impossible to monitor whether we actually observe changes from the Kubernetes API server (receiving constant updates from Kubernetes service discovery is a reasonable expectation; nodes, for example, get frequent status updates).
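For context, a generic client-go illustration (not the Prometheus discovery code itself) of what the resync period controls; passing 0 disables periodic resyncs so handlers only fire on real changes:

    package main

    import (
    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// A non-zero resync period re-delivers every cached object periodically
    	// even when nothing changed; 0 disables that.
    	factory := informers.NewSharedInformerFactory(client, 0)
    	_ = factory.Core().V1().Pods().Informer()
    }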
Signed-off-by: Jens Erat <jens.erat@mercedes-benz.com>