If a (float or integer) histogram is a gauge histogram, set the
CounterResetHint accordingly. (The default value is fine for normal
counter histograms.)
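A minimal sketch of the idea, assuming the model/histogram types
(`isGaugeHistogram` is a hypothetical flag derived from the parsed
metric type):

```go
import "github.com/prometheus/prometheus/model/histogram"

h := &histogram.Histogram{ /* populated from the scraped data */ }
if isGaugeHistogram {
	// Gauge histograms never reset, so mark them explicitly.
	h.CounterResetHint = histogram.GaugeType
}
// Counter histograms keep the zero value, histogram.UnknownCounterReset.
```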
Signed-off-by: beorn7 <beorn@grafana.com>
This checks whether a gauge histogram can be appended to the given
chunk. If not, it reports what changes to make to the chunk and the
histogram, where possible.
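A rough sketch of the call pattern with an assumed signature; see the
histogram appender in tsdb/chunkenc for the real definition:

```go
// The blanked return values describe the layout changes to apply to
// the chunk and to the histogram; ok reports whether appending is
// possible at all. Signature assumed for illustration.
_, _, _, _, _, _, ok := app.AppendableGauge(h)
if !ok {
	// No recoding possible; cut a new chunk for this histogram.
}
```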
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Update docs example rules for default config
The Prometheus download includes a default config to scrape itself.
This self-scraping Prometheus doesn't expose any metric named
`http_inprogress_requests`, but does expose one named
`prometheus_http_requests_total`.
Updating this example rule in the docs to one which can be used
out-of-the-box with the default download would be a nice improvement.
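For illustration, a recording rule of this shape works against the
default self-scrape (the exact rule in the docs may differ slightly):

```yaml
groups:
  - name: example
    rules:
      - record: code:prometheus_http_requests_total:sum
        expr: sum by (code) (prometheus_http_requests_total)
```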
Signed-off-by: Sam Jewell <sam.jewell@grafana.com>
* Update syntax as per @LeviHarrison's review
Co-authored-by: Levi Harrison <levisamuelharrison@gmail.com>
Signed-off-by: Sam Jewell <2903904+samjewell@users.noreply.github.com>
Since the struct defines `proxy_connect_header` rather than `proxy_connect_headers`, all relevant occurrences were replaced with the correct configuration name as defined in the `HTTPClientConfig` struct.
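For illustration, the corrected field name as it appears in a config
(values hypothetical):

```yaml
proxy_url: http://proxy.example.com:3128
proxy_connect_header:
  X-Proxy-Token: ["secret"]
```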
Signed-off-by: Robbe Haesendonck <googleit@inuits.eu>
With this commit, the parser no longer treats a gauge histogram
(whether native or conventional) as an unexpected metric type. It
ingests it normally, and it even sets the `GaugeHistogram` type in the
metadata (as it has already done for a conventional gauge histogram
scraped using OpenMetrics), but it otherwise treats it as a normal
counter-like histogram.
Once #11783 is merged, though, it should be very easy to utilize the
type information.
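Sketched in terms of the client_model protobuf types (the real parser
code differs in detail):

```go
import dto "github.com/prometheus/client_model/go"

// Both flavors are ingested the same way; only the metadata type differs.
switch mf.GetType() {
case dto.MetricType_HISTOGRAM:
	// Metadata type "histogram".
case dto.MetricType_GAUGE_HISTOGRAM:
	// Metadata type "gaugehistogram"; otherwise handled like a
	// counter-like histogram until the type information is used.
}
```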
Signed-off-by: beorn7 <beorn@grafana.com>
So far, the parser hasn't validated the metric type in the `Next()`
call. Later, in the `Series()` call, however, it assumes that only
valid types are seen and therefore panics with `encountered unexpected
metric type, this is a bug`.
This commit fixes said bug by adding validation to the `Next()` call.
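A hedged sketch of the added check, in terms of the client_model types
(the real code in the protobuf parser's `Next()` differs in detail):

```go
switch mf.GetType() {
case dto.MetricType_COUNTER, dto.MetricType_GAUGE, dto.MetricType_SUMMARY,
	dto.MetricType_UNTYPED, dto.MetricType_HISTOGRAM, dto.MetricType_GAUGE_HISTOGRAM:
	// Known type; continue parsing.
default:
	return EntryInvalid, fmt.Errorf("unknown metric type for metric %q: %s",
		mf.GetName(), mf.GetType())
}
```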
Signed-off-by: beorn7 <beorn@grafana.com>
In the head and in v1 postings on disk, it makes no difference whether
postings are sorted. Only for v2 does the code step through in order.
So, move the sorting to where it is required, and thus skip it entirely
in the head.
Label values in on-disk blocks are already sorted, but `slices.Sort` is
very fast on already-sorted data, so we don't bother checking.
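A minimal sketch of the pattern, with hypothetical variable names:

```go
// Collect label values from the head unsorted; sort only at the point
// where the v2 index writer steps through them in order.
values := make([]string, 0, len(valueSet))
for v := range valueSet {
	values = append(values, v)
}
slices.Sort(values) // near-free when the input is already sorted
```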
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This allocates memory for all the returned values, which skews the
result. We aren't trying to benchmark `ExpandPostings`, so just step
through all the values to consume them, without storing them.
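The consuming loop, using the index.Postings iterator interface:

```go
for p.Next() {
	_ = p.At() // touch the value, store nothing
}
if err := p.Err(); err != nil {
	b.Fatal(err)
}
```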
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Previously all the postings constructed were consumed on the first
iteration, so subsequent iterations did no work.
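A sketch of the fix, with a hypothetical constructor for the benchmark
input:

```go
for i := 0; i < b.N; i++ {
	p := buildPostings() // hypothetical: rebuild, since a full pass consumes p
	for p.Next() {
	}
}
```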
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In some cases, the Prometheus HTTP format parser was not returning the
right token in the error output, which made debugging impossible.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. TSDB now supports appending, querying, and WAL replay for this new type of histogram.
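The extended method looks roughly like this (callers pass one of `h`
and `fh` and leave the other nil):

```go
// storage.Appender (excerpt):
AppendHistogram(ref storage.SeriesRef, l labels.Labels, t int64,
	h *histogram.Histogram, fh *histogram.FloatHistogram) (storage.SeriesRef, error)
```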
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
We have a LabelBuilder in EvalNodeHelper; use it instead of creating a new one at every step.
Some care is needed so that different uses of `enh.lb` do not overlap.
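The reuse pattern, sketched (assuming `enh.lb` is a `*labels.Builder`):

```go
if enh.lb == nil {
	enh.lb = labels.NewBuilder(lbls)
} else {
	enh.lb.Reset(lbls) // reuse the builder's memory for the new base labels
}
```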
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add API endpoints for getting scrape pool names
This adds an api/v1/scrape_pools endpoint that returns the list of *names* of all the scrape pools configured.
Having it makes it possible to find out what scrape pools are defined without having to list and parse all targets.
The second change adds scrapePool query parameter support to the api/v1/targets endpoint, which allows
filtering the returned targets to only those belonging to the given scrape pool.
Both changes make it possible to query the data for a specific scrape pool, rather than getting all the targets for all possible scrape pools.
The problem with the api/v1/targets endpoint is that it returns a huge amount of data if you configure a lot of scrape pools.
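Illustrative requests (pool names are examples; the response shape
follows the API docs):

```console
$ curl http://localhost:9090/api/v1/scrape_pools
{"status":"success","data":{"scrapePools":["node","prometheus"]}}

$ curl 'http://localhost:9090/api/v1/targets?scrapePool=node'
```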
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Add a scrape pool selector on /targets page
The current targets page lists all possible targets. This works great if you only have a few scrape pools configured,
but for systems with a lot of scrape pools and targets it slows things down a lot.
Not only does the /targets page load very slowly in such cases (waiting for a huge API response), but it also takes
a long time to render, due to the huge number of elements.
This change adds a dropdown selector so it's possible to select only the scrape pool of interest.
There's also a scrapePool query param that will open the selected pool automatically.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>