Update “conventional histogram” → “classic histogram”

Signed-off-by: beorn7 <beorn@grafana.com>
beorn7 2023-11-29 15:22:58 +01:00
parent f216ddadbc
commit 0eb0ca42c5
6 changed files with 30 additions and 31 deletions


@@ -119,20 +119,19 @@ also experimental) protobuf parser, through which _all_ metrics are ingested
 (i.e. not only native histograms). Prometheus will try to negotiate the
 protobuf format first. The instrumented target needs to support the protobuf
 format, too, _and_ it needs to expose native histograms. The protobuf format
-allows to expose conventional and native histograms side by side. With this
-feature flag disabled, Prometheus will continue to parse the conventional
-histogram (albeit via the text format). With this flag enabled, Prometheus will
-still ingest those conventional histograms that do not come with a
-corresponding native histogram. However, if a native histogram is present,
-Prometheus will ignore the corresponding conventional histogram, with the
-notable exception of exemplars, which are always ingested. To keep the
-conventional histograms as well, enable `scrape_classic_histograms` in the
-scrape job.
+allows to expose classic and native histograms side by side. With this feature
+flag disabled, Prometheus will continue to parse the classic histogram (albeit
+via the text format). With this flag enabled, Prometheus will still ingest
+those classic histograms that do not come with a corresponding native
+histogram. However, if a native histogram is present, Prometheus will ignore
+the corresponding classic histogram, with the notable exception of exemplars,
+which are always ingested. To keep the classic histograms as well, enable
+`scrape_classic_histograms` in the scrape job.
 
 _Note about the format of `le` and `quantile` label values:_
 
 In certain situations, the protobuf parsing changes the number formatting of
-the `le` labels of conventional histograms and the `quantile` labels of
+the `le` labels of classic histograms and the `quantile` labels of
 summaries. Typically, this happens if the scraped target is instrumented with
 [client_golang](https://github.com/prometheus/client_golang) provided that
 [promhttp.HandlerOpts.EnableOpenMetrics](https://pkg.go.dev/github.com/prometheus/client_golang/prometheus/promhttp#HandlerOpts)
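As an illustration of "expose classic and native histograms side by side" and of the `promhttp.HandlerOpts.EnableOpenMetrics` reference above: with client_golang, one histogram can carry both representations when it is configured with explicit buckets as well as a native-histogram bucket factor. The following sketch is not part of this commit; the metric name, bucket factor, and port are assumptions for the example.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()

	// Configuring both explicit Buckets and a NativeHistogramBucketFactor makes
	// client_golang maintain a classic histogram and a native histogram for the
	// same metric; both are exposed when the protobuf format is negotiated.
	reqDur := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "HTTP request latencies.",
		Buckets: prometheus.DefBuckets, // classic buckets
		// Setting a bucket factor additionally enables the native histogram.
		NativeHistogramBucketFactor: 1.1,
	})
	reg.MustRegister(reqDur)

	reqDur.Observe(0.042) // example observation

	// EnableOpenMetrics is the option referenced above in connection with the
	// number formatting of `le` and `quantile` label values.
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{
		EnableOpenMetrics: true,
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Scraped with the feature flag discussed above, such a target would serve the `_bucket` series of the classic histogram alongside the native histogram for the same metric name.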


@@ -238,23 +238,23 @@ boundaries are inclusive or exclusive.
 ## `histogram_quantile()`
 
 `histogram_quantile(φ scalar, b instant-vector)` calculates the φ-quantile (0 ≤
-φ ≤ 1) from a [conventional
+φ ≤ 1) from a [classic
 histogram](https://prometheus.io/docs/concepts/metric_types/#histogram) or from
 a native histogram. (See [histograms and
 summaries](https://prometheus.io/docs/practices/histograms) for a detailed
-explanation of φ-quantiles and the usage of the (conventional) histogram metric
+explanation of φ-quantiles and the usage of the (classic) histogram metric
 type in general.)
 
 _Note that native histograms are an experimental feature. The behavior of this
 function when dealing with native histograms may change in future versions of
 Prometheus._
 
-The conventional float samples in `b` are considered the counts of observations
-in each bucket of one or more conventional histograms. Each float sample must
-have a label `le` where the label value denotes the inclusive upper bound of
-the bucket. (Float samples without such a label are silently ignored.) The
-other labels and the metric name are used to identify the buckets belonging to
-each conventional histogram. The [histogram metric
+The float samples in `b` are considered the counts of observations in each
+bucket of one or more classic histograms. Each float sample must have a label
+`le` where the label value denotes the inclusive upper bound of the bucket.
+(Float samples without such a label are silently ignored.) The other labels and
+the metric name are used to identify the buckets belonging to each classic
+histogram. The [histogram metric
 type](https://prometheus.io/docs/concepts/metric_types/#histogram)
 automatically provides time series with the `_bucket` suffix and the
 appropriate labels.
@@ -262,17 +262,17 @@ appropriate labels.
 The native histogram samples in `b` are treated each individually as a separate
 histogram to calculate the quantile from.
 
-As long as no naming collisions arise, `b` may contain a mix of conventional
+As long as no naming collisions arise, `b` may contain a mix of classic
 and native histograms.
 
 Use the `rate()` function to specify the time window for the quantile
 calculation.
 
 Example: A histogram metric is called `http_request_duration_seconds` (and
-therefore the metric name for the buckets of a conventional histogram is
+therefore the metric name for the buckets of a classic histogram is
 `http_request_duration_seconds_bucket`). To calculate the 90th percentile of request
 durations over the last 10m, use the following expression in case
-`http_request_duration_seconds` is a conventional histogram:
+`http_request_duration_seconds` is a classic histogram:
 
     histogram_quantile(0.9, rate(http_request_duration_seconds_bucket[10m]))
@@ -283,9 +283,9 @@ For a native histogram, use the following expression instead:
 The quantile is calculated for each label combination in
 `http_request_duration_seconds`. To aggregate, use the `sum()` aggregator
 around the `rate()` function. Since the `le` label is required by
-`histogram_quantile()` to deal with conventional histograms, it has to be
+`histogram_quantile()` to deal with classic histograms, it has to be
 included in the `by` clause. The following expression aggregates the 90th
-percentile by `job` for conventional histograms:
+percentile by `job` for classic histograms:
 
     histogram_quantile(0.9, sum by (job, le) (rate(http_request_duration_seconds_bucket[10m])))
@@ -293,7 +293,7 @@ When aggregating native histograms, the expression simplifies to:
 
     histogram_quantile(0.9, sum by (job) (rate(http_request_duration_seconds[10m])))
 
-To aggregate all conventional histograms, specify only the `le` label:
+To aggregate all classic histograms, specify only the `le` label:
 
     histogram_quantile(0.9, sum by (le) (rate(http_request_duration_seconds_bucket[10m])))
@@ -307,7 +307,7 @@ assuming a linear distribution within a bucket.
 If `b` has 0 observations, `NaN` is returned. For φ < 0, `-Inf` is
 returned. For φ > 1, `+Inf` is returned. For φ = `NaN`, `NaN` is returned.
 
-The following is only relevant for conventional histograms: If `b` contains
+The following is only relevant for classic histograms: If `b` contains
 fewer than two buckets, `NaN` is returned. The highest bucket must have an
 upper bound of `+Inf`. (Otherwise, `NaN` is returned.) If a quantile is located
 in the highest bucket, the upper bound of the second highest bucket is
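The rules in the hunk above, together with the linear interpolation mentioned in its header line, can be summarized in a few lines of code. This is a simplified sketch, not the actual `promql` implementation (which handles further corner cases); the `bucket` type, `bucketQuantile` helper, and sample values are illustrative only.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// bucket mirrors one `_bucket` series of a classic histogram: the cumulative
// count of observations with a value ≤ upperBound (the `le` label value).
type bucket struct {
	upperBound float64
	count      float64
}

// bucketQuantile estimates the φ-quantile from classic-histogram buckets,
// following the edge-case rules described above and interpolating linearly
// within the bucket that contains the requested rank.
func bucketQuantile(phi float64, buckets []bucket) float64 {
	switch {
	case math.IsNaN(phi):
		return math.NaN()
	case phi < 0:
		return math.Inf(-1)
	case phi > 1:
		return math.Inf(+1)
	}
	if len(buckets) < 2 {
		return math.NaN() // fewer than two buckets
	}
	sort.Slice(buckets, func(i, j int) bool { return buckets[i].upperBound < buckets[j].upperBound })
	if !math.IsInf(buckets[len(buckets)-1].upperBound, +1) {
		return math.NaN() // highest bucket must have an upper bound of +Inf
	}
	total := buckets[len(buckets)-1].count
	if total == 0 {
		return math.NaN() // no observations
	}
	rank := phi * total
	// First bucket (excluding the +Inf bucket) whose cumulative count reaches the rank.
	b := sort.Search(len(buckets)-1, func(i int) bool { return buckets[i].count >= rank })
	if b == len(buckets)-1 {
		// Quantile lies in the +Inf bucket: return the upper bound of the
		// second highest bucket.
		return buckets[len(buckets)-2].upperBound
	}
	lowerBound, lowerCount := 0.0, 0.0
	if b > 0 {
		lowerBound = buckets[b-1].upperBound
		lowerCount = buckets[b-1].count
	}
	// Linear interpolation within the bucket.
	return lowerBound + (buckets[b].upperBound-lowerBound)*
		(rank-lowerCount)/(buckets[b].count-lowerCount)
}

func main() {
	// Buckets as they might come from rate(http_request_duration_seconds_bucket[10m]).
	bs := []bucket{{0.1, 30}, {0.5, 80}, {1, 95}, {math.Inf(+1), 100}}
	fmt.Println(bucketQuantile(0.9, bs)) // ≈ 0.833
}
```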


@@ -411,7 +411,7 @@ type Histogram struct {
 	SampleCount uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount,proto3" json:"sample_count,omitempty"`
 	SampleCountFloat float64 `protobuf:"fixed64,4,opt,name=sample_count_float,json=sampleCountFloat,proto3" json:"sample_count_float,omitempty"`
 	SampleSum float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum,proto3" json:"sample_sum,omitempty"`
-	// Buckets for the conventional histogram.
+	// Buckets for the classic histogram.
 	Bucket []Bucket `protobuf:"bytes,3,rep,name=bucket,proto3" json:"bucket"`
 	CreatedTimestamp *types.Timestamp `protobuf:"bytes,15,opt,name=created_timestamp,json=createdTimestamp,proto3" json:"created_timestamp,omitempty"`
 	// schema defines the bucket schema. Currently, valid numbers are -4 <= n <= 8.


@@ -76,7 +76,7 @@ message Histogram {
   uint64 sample_count = 1;
   double sample_count_float = 4; // Overrides sample_count if > 0.
   double sample_sum = 2;
-  // Buckets for the conventional histogram.
+  // Buckets for the classic histogram.
   repeated Bucket bucket = 3 [(gogoproto.nullable) = false]; // Ordered in increasing order of upper_bound, +Inf bucket is optional.
   google.protobuf.Timestamp created_timestamp = 15;


@@ -1074,7 +1074,7 @@ type EvalNodeHelper struct {
 	// Caches.
 	// DropMetricName and label_*.
 	Dmn map[uint64]labels.Labels
-	// funcHistogramQuantile for conventional histograms.
+	// funcHistogramQuantile for classic histograms.
 	signatureToMetricWithBuckets map[string]*metricWithBuckets
 	// label_replace.
 	regex *regexp.Regexp


@@ -1176,7 +1176,7 @@ func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *Ev
 	var histogramSamples []Sample
 
 	for _, sample := range inVec {
-		// We are only looking for conventional buckets here. Remember
+		// We are only looking for classic buckets here. Remember
 		// the histograms for later treatment.
 		if sample.H != nil {
 			histogramSamples = append(histogramSamples, sample)
@@ -1207,10 +1207,10 @@ func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *Ev
 	// Now deal with the histograms.
 	for _, sample := range histogramSamples {
 		// We have to reconstruct the exact same signature as above for
-		// a conventional histogram, just ignoring any le label.
+		// a classic histogram, just ignoring any le label.
 		enh.lblBuf = sample.Metric.Bytes(enh.lblBuf)
 		if mb, ok := enh.signatureToMetricWithBuckets[string(enh.lblBuf)]; ok && len(mb.buckets) > 0 {
-			// At this data point, we have conventional histogram
+			// At this data point, we have classic histogram
 			// buckets and a native histogram with the same name and
 			// labels. Do not evaluate anything.
 			annos.Add(annotations.NewMixedClassicNativeHistogramsWarning(sample.Metric.Get(labels.MetricName), args[1].PositionRange()))
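For orientation on the comment being renamed here ("reconstruct the exact same signature … just ignoring any le label"): classic-histogram bucket samples are grouped under a signature built from their labels with `le` dropped, and the same signature is later used to check whether a native histogram sample clashes with classic buckets of the same name and labels. The following is a stripped-down sketch of that grouping idea only; the real engine works on `labels.Labels` and reuses `enh.lblBuf` rather than building strings ad hoc.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// signatureWithoutLe builds a stable grouping key from a label set while
// dropping the `le` label, analogous in spirit to the keys stored in
// signatureToMetricWithBuckets in the hunk above.
func signatureWithoutLe(lbls map[string]string) string {
	names := make([]string, 0, len(lbls))
	for name := range lbls {
		if name == "le" {
			continue
		}
		names = append(names, name)
	}
	sort.Strings(names)
	var sb strings.Builder
	for _, name := range names {
		sb.WriteString(name)
		sb.WriteByte('=')
		sb.WriteString(lbls[name])
		sb.WriteByte(';')
	}
	return sb.String()
}

func main() {
	// Two buckets of the same classic histogram differ only in `le`,
	// so they map to the same signature and are collected together.
	b1 := map[string]string{"__name__": "http_request_duration_seconds_bucket", "job": "api", "le": "0.5"}
	b2 := map[string]string{"__name__": "http_request_duration_seconds_bucket", "job": "api", "le": "+Inf"}

	groups := map[string][]string{}
	for _, lbls := range []map[string]string{b1, b2} {
		sig := signatureWithoutLe(lbls)
		groups[sig] = append(groups[sig], lbls["le"])
	}
	for sig, les := range groups {
		fmt.Println(sig, les) // one group containing both buckets
	}
}
```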