prometheus/promql/functions.go
Nicolás Pazos aa3513fc89
remote write 2.0: sync with main branch (#13510)
* consoles: exclude iowait and steal from CPU Utilisation

'iowait' and 'steal' indicate specific idle/wait states, which shouldn't
be counted towards CPU Utilisation. Also see
https://github.com/prometheus-operator/kube-prometheus/pull/796 and
https://github.com/kubernetes-monitoring/kubernetes-mixin/pull/667.

Per the iostat man page:

%idle
    Show the percentage of time that the CPU or CPUs were idle and the
    system did not have an outstanding disk I/O request.

%iowait
     Show the percentage of time that the CPU or CPUs were idle during
     which the system had an outstanding disk I/O request.

%steal
     Show the percentage of time spent in involuntary wait by the
     virtual CPU or CPUs while the hypervisor was servicing another
     virtual processor.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>

* tsdb: shrink txRing with smaller integers

4 billion active transactions ought to be enough for anyone.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* tsdb: create isolation transaction slice on demand

When Prometheus restarts it creates every series read in from the WAL,
but many of those series will be finished, and never receive any more
samples. By deferring allocation of the txRing slice to when it is first
needed, we save 32 bytes per stale series.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* add cluster variable to Overview dashboard

Signed-off-by: Erik Sommer <ersotech@posteo.de>

* promql: simplify Native Histogram arithmetics

Signed-off-by: Linas Medziunas <linas.medziunas@gmail.com>

* Cut 2.49.0-rc.0 (#13270)

* Cut 2.49.0-rc.0

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Removed the duplicate.

Signed-off-by: bwplotka <bwplotka@gmail.com>

---------

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Add unit protobuf parser

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* Go on adding protobuf parsing for unit

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* ui: create a reproduction for https://github.com/prometheus/prometheus/issues/13292

Signed-off-by: machine424 <ayoubmrini424@gmail.com>

* Get conditional right

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* Get VM Scale Set NIC (#13283)

Calling `*armnetwork.InterfacesClient.Get()` doesn't work for Scale Set
VM NIC, because these use a different Resource ID format.

Use `*armnetwork.InterfacesClient.GetVirtualMachineScaleSetNetworkInterface()`
instead.  This needs both the scale set name and the instance ID, so
add an `InstanceID` field to the `virtualMachine` struct.  `InstanceID`
is empty for a VM that isn't a ScaleSetVM.

Signed-off-by: Daniel Nicholls <daniel.nicholls@resdiary.com>

* Cut v2.49.0-rc.1

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Delete debugging lines, amend error message for unit

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* Correct order in error message

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* Consider storage.ErrTooOldSample as non-retryable

Signed-off-by: Daniel Kerbel <nmdanny@gmail.com>

* scrape_test.go: Increase scrape interval in TestScrapeLoopCache to reduce potential flakiness

Signed-off-by: machine424 <ayoubmrini424@gmail.com>

* Avoid creating string for suffix, consider counters without _total suffix

Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>

* build(deps): bump github.com/prometheus/client_golang

Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* build(deps): bump actions/setup-node from 3.8.1 to 4.0.1

Bumps [actions/setup-node](https://github.com/actions/setup-node) from 3.8.1 to 4.0.1.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](5e21ff4d9b...b39b52d121)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* scripts: sort file list in embed directive

Otherwise the resulting string depends on find, which afaict depends on
the underlying filesystem. A stable file list makes it easier to detect
UI changes in downstreams that need to track UI assets.

Signed-off-by: Jan Fajerski <jfajersk@redhat.com>

* Fix DataTableProps['data'] for resultType string

Signed-off-by: Kevin Mingtarja <kevin.mingtarja@gmail.com>

* Fix handling of scalar and string in isHeatmapData

Signed-off-by: Kevin Mingtarja <kevin.mingtarja@gmail.com>

* build(deps): bump github.com/influxdata/influxdb

Bumps [github.com/influxdata/influxdb](https://github.com/influxdata/influxdb) from 1.11.2 to 1.11.4.
- [Release notes](https://github.com/influxdata/influxdb/releases)
- [Commits](https://github.com/influxdata/influxdb/compare/v1.11.2...v1.11.4)

---
updated-dependencies:
- dependency-name: github.com/influxdata/influxdb
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* build(deps): bump github.com/prometheus/prometheus

Bumps [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus) from 0.48.0 to 0.48.1.
- [Release notes](https://github.com/prometheus/prometheus/releases)
- [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/prometheus/compare/v0.48.0...v0.48.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump client_golang to v1.18.0 (#13373)

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Drop old inmemory samples (#13002)

* Drop old inmemory samples

Co-authored-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Avoid copying timeseries when the feature is disabled

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Run gofmt

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Clarify docs

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Add more logging info

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Remove loggers

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* optimize function and add tests

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Simplify filter

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* rename var

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Update help info from metrics

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* use metrics to keep track of drop elements during buildWriteRequest

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* rename var in tests

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* pass time.Now as parameter

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Change buildwriterequest during retries

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Revert "Remove loggers"

This reverts commit 54f91dfcae20488944162335ab4ad8be459df1ab.

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* use log level debug for loggers

Signed-off-by: Marc Tuduri <marctc@protonmail.com>

* Fix linter

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Remove noisy debug-level logs; add 'reason' label to drop metrics

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Remove accidentally committed files

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Propagate logger to buildWriteRequest to log dropped data

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Fix docs comment

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Make drop reason more specific

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Remove unnecessary pass of logger

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Use snake_case for reason label

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

* Fix dropped samples metric

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

---------

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Marc Tuduri <marctc@protonmail.com>
Signed-off-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>
Co-authored-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Co-authored-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>

* fix(discovery): allow requireUpdate util to timeout in discovery/file/file_test.go.

The loop would run indefinitely if the condition wasn't met.

Before, each iteration created a new timer channel which was always outpaced by
the other timer channel with smaller duration.

Minor detail: there was a memory leak: the resources of the ~10 previous timers were
constantly kept. With the fix, we may keep the resources of one timer around for defaultWait,
but this isn't worth the extra changes needed to get it exactly right.

Signed-off-by: machine424 <ayoubmrini424@gmail.com>

* Merge pull request #13371 from kevinmingtarja/fix-isHeatmapData

ui: fix handling of scalar and string in isHeatmapData

* tsdb/{index,compact}: allow using custom postings encoding format (#13242)

* tsdb/{index,compact}: allow using custom postings encoding format

We would like to experiment with a different postings encoding format in
Thanos so in this change I am proposing adding another argument to
`NewWriter` which would allow users to change the format if needed.
Also, wire the leveled compactor so that it would be possible to change
the format there too.

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* tsdb/compact: use a struct for leveled compactor options

As discussed on Slack, let's use a struct for the options in leveled
compactor.

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* tsdb: make changes after Bryan's review

- Make changes less intrusive
- Turn the postings encoder type into a function
- Add NewWriterWithEncoder()

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

---------

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* Cut 2.49.0-rc.2

Signed-off-by: bwplotka <bwplotka@gmail.com>

* build(deps): bump actions/setup-go from 3.5.0 to 5.0.0 in /scripts (#13362)

Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3.5.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](6edd4406fa...0c52d547c9)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump github/codeql-action from 2.22.8 to 3.22.12 (#13358)

Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2.22.8 to 3.22.12.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](407ffafae6...012739e508)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* put @nexucis as a release shepherd (#13383)

Signed-off-by: Augustin Husson <augustin.husson@amadeus.com>

* Add analyze histograms command to promtool (#12331)

Add `query analyze` command to promtool

This command analyzes the buckets of classic and native histograms,
based on data queried from the Prometheus query API, i.e. it
doesn't require direct access to the TSDB files.

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>

---------

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>

* included instance in all necessary descriptions

Signed-off-by: Erik Sommer <ersotech@posteo.de>

* tsdb/compact: fix passing merge func

Fixing a very small logical problem I've introduced :(.

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* tsdb: add enable overlapping compaction

This functionality is needed in downstream projects because they have a
separate component that does compaction.

Upstreaming
7c8e9a2a76/tsdb/compact.go (L323-L325).

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* Cut 2.49.0

Signed-off-by: bwplotka <bwplotka@gmail.com>

* promtool: allow setting multiple matchers to "promtool tsdb dump" command. (#13296)

Conditions are ANDed inside the same matcher, but matchers are ORed.

Including unit tests for "promtool tsdb dump".

Refactor some matchers scraping utils.

Signed-off-by: machine424 <ayoubmrini424@gmail.com>

* Fixed changelog

Signed-off-by: bwplotka <bwplotka@gmail.com>

* tsdb/main: wire "EnableOverlappingCompaction" to tsdb.Options (#13398)

This added the https://github.com/prometheus/prometheus/pull/13393
"EnableOverlappingCompaction" parameter to the compactor code but not to
the tsdb.Options. I forgot about that. Add it to `tsdb.Options` too and
set it to `true` in Prometheus.

Copy/paste the description from
https://github.com/prometheus/prometheus/pull/13393#issuecomment-1891787986

Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>

* Issue #13268: fix quality value in accept header

Signed-off-by: Kumar Kalpadiptya Roy <kalpadiptya.roy@outlook.com>

* Cut 2.49.1 with scrape q= bugfix.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Cut 2.49.1 web package.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Restore more efficient version of NewPossibleNonCounterInfo annotation (#13022)

Restore more efficient version of NewPossibleNonCounterInfo annotation

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>

---------

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>

* Fix regressions introduced by #13242

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* fix slice copy in 1.20 (#13389)

The slices package was added to the standard library in Go 1.21;
we need to import it from the exp area to maintain compatibility with Go 1.20.

Signed-off-by: tyltr <tylitianrui@126.com>

* Docs: Query Basics: link to rate (#10538)

Co-authored-by: Julien Pivotto <roidelapluie@o11y.eu>

* chore(kubernetes): check preconditions earlier and avoid unnecessary checks or iterations

Signed-off-by: machine424 <ayoubmrini424@gmail.com>

* Examples: link to `rate` for new users (#10535)

* Examples: link to `rate` for new users

Signed-off-by: Ted Robertson 10043369+tredondo@users.noreply.github.com
Co-authored-by: Bryan Boreham <bjboreham@gmail.com>

* promql: use natural sort in sort_by_label and sort_by_label_desc (#13411)

These functions are intended for humans, as robots can already sort the results
however they please. Humans like things sorted "naturally":

* https://blog.codinghorror.com/sorting-for-humans-natural-sort-order/

A similar thing has been done to Grafana, which is also used by humans:

* https://github.com/grafana/grafana/pull/78024
* https://github.com/grafana/grafana/pull/78494

Signed-off-by: Ivan Babrou <github@ivan.computer>

* TestLabelValuesWithMatchers: Add test case

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

* remove  obsolete build tag

Signed-off-by: tyltr <tylitianrui@126.com>

* Upgrade some golang dependencies for resty 2.11

Signed-off-by: Israel Blancas <iblancasa@gmail.com>

* Native Histograms: support `native_histogram_min_bucket_factor` in scrape_config (#13222)

Native Histograms: support native_histogram_min_bucket_factor in scrape_config

---------

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>

* Add warnings for histogramRate applied with isCounter not matching counter/gauge histogram (#13392)

Add warnings for histogramRate applied with isCounter not matching counter/gauge histogram

---------

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>

* Minor fixes to otlp vendor update script

Signed-off-by: Goutham <gouthamve@gmail.com>

* build(deps): bump github.com/hetznercloud/hcloud-go/v2

Bumps [github.com/hetznercloud/hcloud-go/v2](https://github.com/hetznercloud/hcloud-go) from 2.4.0 to 2.6.0.
- [Release notes](https://github.com/hetznercloud/hcloud-go/releases)
- [Changelog](https://github.com/hetznercloud/hcloud-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hetznercloud/hcloud-go/compare/v2.4.0...v2.6.0)

---
updated-dependencies:
- dependency-name: github.com/hetznercloud/hcloud-go/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Enhanced visibility for `promtool test rules` with JSON colored formatting (#13342)

* Added diff flag for unit test to improve readability & debugging

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Removed blank spaces

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Fixed linting error

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Added cli flags to documentation

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Revert unrelated linting fixes

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Fixed review suggestions

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Cleanup

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Updated flag description

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Updated flag description

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

---------

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* storage: skip merging when no remote storage configured

Prometheus is hard-coded to use a fanout storage between TSDB and
a remote storage which by default is empty.
This change detects the empty storage and skips merging between
result sets, which would make `Select()` sort results.

Bottom line: we skip a sort unless there really is some remote storage
configured.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Remove csmarchbanks from remote write owners (#13432)

I have not had the time to keep up with remote write and have no plans
to work on it in the near future so I am withdrawing my maintainership
of that part of the codebase. I continue to focus on client_python.

Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>

* add more context cancellation check at evaluation time

Signed-off-by: Ben Ye <benye@amazon.com>

* Optimize label values with matchers by taking shortcuts (#13426)

Don't calculate postings beforehand: we may not need them. If all
matchers are for the requested label, we can just filter its values.

Also, if there are no values at all, no need to run any kind of
logic.

Also add more labelValuesWithMatchers benchmarks

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>

* Add automatic memory limit handling

Enable automatic detection of memory limits and configure GOMEMLIMIT to
match.
* Also includes a flag to allow controlling the reserved ratio.

Signed-off-by: SuperQ <superq@gmail.com>

* Update OSSF badge link (#13433)

Provide a more user friendly interface

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>

* SD Managers taking over responsibility for registration of debug metrics (#13375)

SD Managers take over responsibility for SD metrics registration

---------

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: Björn Rabenstein <github@rabenste.in>

* Optimize histogram iterators (#13340)

Optimize histogram iterators

Histogram iterators allocate new objects in the AtHistogram and
AtFloatHistogram methods, which makes calculating rates over long
ranges expensive.

In #13215 we allowed an existing object to be reused
when converting an integer histogram to a float histogram. This commit follows
the same idea and allows injecting an existing object in the AtHistogram and
AtFloatHistogram methods. When the injected value is nil, iterators allocate
new histograms, otherwise they populate and return the injected object.

The commit also adds a CopyTo method to Histogram and FloatHistogram which
is used in the BufferedIterator to overwrite items in the ring instead of making
new copies.

Note that a specialized HPoint pool is needed for all of this to work 
(`matrixSelectorHPool`).
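
A rough usage sketch of the reuse pattern described above (illustrative; the
surrounding loop and variable names are assumptions, not code from this PR):

    var fh *histogram.FloatHistogram
    for it.Next() == chunkenc.ValFloatHistogram {
        var t int64
        // Passing the previous object lets the iterator reuse it;
        // passing nil makes it allocate a fresh histogram.
        t, fh = it.AtFloatHistogram(fh)
        _ = t
    }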

---------

Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>

* doc: Mark `mad_over_time` as experimental (#13440)

We forgot to do that in
https://github.com/prometheus/prometheus/pull/13059

Signed-off-by: beorn7 <beorn@grafana.com>

* Change metric label for Puppetdb from 'http' to 'puppetdb'

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>

* mirror metrics.proto change & generate code

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>

* TestHeadLabelValuesWithMatchers: Add test case (#13414)

Add test case to TestHeadLabelValuesWithMatchers, while fixing a couple
of typos in other test cases. Also enclosing some implicit sub-tests in a
`t.Run` call to make them explicit sub-tests.

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

* update all go dependencies (#13438)

Signed-off-by: Augustin Husson <husson.augustin@gmail.com>

* build(deps): bump the k8s-io group with 2 updates (#13454)

Bumps the k8s-io group with 2 updates: [k8s.io/api](https://github.com/kubernetes/api) and [k8s.io/client-go](https://github.com/kubernetes/client-go).


Updates `k8s.io/api` from 0.28.4 to 0.29.1
- [Commits](https://github.com/kubernetes/api/compare/v0.28.4...v0.29.1)

Updates `k8s.io/client-go` from 0.28.4 to 0.29.1
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.28.4...v0.29.1)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: k8s-io
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: k8s-io
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump the go-opentelemetry-io group with 1 update (#13453)

Bumps the go-opentelemetry-io group with 1 update: [go.opentelemetry.io/collector/semconv](https://github.com/open-telemetry/opentelemetry-collector).


Updates `go.opentelemetry.io/collector/semconv` from 0.92.0 to 0.93.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-collector/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-collector/blob/main/CHANGELOG-API.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-collector/compare/v0.92.0...v0.93.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/collector/semconv
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: go-opentelemetry-io
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump actions/upload-artifact from 3.1.3 to 4.0.0 (#13355)

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.3 to 4.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](a8a3f3ad30...c7d193f32e)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump bufbuild/buf-push-action (#13357)

Bumps [bufbuild/buf-push-action](https://github.com/bufbuild/buf-push-action) from 342fc4cdcf29115a01cf12a2c6dd6aac68dc51e1 to a654ff18effe4641ebea4a4ce242c49800728459.
- [Release notes](https://github.com/bufbuild/buf-push-action/releases)
- [Commits](342fc4cdcf...a654ff18ef)

---
updated-dependencies:
- dependency-name: bufbuild/buf-push-action
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Labels: Add DropMetricName function, used in PromQL (#13446)

This function is called very frequently when executing PromQL functions,
and we can do it much more efficiently inside Labels.

In the common case that `__name__` comes first in the labels, we simply
re-point to start at the next label, which is nearly free.

`DropMetricName` is now so cheap I removed the cache - benchmarks show
everything still goes faster.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
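
A rough sketch of the idea (illustrative; the struct and helper below are
hypothetical stand-ins, not the actual Labels internals):

    type label struct{ Name, Value string }

    // Labels are stored sorted by name; in the common case "__name__"
    // sorts first, so dropping it is just a re-slice with no allocation.
    func dropMetricName(ls []label) []label {
        if len(ls) > 0 && ls[0].Name == "__name__" {
            return ls[1:]
        }
        out := make([]label, 0, len(ls))
        for _, l := range ls {
            if l.Name != "__name__" {
                out = append(out, l)
            }
        }
        return out
    }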

* tsdb: simplify internal series delete function (#13261)

Lifting an optimisation from Agent code, `seriesHashmap.del` can use
the unique series reference, and doesn't need to check Labels.
Also streamline the logic for deleting from `unique` and `conflicts` maps,
and add some comments to help the next person.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* otlptranslator/update-copy.sh: Fix sed command lines

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

* Rollback k8s.io requirements (#13462)

Rollback k8s.io Go modules to v0.28.6 to avoid forcing upgrade of Go to
1.21. This allows us to keep compatibility with the currently supported
upstream Go releases.

Signed-off-by: SuperQ <superq@gmail.com>

* Make update-copy.sh work for both OSX and GNU sed

Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>

* Name @beorn7 and @krajorama as maintainers for native histograms

I have been the de-facto maintainer for native histograms from the
beginning. So let's put this into MAINTAINERS.md.

In addition, I hereby propose George Krajcsovits AKA Krajo as a
co-maintainer. He has contributed a lot of native histogram code, but
more importantly, he has contributed substantially to reviewing other
contributors' native histogram code, up to a point where I was merely
rubberstamping the PRs he had already reviewed. I'm confident that he
is ready to be granted commit rights as outlined in the
"Maintainers" section of the governance:
https://prometheus.io/governance/#maintainers

According to the same section of the governance, I will announce the
proposed change on the developers mailing list and will give some time
for lazy consensus before merging this PR.

Signed-off-by: beorn7 <beorn@grafana.com>

* ui/fix: correct url handling for stacked graphs (#13460)

Signed-off-by: Yury Moladau <yurymolodov@gmail.com>

* tsdb: use cheaper Mutex on series

Mutex is 8 bytes; RWMutex is 24 bytes and much more complicated. Since
`RLock` is only used in two places, `UpdateMetadata` and `Delete`,
neither of which are hotspots, we should use the cheaper one.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Fix last_over_time for native histograms

The last_over_time function retains a histogram sample without making a copy.
This sample is now coming from the buffered iterator used for windowing functions,
and can be reused for reading subsequent samples as the iterator progresses.

I would propose copying the sample in the last_over_time function, similar to
how it is done for rate, sum_over_time and others.

Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>

* Implementation

NOTE:
Rebased from main after refactor in #13014

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Add feature flag

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Refactor concurrency control

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Optimising dependencies/dependents funcs to not produce new slices each request

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Refactoring

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Rename flag

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Refactoring for performance, and to allow controller to be overridden

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Block until all rules, both sync & async, have completed evaluating
Updated & added tests
Review feedback nits
Return empty map if not indeterminate
Use highWatermark to track inflight requests counter
Appease the linter
Clarify feature flag

Signed-off-by: Danny Kopping <danny.kopping@grafana.com>

* Fix typo in CLI flag description

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed auto-generated doc

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Improve doc

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Simplify the design to update concurrency controller once the rule evaluation has done

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Add more test cases to TestDependenciesEdgeCases

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Added more test cases to TestDependenciesEdgeCases

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Improved RuleConcurrencyController interface doc

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Introduced sequentialRuleEvalController

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Remove superfluous nil check in Group.metrics

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* api: Serialize discovered and target labels into JSON directly (#13469)

Converted maps into labels.Labels to avoid a lot of copying of data, which led to very high memory consumption when opening the /service-discovery endpoint in the Prometheus UI.

Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>

* api: Serialize discovered labels into JSON directly in dropped targets (#13484)

Converted maps into labels.Labels to avoid a lot of copying of data, which led to very high memory consumption when opening the /service-discovery endpoint in the Prometheus UI.

Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>

* Add ShardedPostings() support to TSDB (#10421)

This PR is a reference implementation of the proposal described in #10420.

In addition to what is described in #10420, this PR introduces labels.StableHash(). The idea is to offer a hashing function which doesn't change over time, and that's used by query sharding in order to get stable behaviour over time. The implementation of labels.StableHash() is the hashing function used by Prometheus before stringlabels, and is what's used by Grafana Mimir for query sharding (because it was built before stringlabels was a thing).

Follow-up work
As mentioned in #10420, if this PR is accepted I'm also open to uploading another fundamental piece used by Grafana Mimir query sharding to accelerate query execution: an optional, configurable and fast in-memory cache for the series hashes.

Signed-off-by: Marco Pracucci <marco@pracucci.com>
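
As a rough illustration of how a stable hash enables query sharding
(shardCount and the modulo bucketing below are assumptions for the sketch,
not the ShardedPostings API):

    // A series belongs to shard i when its stable label hash falls into
    // bucket i. Because the hash never changes across Prometheus versions,
    // the assignment stays stable over time.
    shard := labels.StableHash(lbls) % uint64(shardCount)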

* storage/remote: document why two benchmarks are skipped

One was silently doing nothing; one was doing something but the work
didn't go up linearly with iteration count.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Pod status changes not discovered by Kube Endpoints SD (#13337)

* fix(discovery/kubernetes/endpoints): react to changes on Pods, because some modifications can occur on them without triggering an update on the related Endpoints (e.g. the Pod phase changing from Pending to Running).

---------

Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Co-authored-by: Guillermo Sanchez Gavier <gsanchez@newrelic.com>

* Small improvements, add const, remove copypasta (#8106)

Signed-off-by: Mikhail Fesenko <proggga@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>

* Proposal to improve FPointSlice and HPointSlice allocation. (#13448)

* Reusing the points slice from the previous series when the slice is underutilized
* Adding comments on the bench test

Signed-off-by: Alan Protasio <alanprot@gmail.com>

* lint

Signed-off-by: Nicolás Pazos <npazosmendez@gmail.com>

* go mod tidy

Signed-off-by: Nicolás Pazos <npazosmendez@gmail.com>

---------

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Erik Sommer <ersotech@posteo.de>
Signed-off-by: Linas Medziunas <linas.medziunas@gmail.com>
Signed-off-by: bwplotka <bwplotka@gmail.com>
Signed-off-by: Arianna Vespri <arianna.vespri@yahoo.it>
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Signed-off-by: Daniel Nicholls <daniel.nicholls@resdiary.com>
Signed-off-by: Daniel Kerbel <nmdanny@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jan Fajerski <jfajersk@redhat.com>
Signed-off-by: Kevin Mingtarja <kevin.mingtarja@gmail.com>
Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Signed-off-by: Marc Tuduri <marctc@protonmail.com>
Signed-off-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Signed-off-by: Augustin Husson <augustin.husson@amadeus.com>
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
Signed-off-by: Kumar Kalpadiptya Roy <kalpadiptya.roy@outlook.com>
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Signed-off-by: tyltr <tylitianrui@126.com>
Signed-off-by: Ted Robertson 10043369+tredondo@users.noreply.github.com
Signed-off-by: Ivan Babrou <github@ivan.computer>
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Signed-off-by: Goutham <gouthamve@gmail.com>
Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
Signed-off-by: Ben Ye <benye@amazon.com>
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Signed-off-by: SuperQ <superq@gmail.com>
Signed-off-by: Ben Kochie <superq@gmail.com>
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Signed-off-by: beorn7 <beorn@grafana.com>
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
Signed-off-by: Yury Moladau <yurymolodov@gmail.com>
Signed-off-by: Danny Kopping <danny.kopping@grafana.com>
Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>
Signed-off-by: Mikhail Fesenko <proggga@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Signed-off-by: Alan Protasio <alanprot@gmail.com>
Signed-off-by: Nicolás Pazos <npazosmendez@gmail.com>
Co-authored-by: Julian Wiedmann <jwi@linux.ibm.com>
Co-authored-by: Bryan Boreham <bjboreham@gmail.com>
Co-authored-by: Erik Sommer <ersotech@posteo.de>
Co-authored-by: Linas Medziunas <linas.medziunas@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Co-authored-by: Arianna Vespri <arianna.vespri@yahoo.it>
Co-authored-by: machine424 <ayoubmrini424@gmail.com>
Co-authored-by: daniel-resdiary <109083091+daniel-resdiary@users.noreply.github.com>
Co-authored-by: Daniel Kerbel <nmdanny@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jan Fajerski <jfajersk@redhat.com>
Co-authored-by: Kevin Mingtarja <kevin.mingtarja@gmail.com>
Co-authored-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>
Co-authored-by: Marc Tudurí <marctc@protonmail.com>
Co-authored-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Co-authored-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Co-authored-by: Augustin Husson <husson.augustin@gmail.com>
Co-authored-by: Björn Rabenstein <beorn@grafana.com>
Co-authored-by: zenador <zenador@users.noreply.github.com>
Co-authored-by: gotjosh <josue.abreu@gmail.com>
Co-authored-by: Ben Kochie <superq@gmail.com>
Co-authored-by: Kumar Kalpadiptya Roy <kalpadiptya.roy@outlook.com>
Co-authored-by: Marco Pracucci <marco@pracucci.com>
Co-authored-by: tyltr <tylitianrui@126.com>
Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
Co-authored-by: Julien Pivotto <roidelapluie@o11y.eu>
Co-authored-by: Matthias Loibl <mail@matthiasloibl.com>
Co-authored-by: Ivan Babrou <github@ivan.computer>
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>
Co-authored-by: Israel Blancas <iblancasa@gmail.com>
Co-authored-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: Goutham <gouthamve@gmail.com>
Co-authored-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>
Co-authored-by: Chris Marchbanks <csmarchbanks@gmail.com>
Co-authored-by: Ben Ye <benye@amazon.com>
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Co-authored-by: Paulin Todev <paulin.todev@gmail.com>
Co-authored-by: Filip Petkovski <filip.petkovsky@gmail.com>
Co-authored-by: Yury Molodov <yurymolodov@gmail.com>
Co-authored-by: Danny Kopping <danny.kopping@grafana.com>
Co-authored-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>
Co-authored-by: Guillermo Sanchez Gavier <gsanchez@newrelic.com>
Co-authored-by: Mikhail Fesenko <proggga@gmail.com>
Co-authored-by: Alan Protasio <alanprot@gmail.com>
2024-02-02 10:38:50 -08:00

// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package promql

import (
	"fmt"
	"math"
	"sort"
	"strconv"
	"strings"
	"time"

	"github.com/facette/natsort"
	"github.com/grafana/regexp"
	"github.com/prometheus/common/model"
	"golang.org/x/exp/slices"

	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql/parser"
	"github.com/prometheus/prometheus/promql/parser/posrange"
	"github.com/prometheus/prometheus/util/annotations"
)

// FunctionCall is the type of a PromQL function implementation.
//
// vals is a list of the evaluated arguments for the function call.
//
// For range vectors it will be a Matrix with one series, instant vectors a
// Vector, scalars a Vector with one series whose value is the scalar
// value, and nil for strings.
//
// args are the original arguments to the function, where you can access
// matrixSelectors, vectorSelectors, and StringLiterals.
//
// enh.Out is a pre-allocated empty vector that you may use to accumulate
// output before returning it. The vectors in vals should not be returned.
//
// Range vector functions need only return a vector with the right value;
// the metric and timestamp are not needed.
//
// Instant vector functions need only return a vector with the right values and
// metrics; the timestamps are not needed.
//
// Scalar results should be returned as the value of a sample in a Vector.
type FunctionCall func(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations)

// === time() float64 ===
func funcTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return Vector{Sample{
		F: float64(enh.Ts) / 1000,
	}}, nil
}

// extrapolatedRate is a utility function for rate/increase/delta.
// It calculates the rate (allowing for counter resets if isCounter is true),
// extrapolates if the first/last sample is close to the boundary, and returns
// the result as either per-second (if isRate is true) or overall.
func extrapolatedRate(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper, isCounter, isRate bool) (Vector, annotations.Annotations) {
	ms := args[0].(*parser.MatrixSelector)
	vs := ms.VectorSelector.(*parser.VectorSelector)
	var (
		samples            = vals[0].(Matrix)[0]
		rangeStart         = enh.Ts - durationMilliseconds(ms.Range+vs.Offset)
		rangeEnd           = enh.Ts - durationMilliseconds(vs.Offset)
		resultFloat        float64
		resultHistogram    *histogram.FloatHistogram
		firstT, lastT      int64
		numSamplesMinusOne int
		annos              annotations.Annotations
	)

	// We need either at least two Histograms and no Floats, or at least two
	// Floats and no Histograms to calculate a rate. Otherwise, drop this
	// Vector element.
	metricName := samples.Metric.Get(labels.MetricName)
	if len(samples.Histograms) > 0 && len(samples.Floats) > 0 {
		return enh.Out, annos.Add(annotations.NewMixedFloatsHistogramsWarning(metricName, args[0].PositionRange()))
	}

	switch {
	case len(samples.Histograms) > 1:
		numSamplesMinusOne = len(samples.Histograms) - 1
		firstT = samples.Histograms[0].T
		lastT = samples.Histograms[numSamplesMinusOne].T
		var newAnnos annotations.Annotations
		resultHistogram, newAnnos = histogramRate(samples.Histograms, isCounter, metricName, args[0].PositionRange())
		if resultHistogram == nil {
			// The histograms are not compatible with each other.
			return enh.Out, annos.Merge(newAnnos)
		}
	case len(samples.Floats) > 1:
		numSamplesMinusOne = len(samples.Floats) - 1
		firstT = samples.Floats[0].T
		lastT = samples.Floats[numSamplesMinusOne].T
		resultFloat = samples.Floats[numSamplesMinusOne].F - samples.Floats[0].F
		if !isCounter {
			break
		}
		// Handle counter resets:
		prevValue := samples.Floats[0].F
		for _, currPoint := range samples.Floats[1:] {
			if currPoint.F < prevValue {
				resultFloat += prevValue
			}
			prevValue = currPoint.F
		}
	default:
		// TODO: add RangeTooShortWarning
		return enh.Out, annos
	}

	// Duration between first/last samples and boundary of range.
	durationToStart := float64(firstT-rangeStart) / 1000
	durationToEnd := float64(rangeEnd-lastT) / 1000

	sampledInterval := float64(lastT-firstT) / 1000
	averageDurationBetweenSamples := sampledInterval / float64(numSamplesMinusOne)

	// TODO(beorn7): Do this for histograms, too.
	if isCounter && resultFloat > 0 && len(samples.Floats) > 0 && samples.Floats[0].F >= 0 {
		// Counters cannot be negative. If we have any slope at all
		// (i.e. resultFloat went up), we can extrapolate the zero point
		// of the counter. If the duration to the zero point is shorter
		// than the durationToStart, we take the zero point as the start
		// of the series, thereby avoiding extrapolation to negative
		// counter values.
		durationToZero := sampledInterval * (samples.Floats[0].F / resultFloat)
		if durationToZero < durationToStart {
			durationToStart = durationToZero
		}
	}

	// If the first/last samples are close to the boundaries of the range,
	// extrapolate the result. This is as we expect that another sample
	// will exist given the spacing between samples we've seen thus far,
	// with an allowance for noise.
	extrapolationThreshold := averageDurationBetweenSamples * 1.1
	extrapolateToInterval := sampledInterval

	if durationToStart < extrapolationThreshold {
		extrapolateToInterval += durationToStart
	} else {
		extrapolateToInterval += averageDurationBetweenSamples / 2
	}
	if durationToEnd < extrapolationThreshold {
		extrapolateToInterval += durationToEnd
	} else {
		extrapolateToInterval += averageDurationBetweenSamples / 2
	}

	factor := extrapolateToInterval / sampledInterval
	if isRate {
		factor /= ms.Range.Seconds()
	}
	if resultHistogram == nil {
		resultFloat *= factor
	} else {
		resultHistogram.Mul(factor)
	}

	return append(enh.Out, Sample{F: resultFloat, H: resultHistogram}), annos
}
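
// A worked example of the extrapolation above (illustrative numbers only):
// for rate(x[60s]) with samples every 5s, the first at 5s and the last at
// 55s into the range,
//
//	sampledInterval               = 55s - 5s = 50s
//	averageDurationBetweenSamples = 50s / 10 = 5s
//	extrapolationThreshold        = 5s * 1.1 = 5.5s
//
// Both durationToStart (5s) and durationToEnd (5s) are below the threshold,
// so extrapolateToInterval = 50s + 5s + 5s = 60s and factor = 60/50 = 1.2.
// For rate() the factor is additionally divided by the 60s range to yield a
// per-second value.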

// histogramRate is a helper function for extrapolatedRate. It requires
// points[0] to be a histogram. It returns nil if any other Point in points is
// not a histogram, and a warning wrapped in an annotation in that case.
// Otherwise, it returns the calculated histogram and an empty annotation.
func histogramRate(points []HPoint, isCounter bool, metricName string, pos posrange.PositionRange) (*histogram.FloatHistogram, annotations.Annotations) {
	prev := points[0].H
	last := points[len(points)-1].H
	if last == nil {
		return nil, annotations.New().Add(annotations.NewMixedFloatsHistogramsWarning(metricName, pos))
	}
	minSchema := prev.Schema
	if last.Schema < minSchema {
		minSchema = last.Schema
	}

	var annos annotations.Annotations

	// First iteration to find out two things:
	// - What's the smallest relevant schema?
	// - Are all data points histograms?
	// TODO(beorn7): Find a way to check that earlier, e.g. by handing in a
	// []FloatPoint and a []HistogramPoint separately.
	for _, currPoint := range points[1 : len(points)-1] {
		curr := currPoint.H
		if curr == nil {
			return nil, annotations.New().Add(annotations.NewMixedFloatsHistogramsWarning(metricName, pos))
		}
		if !isCounter {
			continue
		}
		if curr.CounterResetHint == histogram.GaugeType {
			annos.Add(annotations.NewNativeHistogramNotCounterWarning(metricName, pos))
		}
		if curr.Schema < minSchema {
			minSchema = curr.Schema
		}
	}

	h := last.CopyToSchema(minSchema)
	h.Sub(prev)

	if isCounter {
		// Second iteration to deal with counter resets.
		for _, currPoint := range points[1:] {
			curr := currPoint.H
			if curr.DetectReset(prev) {
				h.Add(prev)
			}
			prev = curr
		}
	} else if points[0].H.CounterResetHint != histogram.GaugeType || points[len(points)-1].H.CounterResetHint != histogram.GaugeType {
		annos.Add(annotations.NewNativeHistogramNotGaugeWarning(metricName, pos))
	}

	h.CounterResetHint = histogram.GaugeType
	return h.Compact(0), nil
}

// === delta(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcDelta(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return extrapolatedRate(vals, args, enh, false, false)
}

// === rate(node parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcRate(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return extrapolatedRate(vals, args, enh, true, true)
}

// === increase(node parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcIncrease(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return extrapolatedRate(vals, args, enh, true, false)
}

// === irate(node parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcIrate(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return instantValue(vals, enh.Out, true)
}

// === idelta(node model.ValMatrix) (Vector, Annotations) ===
func funcIdelta(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	return instantValue(vals, enh.Out, false)
}

func instantValue(vals []parser.Value, out Vector, isRate bool) (Vector, annotations.Annotations) {
	samples := vals[0].(Matrix)[0]
	// No sense in trying to compute a rate without at least two points. Drop
	// this Vector element.
	// TODO: add RangeTooShortWarning
	if len(samples.Floats) < 2 {
		return out, nil
	}

	lastSample := samples.Floats[len(samples.Floats)-1]
	previousSample := samples.Floats[len(samples.Floats)-2]

	var resultValue float64
	if isRate && lastSample.F < previousSample.F {
		// Counter reset.
		resultValue = lastSample.F
	} else {
		resultValue = lastSample.F - previousSample.F
	}

	sampledInterval := lastSample.T - previousSample.T
	if sampledInterval == 0 {
		// Avoid dividing by 0.
		return out, nil
	}

	if isRate {
		// Convert to per-second.
		resultValue /= float64(sampledInterval) / 1000
	}

	return append(out, Sample{F: resultValue}), nil
}
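
// A worked example for the instant-value functions (illustrative numbers
// only): with a counter sample of 100 at t=30s and 112 at t=60s, irate
// returns (112-100)/(60s-30s) = 0.4/s, while idelta returns 12. If the newer
// value were below the older one, a counter reset is assumed and the newer
// value alone is used as the difference.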

// Calculate the trend value at the given index i in the raw data.
// This is somewhat analogous to the slope of the trend at the given index.
// The argument "tf" is the trend factor.
// The arguments "s0" and "s1" are the previous two computed smoothed values.
// The argument "b" is the previous trend value.
func calcTrendValue(i int, tf, s0, s1, b float64) float64 {
	if i == 0 {
		return b
	}

	x := tf * (s1 - s0)
	y := (1 - tf) * b

	return x + y
}

// Holt-Winters is similar to a weighted moving average, where historical data has exponentially less influence on the current data.
// Holt-Winters also accounts for trends in data. The smoothing factor (0 < sf < 1) affects how historical data will affect the current
// data. A lower smoothing factor increases the influence of historical data. The trend factor (0 < tf < 1) affects
// how trends in historical data will affect the current data. A higher trend factor increases the influence
// of trends. Algorithm taken from https://en.wikipedia.org/wiki/Exponential_smoothing titled: "Double exponential smoothing".
func funcHoltWinters(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	samples := vals[0].(Matrix)[0]

	// The smoothing factor argument.
	sf := vals[1].(Vector)[0].F

	// The trend factor argument.
	tf := vals[2].(Vector)[0].F

	// Check that the input parameters are valid.
	if sf <= 0 || sf >= 1 {
		panic(fmt.Errorf("invalid smoothing factor. Expected: 0 < sf < 1, got: %f", sf))
	}
	if tf <= 0 || tf >= 1 {
		panic(fmt.Errorf("invalid trend factor. Expected: 0 < tf < 1, got: %f", tf))
	}

	l := len(samples.Floats)

	// Can't do the smoothing operation with less than two points.
	if l < 2 {
		return enh.Out, nil
	}

	var s0, s1, b float64
	// Set initial values.
	s1 = samples.Floats[0].F
	b = samples.Floats[1].F - samples.Floats[0].F

	// Run the smoothing operation.
	var x, y float64
	for i := 1; i < l; i++ {
		// Scale the raw value against the smoothing factor.
		x = sf * samples.Floats[i].F

		// Scale the last smoothed value with the trend at this point.
		b = calcTrendValue(i-1, tf, s0, s1, b)
		y = (1 - sf) * (s1 + b)

		s0, s1 = s1, x+y
	}

	return append(enh.Out, Sample{F: s1}), nil
}
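
// The loop above is double exponential smoothing. A minimal standalone
// sketch of the same recurrence (illustrative only; it assumes plain float64
// inputs with at least two samples instead of a promql Series):
//
//	func doubleExpSmooth(xs []float64, sf, tf float64) float64 {
//		s0, s1 := 0.0, xs[0]
//		b := xs[1] - xs[0]
//		for i := 1; i < len(xs); i++ {
//			if i > 1 {
//				b = tf*(s1-s0) + (1-tf)*b
//			}
//			s0, s1 = s1, sf*xs[i]+(1-sf)*(s1+b)
//		}
//		return s1
//	}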

// === sort(node parser.ValueTypeVector) (Vector, Annotations) ===
func funcSort(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	// NaN should sort to the bottom, so take descending sort with NaN first and
	// reverse it.
	byValueSorter := vectorByReverseValueHeap(vals[0].(Vector))
	sort.Sort(sort.Reverse(byValueSorter))
	return Vector(byValueSorter), nil
}

// === sortDesc(node parser.ValueTypeVector) (Vector, Annotations) ===
func funcSortDesc(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	// NaN should sort to the bottom, so take ascending sort with NaN first and
	// reverse it.
	byValueSorter := vectorByValueHeap(vals[0].(Vector))
	sort.Sort(sort.Reverse(byValueSorter))
	return Vector(byValueSorter), nil
}

// === sort_by_label(vector parser.ValueTypeVector, label parser.ValueTypeString...) (Vector, Annotations) ===
func funcSortByLabel(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	// In case the labels are the same, NaN should sort to the bottom, so take
	// ascending sort with NaN first and reverse it.
	var anno annotations.Annotations
	vals[0], anno = funcSort(vals, args, enh)
	labels := stringSliceFromArgs(args[1:])
	slices.SortFunc(vals[0].(Vector), func(a, b Sample) int {
		// Iterate over each given label
		for _, label := range labels {
			lv1 := a.Metric.Get(label)
			lv2 := b.Metric.Get(label)

			if lv1 == lv2 {
				continue
			}

			if natsort.Compare(lv1, lv2) {
				return -1
			}

			return +1
		}

		return 0
	})

	return vals[0].(Vector), anno
}

// === sort_by_label_desc(vector parser.ValueTypeVector, label parser.ValueTypeString...) (Vector, Annotations) ===
func funcSortByLabelDesc(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	// In case the labels are the same, NaN should sort to the bottom, so take
	// ascending sort with NaN first and reverse it.
	var anno annotations.Annotations
	vals[0], anno = funcSortDesc(vals, args, enh)
	labels := stringSliceFromArgs(args[1:])
	slices.SortFunc(vals[0].(Vector), func(a, b Sample) int {
		// Iterate over each given label
		for _, label := range labels {
			lv1 := a.Metric.Get(label)
			lv2 := b.Metric.Get(label)

			if lv1 == lv2 {
				continue
			}

			if natsort.Compare(lv1, lv2) {
				return +1
			}

			return -1
		}

		return 0
	})

	return vals[0].(Vector), anno
}
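
// Natural sorting above gives the "human" ordering: for an instance label it
// yields node1, node2, node10 rather than the lexicographic node1, node10,
// node2. natsort.Compare(a, b) reports whether a should sort before b in
// that order.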

// === clamp(Vector parser.ValueTypeVector, min, max Scalar) (Vector, Annotations) ===
func funcClamp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	vec := vals[0].(Vector)
	min := vals[1].(Vector)[0].F
	max := vals[2].(Vector)[0].F
	if max < min {
		return enh.Out, nil
	}
	for _, el := range vec {
		enh.Out = append(enh.Out, Sample{
			Metric: el.Metric.DropMetricName(),
			F:      math.Max(min, math.Min(max, el.F)),
		})
	}
	return enh.Out, nil
}

// === clamp_max(Vector parser.ValueTypeVector, max Scalar) (Vector, Annotations) ===
func funcClampMax(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	vec := vals[0].(Vector)
	max := vals[1].(Vector)[0].F
	for _, el := range vec {
		enh.Out = append(enh.Out, Sample{
			Metric: el.Metric.DropMetricName(),
			F:      math.Min(max, el.F),
		})
	}
	return enh.Out, nil
}

// === clamp_min(Vector parser.ValueTypeVector, min Scalar) (Vector, Annotations) ===
func funcClampMin(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	vec := vals[0].(Vector)
	min := vals[1].(Vector)[0].F
	for _, el := range vec {
		enh.Out = append(enh.Out, Sample{
			Metric: el.Metric.DropMetricName(),
			F:      math.Max(min, el.F),
		})
	}
	return enh.Out, nil
}

// === round(Vector parser.ValueTypeVector, toNearest=1 Scalar) (Vector, Annotations) ===
func funcRound(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	vec := vals[0].(Vector)
	// round returns a number rounded to toNearest.
	// Ties are solved by rounding up.
	toNearest := float64(1)
	if len(args) >= 2 {
		toNearest = vals[1].(Vector)[0].F
	}
	// Invert as it seems to cause fewer floating point accuracy issues.
	toNearestInverse := 1.0 / toNearest

	for _, el := range vec {
		f := math.Floor(el.F*toNearestInverse+0.5) / toNearestInverse
		enh.Out = append(enh.Out, Sample{
			Metric: el.Metric.DropMetricName(),
			F:      f,
		})
	}
	return enh.Out, nil
}
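
// A worked example of the rounding above (illustrative numbers only): with
// toNearest = 0.5 the expression becomes floor(f*2+0.5)/2, so 2.25 rounds up
// to 2.5 while 2.24 rounds down to 2.0; with the default toNearest = 1, a
// tie such as 2.5 rounds up to 3.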

// === Scalar(node parser.ValueTypeVector) Scalar ===
func funcScalar(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
	v := vals[0].(Vector)
	if len(v) != 1 {
		return append(enh.Out, Sample{F: math.NaN()}), nil
	}
	return append(enh.Out, Sample{F: v[0].F}), nil
}

func aggrOverTime(vals []parser.Value, enh *EvalNodeHelper, aggrFn func(Series) float64) Vector {
	el := vals[0].(Matrix)[0]

	return append(enh.Out, Sample{F: aggrFn(el)})
}

func aggrHistOverTime(vals []parser.Value, enh *EvalNodeHelper, aggrFn func(Series) *histogram.FloatHistogram) Vector {
	el := vals[0].(Matrix)[0]

	return append(enh.Out, Sample{H: aggrFn(el)})
}
// === avg_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcAvgOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
firstSeries := vals[0].(Matrix)[0]
if len(firstSeries.Floats) > 0 && len(firstSeries.Histograms) > 0 {
metricName := firstSeries.Metric.Get(labels.MetricName)
return enh.Out, annotations.New().Add(annotations.NewMixedFloatsHistogramsWarning(metricName, args[0].PositionRange()))
}
if len(firstSeries.Floats) == 0 {
// The passed values only contain histograms.
return aggrHistOverTime(vals, enh, func(s Series) *histogram.FloatHistogram {
count := 1
mean := s.Histograms[0].H.Copy()
for _, h := range s.Histograms[1:] {
count++
left := h.H.Copy().Div(float64(count))
right := mean.Copy().Div(float64(count))
toAdd := left.Sub(right)
mean.Add(toAdd)
}
return mean
}), nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
var mean, count, c float64
for _, f := range s.Floats {
count++
if math.IsInf(mean, 0) {
if math.IsInf(f.F, 0) && (mean > 0) == (f.F > 0) {
// The `mean` and `f.F` values are `Inf` of the same sign. They
// can't be subtracted, but the value of `mean` is correct
// already.
continue
}
if !math.IsInf(f.F, 0) && !math.IsNaN(f.F) {
// At this stage, the mean is infinite. If the added
// value is neither Inf nor NaN, we can keep the current
// mean value.
// This is required because the incremental calculation
// below subtracts the mean, which would look like
// Inf += x - Inf and end up as NaN.
continue
}
}
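// Incremental mean with Kahan compensation:
// mean_n = mean_(n-1) + (f_n - mean_(n-1)) / n.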
mean, c = kahanSumInc(f.F/count-mean/count, mean, c)
}
if math.IsInf(mean, 0) {
return mean
}
return mean + c
}), nil
}
// === count_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcCountOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return aggrOverTime(vals, enh, func(s Series) float64 {
return float64(len(s.Floats) + len(s.Histograms))
}), nil
}
// === last_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcLastOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
el := vals[0].(Matrix)[0]
var f FPoint
if len(el.Floats) > 0 {
f = el.Floats[len(el.Floats)-1]
}
var h HPoint
if len(el.Histograms) > 0 {
h = el.Histograms[len(el.Histograms)-1]
}
if h.H == nil || h.T < f.T {
return append(enh.Out, Sample{
Metric: el.Metric,
F: f.F,
}), nil
}
return append(enh.Out, Sample{
Metric: el.Metric,
H: h.H.Copy(),
}), nil
}
// === mad_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcMadOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Matrix)[0].Floats) == 0 {
return enh.Out, nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
values := make(vectorByValueHeap, 0, len(s.Floats))
for _, f := range s.Floats {
values = append(values, Sample{F: f.F})
}
median := quantile(0.5, values)
values = make(vectorByValueHeap, 0, len(s.Floats))
for _, f := range s.Floats {
values = append(values, Sample{F: math.Abs(f.F - median)})
}
return quantile(0.5, values)
}), nil
}
// === max_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcMaxOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Matrix)[0].Floats) == 0 {
// TODO(beorn7): The passed values only contain
// histograms. max_over_time ignores histograms for now. If
// there are only histograms, we have to return without adding
// anything to enh.Out.
return enh.Out, nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
max := s.Floats[0].F
for _, f := range s.Floats {
if f.F > max || math.IsNaN(max) {
max = f.F
}
}
return max
}), nil
}
// === min_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcMinOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Matrix)[0].Floats) == 0 {
// TODO(beorn7): The passed values only contain
// histograms. min_over_time ignores histograms for now. If
// there are only histograms, we have to return without adding
// anything to enh.Out.
return enh.Out, nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
min := s.Floats[0].F
for _, f := range s.Floats {
if f.F < min || math.IsNaN(min) {
min = f.F
}
}
return min
}), nil
}
// === sum_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcSumOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
firstSeries := vals[0].(Matrix)[0]
if len(firstSeries.Floats) > 0 && len(firstSeries.Histograms) > 0 {
metricName := firstSeries.Metric.Get(labels.MetricName)
return enh.Out, annotations.New().Add(annotations.NewMixedFloatsHistogramsWarning(metricName, args[0].PositionRange()))
}
if len(firstSeries.Floats) == 0 {
// The passed values only contain histograms.
return aggrHistOverTime(vals, enh, func(s Series) *histogram.FloatHistogram {
sum := s.Histograms[0].H.Copy()
for _, h := range s.Histograms[1:] {
sum.Add(h.H)
}
return sum
}), nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
var sum, c float64
for _, f := range s.Floats {
sum, c = kahanSumInc(f.F, sum, c)
}
if math.IsInf(sum, 0) {
return sum
}
return sum + c
}), nil
}
// === quantile_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcQuantileOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
q := vals[0].(Vector)[0].F
el := vals[1].(Matrix)[0]
if len(el.Floats) == 0 {
// TODO(beorn7): The passed values only contain
// histograms. quantile_over_time ignores histograms for now. If
// there are only histograms, we have to return without adding
// anything to enh.Out.
return enh.Out, nil
}
var annos annotations.Annotations
if math.IsNaN(q) || q < 0 || q > 1 {
annos.Add(annotations.NewInvalidQuantileWarning(q, args[0].PositionRange()))
}
values := make(vectorByValueHeap, 0, len(el.Floats))
for _, f := range el.Floats {
values = append(values, Sample{F: f.F})
}
return append(enh.Out, Sample{F: quantile(q, values)}), annos
}
// === stddev_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcStddevOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Matrix)[0].Floats) == 0 {
// TODO(beorn7): The passed values only contain
// histograms. stddev_over_time ignores histograms for now. If
// there are only histograms, we have to return without adding
// anything to enh.Out.
return enh.Out, nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
var count float64
var mean, cMean float64
var aux, cAux float64
for _, f := range s.Floats {
count++
delta := f.F - (mean + cMean)
mean, cMean = kahanSumInc(delta/count, mean, cMean)
aux, cAux = kahanSumInc(delta*(f.F-(mean+cMean)), aux, cAux)
}
return math.Sqrt((aux + cAux) / count)
}), nil
}
// === stdvar_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcStdvarOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Matrix)[0].Floats) == 0 {
// TODO(beorn7): The passed values only contain
// histograms. stdvar_over_time ignores histograms for now. If
// there are only histograms, we have to return without adding
// anything to enh.Out.
return enh.Out, nil
}
return aggrOverTime(vals, enh, func(s Series) float64 {
var count float64
var mean, cMean float64
var aux, cAux float64
for _, f := range s.Floats {
count++
delta := f.F - (mean + cMean)
mean, cMean = kahanSumInc(delta/count, mean, cMean)
aux, cAux = kahanSumInc(delta*(f.F-(mean+cMean)), aux, cAux)
}
return (aux + cAux) / count
}), nil
}
// === absent(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAbsent(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
if len(vals[0].(Vector)) > 0 {
return enh.Out, nil
}
return append(enh.Out,
Sample{
Metric: createLabelsForAbsentFunction(args[0]),
F: 1,
}), nil
}
// === absent_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
// As this function takes a matrix as argument, it does not receive all the series.
// It returns 1 if the matrix has at least one element; due to an engine
// optimization, it is only called when that condition holds. The engine then
// post-processes the results to produce the expected output.
func funcAbsentOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return append(enh.Out, Sample{F: 1}), nil
}
// === present_over_time(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcPresentOverTime(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return aggrOverTime(vals, enh, func(s Series) float64 {
return 1
}), nil
}
func simpleFunc(vals []parser.Value, enh *EvalNodeHelper, f func(float64) float64) Vector {
for _, el := range vals[0].(Vector) {
if el.H == nil { // Process only float samples.
enh.Out = append(enh.Out, Sample{
Metric: el.Metric.DropMetricName(),
F: f(el.F),
})
}
}
return enh.Out
}
// === abs(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAbs(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Abs), nil
}
// === ceil(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcCeil(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Ceil), nil
}
// === floor(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcFloor(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Floor), nil
}
// === exp(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcExp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Exp), nil
}
// === sqrt(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcSqrt(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Sqrt), nil
}
// === ln(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcLn(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Log), nil
}
// === log2(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcLog2(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Log2), nil
}
// === log10(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcLog10(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Log10), nil
}
// === sin(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcSin(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Sin), nil
}
// === cos(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcCos(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Cos), nil
}
// === tan(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcTan(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Tan), nil
}
// === asin(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAsin(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Asin), nil
}
// === acos(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAcos(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Acos), nil
}
// === atan(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAtan(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Atan), nil
}
// === sinh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcSinh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Sinh), nil
}
// === cosh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcCosh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Cosh), nil
}
// === tanh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcTanh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Tanh), nil
}
// === asinh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAsinh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Asinh), nil
}
// === acosh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAcosh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Acosh), nil
}
// === atanh(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcAtanh(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, math.Atanh), nil
}
// === rad(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcRad(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, func(v float64) float64 {
return v * math.Pi / 180
}), nil
}
// === deg(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcDeg(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, func(v float64) float64 {
return v * 180 / math.Pi
}), nil
}
// === pi() Scalar ===
func funcPi(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return Vector{Sample{F: math.Pi}}, nil
}
// === sgn(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcSgn(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return simpleFunc(vals, enh, func(v float64) float64 {
switch {
case v < 0:
return -1
case v > 0:
return 1
default:
return v
}
}), nil
}
// === timestamp(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcTimestamp(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
vec := vals[0].(Vector)
for _, el := range vec {
enh.Out = append(enh.Out, Sample{
Metric: el.Metric.DropMetricName(),
F: float64(el.T) / 1000,
})
}
return enh.Out, nil
}
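// kahanSum returns the compensated (Kahan) sum of the given samples,
// reducing the floating-point error of naive summation.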
func kahanSum(samples []float64) float64 {
var sum, c float64
for _, v := range samples {
sum, c = kahanSumInc(v, sum, c)
}
return sum + c
}
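// kahanSumInc performs one step of Neumaier's variant of Kahan compensated
// summation: it adds inc to sum and accumulates the low-order bits lost to
// rounding in the compensation term c, which callers add back once at the end.
// Illustrative sketch (not taken from the evaluation engine):
//
//	sum, c := 0.0, 0.0
//	for _, v := range []float64{1e16, 1, -1e16} {
//		sum, c = kahanSumInc(v, sum, c)
//	}
//	total := sum + c // total is 1; naive summation would yield 0.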
func kahanSumInc(inc, sum, c float64) (newSum, newC float64) {
t := sum + inc
// Using the Neumaier improvement: swap the operands if the next term is
// larger in magnitude than the running sum.
if math.Abs(sum) >= math.Abs(inc) {
c += (sum - t) + inc
} else {
c += (inc - t) + sum
}
return t, c
}
// linearRegression performs a least-squares linear regression analysis on the
// provided samples. It returns the slope and the intercept value at the
// provided time.
func linearRegression(samples []FPoint, interceptTime int64) (slope, intercept float64) {
var (
n float64
sumX, cX float64
sumY, cY float64
sumXY, cXY float64
sumX2, cX2 float64
initY float64
constY bool
)
initY = samples[0].F
constY = true
for i, sample := range samples {
// Set constY to false if any new y values are encountered.
if constY && i > 0 && sample.F != initY {
constY = false
}
n += 1.0
x := float64(sample.T-interceptTime) / 1e3
sumX, cX = kahanSumInc(x, sumX, cX)
sumY, cY = kahanSumInc(sample.F, sumY, cY)
sumXY, cXY = kahanSumInc(x*sample.F, sumXY, cXY)
sumX2, cX2 = kahanSumInc(x*x, sumX2, cX2)
}
if constY {
if math.IsInf(initY, 0) {
return math.NaN(), math.NaN()
}
return 0, initY
}
sumX += cX
sumY += cY
sumXY += cXY
sumX2 += cX2
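// Ordinary least squares:
//   slope     = cov(X, Y) / var(X) = (Σxy - Σx·Σy/n) / (Σx² - (Σx)²/n)
//   intercept = mean(Y) - slope·mean(X)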
covXY := sumXY - sumX*sumY/n
varX := sumX2 - sumX*sumX/n
slope = covXY / varX
intercept = sumY/n - slope*sumX/n
return slope, intercept
}
// === deriv(node parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcDeriv(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
samples := vals[0].(Matrix)[0]
// No sense in trying to compute a derivative without at least two points.
// Drop this Vector element.
if len(samples.Floats) < 2 {
return enh.Out, nil
}
// We pass in an arbitrary timestamp that is near the values in use
// to avoid floating point accuracy issues, see
// https://github.com/prometheus/prometheus/issues/2674
slope, _ := linearRegression(samples.Floats, samples.Floats[0].T)
return append(enh.Out, Sample{F: slope}), nil
}
// === predict_linear(node parser.ValueTypeMatrix, k parser.ValueTypeScalar) (Vector, Annotations) ===
func funcPredictLinear(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
samples := vals[0].(Matrix)[0]
duration := vals[1].(Vector)[0].F
// No sense in trying to predict anything without at least two points.
// Drop this Vector element.
if len(samples.Floats) < 2 {
return enh.Out, nil
}
slope, intercept := linearRegression(samples.Floats, enh.Ts)
return append(enh.Out, Sample{F: slope*duration + intercept}), nil
}
// === histogram_count(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramCount(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
inVec := vals[0].(Vector)
for _, sample := range inVec {
// Skip non-histogram samples.
if sample.H == nil {
continue
}
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: sample.H.Count,
})
}
return enh.Out, nil
}
// === histogram_sum(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramSum(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
inVec := vals[0].(Vector)
for _, sample := range inVec {
// Skip non-histogram samples.
if sample.H == nil {
continue
}
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: sample.H.Sum,
})
}
return enh.Out, nil
}
// === histogram_stddev(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramStdDev(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
inVec := vals[0].(Vector)
for _, sample := range inVec {
// Skip non-histogram samples.
if sample.H == nil {
continue
}
mean := sample.H.Sum / sample.H.Count
var variance, cVariance float64
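// Approximate each bucket by a single representative value: 0 for the
// bucket containing zero, otherwise sqrt(Upper * Lower), and accumulate
// the count-weighted squared deviations from the mean.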
it := sample.H.AllBucketIterator()
for it.Next() {
bucket := it.At()
var val float64
if bucket.Lower <= 0 && 0 <= bucket.Upper {
val = 0
} else {
val = math.Sqrt(bucket.Upper * bucket.Lower)
}
delta := val - mean
variance, cVariance = kahanSumInc(bucket.Count*delta*delta, variance, cVariance)
}
variance += cVariance
variance /= sample.H.Count
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: math.Sqrt(variance),
})
}
return enh.Out, nil
}
// === histogram_stdvar(Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramStdVar(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
inVec := vals[0].(Vector)
for _, sample := range inVec {
// Skip non-histogram samples.
if sample.H == nil {
continue
}
mean := sample.H.Sum / sample.H.Count
var variance, cVariance float64
it := sample.H.AllBucketIterator()
for it.Next() {
bucket := it.At()
var val float64
if bucket.Lower <= 0 && 0 <= bucket.Upper {
val = 0
} else {
val = math.Sqrt(bucket.Upper * bucket.Lower)
}
delta := val - mean
variance, cVariance = kahanSumInc(bucket.Count*delta*delta, variance, cVariance)
}
variance += cVariance
variance /= sample.H.Count
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: variance,
})
}
return enh.Out, nil
}
// === histogram_fraction(lower, upper parser.ValueTypeScalar, Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramFraction(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
lower := vals[0].(Vector)[0].F
upper := vals[1].(Vector)[0].F
inVec := vals[2].(Vector)
for _, sample := range inVec {
// Skip non-histogram samples.
if sample.H == nil {
continue
}
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: histogramFraction(lower, upper, sample.H),
})
}
return enh.Out, nil
}
// === histogram_quantile(k parser.ValueTypeScalar, Vector parser.ValueTypeVector) (Vector, Annotations) ===
func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
q := vals[0].(Vector)[0].F
inVec := vals[1].(Vector)
var annos annotations.Annotations
if math.IsNaN(q) || q < 0 || q > 1 {
annos.Add(annotations.NewInvalidQuantileWarning(q, args[0].PositionRange()))
}
if enh.signatureToMetricWithBuckets == nil {
enh.signatureToMetricWithBuckets = map[string]*metricWithBuckets{}
} else {
for _, v := range enh.signatureToMetricWithBuckets {
v.buckets = v.buckets[:0]
}
}
var histogramSamples []Sample
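// First pass: collect classic buckets grouped by their label signature
// (ignoring le) and set native-histogram samples aside for the second pass.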
for _, sample := range inVec {
// We are only looking for classic buckets here. Remember
// the histograms for later treatment.
if sample.H != nil {
histogramSamples = append(histogramSamples, sample)
continue
}
upperBound, err := strconv.ParseFloat(
sample.Metric.Get(model.BucketLabel), 64,
)
if err != nil {
annos.Add(annotations.NewBadBucketLabelWarning(sample.Metric.Get(labels.MetricName), sample.Metric.Get(model.BucketLabel), args[1].PositionRange()))
continue
}
enh.lblBuf = sample.Metric.BytesWithoutLabels(enh.lblBuf, labels.BucketLabel)
mb, ok := enh.signatureToMetricWithBuckets[string(enh.lblBuf)]
if !ok {
sample.Metric = labels.NewBuilder(sample.Metric).
Del(excludedLabels...).
Labels()
mb = &metricWithBuckets{sample.Metric, nil}
enh.signatureToMetricWithBuckets[string(enh.lblBuf)] = mb
}
mb.buckets = append(mb.buckets, bucket{upperBound, sample.F})
}
// Now deal with the histograms.
for _, sample := range histogramSamples {
// We have to reconstruct the exact same signature as above for
// a classic histogram, just ignoring any le label.
enh.lblBuf = sample.Metric.Bytes(enh.lblBuf)
if mb, ok := enh.signatureToMetricWithBuckets[string(enh.lblBuf)]; ok && len(mb.buckets) > 0 {
// At this data point, we have classic histogram
// buckets and a native histogram with the same name and
// labels. Do not evaluate anything.
annos.Add(annotations.NewMixedClassicNativeHistogramsWarning(sample.Metric.Get(labels.MetricName), args[1].PositionRange()))
delete(enh.signatureToMetricWithBuckets, string(enh.lblBuf))
continue
}
enh.Out = append(enh.Out, Sample{
Metric: sample.Metric.DropMetricName(),
F: histogramQuantile(q, sample.H),
})
}
for _, mb := range enh.signatureToMetricWithBuckets {
if len(mb.buckets) > 0 {
res, forcedMonotonicity, _ := bucketQuantile(q, mb.buckets)
enh.Out = append(enh.Out, Sample{
Metric: mb.metric,
F: res,
})
if forcedMonotonicity {
annos.Add(annotations.NewHistogramQuantileForcedMonotonicityInfo(mb.metric.Get(labels.MetricName), args[1].PositionRange()))
}
}
}
return enh.Out, annos
}
// === resets(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcResets(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
floats := vals[0].(Matrix)[0].Floats
histograms := vals[0].(Matrix)[0].Histograms
resets := 0
if len(floats) > 1 {
prev := floats[0].F
for _, sample := range floats[1:] {
current := sample.F
if current < prev {
resets++
}
prev = current
}
}
if len(histograms) > 1 {
prev := histograms[0].H
for _, sample := range histograms[1:] {
current := sample.H
if current.DetectReset(prev) {
resets++
}
prev = current
}
}
return append(enh.Out, Sample{F: float64(resets)}), nil
}
// === changes(Matrix parser.ValueTypeMatrix) (Vector, Annotations) ===
func funcChanges(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
floats := vals[0].(Matrix)[0].Floats
changes := 0
if len(floats) == 0 {
// TODO(beorn7): Only histogram values, still need to add support.
return enh.Out, nil
}
prev := floats[0].F
for _, sample := range floats[1:] {
current := sample.F
if current != prev && !(math.IsNaN(current) && math.IsNaN(prev)) {
changes++
}
prev = current
}
return append(enh.Out, Sample{F: float64(changes)}), nil
}
// === label_replace(Vector parser.ValueTypeVector, dst_label, replacement, src_labelname, regex parser.ValueTypeString) (Vector, Annotations) ===
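// For example, label_replace(up, "host", "$1", "instance", "(.*):.*") copies
// the part of each "instance" label before the colon into a new "host" label.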
func funcLabelReplace(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
var (
vector = vals[0].(Vector)
dst = stringFromArg(args[1])
repl = stringFromArg(args[2])
src = stringFromArg(args[3])
regexStr = stringFromArg(args[4])
)
if enh.regex == nil {
var err error
enh.regex, err = regexp.Compile("^(?:" + regexStr + ")$")
if err != nil {
panic(fmt.Errorf("invalid regular expression in label_replace(): %s", regexStr))
}
if !model.LabelNameRE.MatchString(dst) {
panic(fmt.Errorf("invalid destination label name in label_replace(): %s", dst))
}
enh.Dmn = make(map[uint64]labels.Labels, len(enh.Out))
}
for _, el := range vector {
h := el.Metric.Hash()
var outMetric labels.Labels
if l, ok := enh.Dmn[h]; ok {
outMetric = l
} else {
srcVal := el.Metric.Get(src)
indexes := enh.regex.FindStringSubmatchIndex(srcVal)
if indexes == nil {
// If there is no match, no replacement should take place.
outMetric = el.Metric
enh.Dmn[h] = outMetric
} else {
res := enh.regex.ExpandString([]byte{}, repl, srcVal, indexes)
lb := labels.NewBuilder(el.Metric).Del(dst)
if len(res) > 0 {
lb.Set(dst, string(res))
}
outMetric = lb.Labels()
enh.Dmn[h] = outMetric
}
}
enh.Out = append(enh.Out, Sample{
Metric: outMetric,
F: el.F,
H: el.H,
})
}
return enh.Out, nil
}
// === Vector(s Scalar) (Vector, Annotations) ===
func funcVector(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return append(enh.Out,
Sample{
Metric: labels.Labels{},
F: vals[0].(Vector)[0].F,
}), nil
}
// === label_join(vector model.ValVector, dest_labelname, separator, src_labelname...) (Vector, Annotations) ===
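// For example, label_join(up, "combined", "-", "instance", "job") sets the
// "combined" label to the values of "instance" and "job" joined by "-".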
func funcLabelJoin(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
var (
vector = vals[0].(Vector)
dst = stringFromArg(args[1])
sep = stringFromArg(args[2])
srcLabels = make([]string, len(args)-3)
)
if enh.Dmn == nil {
enh.Dmn = make(map[uint64]labels.Labels, len(enh.Out))
}
for i := 3; i < len(args); i++ {
src := stringFromArg(args[i])
if !model.LabelName(src).IsValid() {
panic(fmt.Errorf("invalid source label name in label_join(): %s", src))
}
srcLabels[i-3] = src
}
if !model.LabelName(dst).IsValid() {
panic(fmt.Errorf("invalid destination label name in label_join(): %s", dst))
}
srcVals := make([]string, len(srcLabels))
for _, el := range vector {
h := el.Metric.Hash()
var outMetric labels.Labels
if l, ok := enh.Dmn[h]; ok {
outMetric = l
} else {
for i, src := range srcLabels {
srcVals[i] = el.Metric.Get(src)
}
lb := labels.NewBuilder(el.Metric)
strval := strings.Join(srcVals, sep)
if strval == "" {
lb.Del(dst)
} else {
lb.Set(dst, strval)
}
outMetric = lb.Labels()
enh.Dmn[h] = outMetric
}
enh.Out = append(enh.Out, Sample{
Metric: outMetric,
F: el.F,
H: el.H,
})
}
return enh.Out, nil
}
// dateWrapper implements the common behaviour of the date-related functions:
// without an argument it uses the evaluation timestamp, otherwise it
// interprets each sample value as a Unix timestamp in seconds (UTC) and
// applies f to it.
func dateWrapper(vals []parser.Value, enh *EvalNodeHelper, f func(time.Time) float64) Vector {
if len(vals) == 0 {
return append(enh.Out,
Sample{
Metric: labels.Labels{},
F: f(time.Unix(enh.Ts/1000, 0).UTC()),
})
}
for _, el := range vals[0].(Vector) {
t := time.Unix(int64(el.F), 0).UTC()
enh.Out = append(enh.Out, Sample{
Metric: el.Metric.DropMetricName(),
F: f(t),
})
}
return enh.Out
}
// === days_in_month(v Vector) Scalar ===
func funcDaysInMonth(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
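// Day 32 of month M normalizes to day (32 - days in M) of month M+1, so
// subtracting that day-of-month from 32 yields the number of days in M.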
return float64(32 - time.Date(t.Year(), t.Month(), 32, 0, 0, 0, 0, time.UTC).Day())
}), nil
}
// === day_of_month(v Vector) Scalar ===
func funcDayOfMonth(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Day())
}), nil
}
// === day_of_week(v Vector) Scalar ===
func funcDayOfWeek(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Weekday())
}), nil
}
// === day_of_year(v Vector) Scalar ===
func funcDayOfYear(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.YearDay())
}), nil
}
// === hour(v Vector) Scalar ===
func funcHour(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Hour())
}), nil
}
// === minute(v Vector) Scalar ===
func funcMinute(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Minute())
}), nil
}
// === month(v Vector) Scalar ===
func funcMonth(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Month())
}), nil
}
// === year(v Vector) Scalar ===
func funcYear(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) (Vector, annotations.Annotations) {
return dateWrapper(vals, enh, func(t time.Time) float64 {
return float64(t.Year())
}), nil
}
// FunctionCalls maps the names of all functions supported by PromQL to their implementations.
var FunctionCalls = map[string]FunctionCall{
"abs": funcAbs,
"absent": funcAbsent,
"absent_over_time": funcAbsentOverTime,
"acos": funcAcos,
"acosh": funcAcosh,
"asin": funcAsin,
"asinh": funcAsinh,
"atan": funcAtan,
"atanh": funcAtanh,
"avg_over_time": funcAvgOverTime,
"ceil": funcCeil,
"changes": funcChanges,
"clamp": funcClamp,
"clamp_max": funcClampMax,
"clamp_min": funcClampMin,
"cos": funcCos,
"cosh": funcCosh,
"count_over_time": funcCountOverTime,
"days_in_month": funcDaysInMonth,
"day_of_month": funcDayOfMonth,
"day_of_week": funcDayOfWeek,
"day_of_year": funcDayOfYear,
"deg": funcDeg,
"delta": funcDelta,
"deriv": funcDeriv,
"exp": funcExp,
"floor": funcFloor,
"histogram_count": funcHistogramCount,
"histogram_fraction": funcHistogramFraction,
"histogram_quantile": funcHistogramQuantile,
"histogram_sum": funcHistogramSum,
"histogram_stddev": funcHistogramStdDev,
"histogram_stdvar": funcHistogramStdVar,
"holt_winters": funcHoltWinters,
"hour": funcHour,
"idelta": funcIdelta,
"increase": funcIncrease,
"irate": funcIrate,
"label_replace": funcLabelReplace,
"label_join": funcLabelJoin,
"ln": funcLn,
"log10": funcLog10,
"log2": funcLog2,
"last_over_time": funcLastOverTime,
"mad_over_time": funcMadOverTime,
"max_over_time": funcMaxOverTime,
"min_over_time": funcMinOverTime,
"minute": funcMinute,
"month": funcMonth,
"pi": funcPi,
"predict_linear": funcPredictLinear,
"present_over_time": funcPresentOverTime,
"quantile_over_time": funcQuantileOverTime,
"rad": funcRad,
"rate": funcRate,
"resets": funcResets,
"round": funcRound,
"scalar": funcScalar,
"sgn": funcSgn,
"sin": funcSin,
"sinh": funcSinh,
"sort": funcSort,
"sort_desc": funcSortDesc,
"sort_by_label": funcSortByLabel,
"sort_by_label_desc": funcSortByLabelDesc,
"sqrt": funcSqrt,
"stddev_over_time": funcStddevOverTime,
"stdvar_over_time": funcStdvarOverTime,
"sum_over_time": funcSumOverTime,
"tan": funcTan,
"tanh": funcTanh,
"time": funcTime,
"timestamp": funcTimestamp,
"vector": funcVector,
"year": funcYear,
}
// AtModifierUnsafeFunctions are the functions whose result
// can vary with the evaluation time even when their arguments are
// step invariant. This includes functions that use the timestamps
// of the passed instant vector argument to calculate a result, since
// those timestamps can also change with the evaluation time.
var AtModifierUnsafeFunctions = map[string]struct{}{
// Step invariant functions.
"days_in_month": {}, "day_of_month": {}, "day_of_week": {}, "day_of_year": {},
"hour": {}, "minute": {}, "month": {}, "year": {},
"predict_linear": {}, "time": {},
// Uses timestamp of the argument for the result,
// hence unsafe to use with @ modifier.
"timestamp": {},
}
type vectorByValueHeap Vector
func (s vectorByValueHeap) Len() int {
return len(s)
}
func (s vectorByValueHeap) Less(i, j int) bool {
// We compare histograms based on their sum of observations.
// TODO(beorn7): Is that what we want?
vi, vj := s[i].F, s[j].F
if s[i].H != nil {
vi = s[i].H.Sum
}
if s[j].H != nil {
vj = s[j].H.Sum
}
if math.IsNaN(vi) {
return true
}
return vi < vj
}
func (s vectorByValueHeap) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s *vectorByValueHeap) Push(x interface{}) {
*s = append(*s, *(x.(*Sample)))
}
func (s *vectorByValueHeap) Pop() interface{} {
old := *s
n := len(old)
el := old[n-1]
*s = old[0 : n-1]
return el
}
type vectorByReverseValueHeap Vector
func (s vectorByReverseValueHeap) Len() int {
return len(s)
}
func (s vectorByReverseValueHeap) Less(i, j int) bool {
// We compare histograms based on their sum of observations.
// TODO(beorn7): Is that what we want?
vi, vj := s[i].F, s[j].F
if s[i].H != nil {
vi = s[i].H.Sum
}
if s[j].H != nil {
vj = s[j].H.Sum
}
if math.IsNaN(vi) {
return true
}
return vi > vj
}
func (s vectorByReverseValueHeap) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s *vectorByReverseValueHeap) Push(x interface{}) {
*s = append(*s, *(x.(*Sample)))
}
func (s *vectorByReverseValueHeap) Pop() interface{} {
old := *s
n := len(old)
el := old[n-1]
*s = old[0 : n-1]
return el
}
// createLabelsForAbsentFunction returns the labels that are uniquely and exactly matched
// in a given expression. It is used in the absent functions.
func createLabelsForAbsentFunction(expr parser.Expr) labels.Labels {
b := labels.NewBuilder(labels.EmptyLabels())
var lm []*labels.Matcher
switch n := expr.(type) {
case *parser.VectorSelector:
lm = n.LabelMatchers
case *parser.MatrixSelector:
lm = n.VectorSelector.(*parser.VectorSelector).LabelMatchers
default:
return labels.EmptyLabels()
}
// The 'has' map implements backwards-compatibility for historic behaviour:
// e.g. in `absent(x{job="a",job="b",foo="bar"})` then `job` is removed from the output.
// Note this gives arguably wrong behaviour for `absent(x{job="a",job="a",foo="bar"})`.
has := make(map[string]bool, len(lm))
for _, ma := range lm {
if ma.Name == labels.MetricName {
continue
}
if ma.Type == labels.MatchEqual && !has[ma.Name] {
b.Set(ma.Name, ma.Value)
has[ma.Name] = true
} else {
b.Del(ma.Name)
}
}
return b.Labels()
}
func stringFromArg(e parser.Expr) string {
tmp := unwrapStepInvariantExpr(e) // Unwrap StepInvariant
unwrapParenExpr(&tmp) // Optionally unwrap ParenExpr
return tmp.(*parser.StringLiteral).Val
}
func stringSliceFromArgs(args parser.Expressions) []string {
tmp := make([]string, len(args))
for i := 0; i < len(args); i++ {
tmp[i] = stringFromArg(args[i])
}
return tmp
}