Unary expressions cause parsing errors if they are handled in the lexer
by tokenizing the sign into the number literal.
This fix moves unary expression handling into the parser.
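For example (illustrative): with the sign handled in the lexer, an input
like
1-2
could be scanned as the two number literals 1 and -2, yielding a parse
error instead of a subtraction.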
This commit implements the OR operation between two vectors.
Vector matching using the ON clause is added to limit the set of
labels that define a match between two elements. Group modifiers
(GROUP_LEFT/GROUP_RIGHT) to request many-to-one matching are added.
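Illustrative examples of the new syntax (metric and label names are made
up):
foo OR bar
foo{code="500"} / ON(method) bar
foo{code="500"} / ON(method) GROUP_LEFT(code) bar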
This adds support for scientific notation in the expression language, as
well as for all possible literal forms of +Inf/-Inf/NaN.
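For example, literal forms like the following now parse as numbers
(examples are illustrative):
1e5
2.3e-4
Inf, +Inf, -Inf
NaN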
TODO: Keep enough state in the parser/lexer to distinguish contexts in
which "Inf", "NaN", etc. should be parsed as a number vs. parsed as a
label name. Currently, foo{nan="bar"} would be a syntax error. However,
that is an existing bug for all our reserved words. E.g. foo{sum="bar"}
is a syntax error as well. This should be fixed separately.
Since we are now getting deep into floating-point calculations, the
tests had to account for precision loss. Because the rule tests are
based on direct line matching in the output, implementing the "almost
equal" semantics was pretty cumbersome, but here we are.
This allows changing the time offset for individual instant and range
vectors in a query.
For example, this returns the value of `foo` 5 minutes in the past
relative to the current query evaluation time:
foo offset 5m
Note that the `offset` modifier always needs to follow the selector
immediately. I.e. the following would be correct:
sum(foo offset 5m) // GOOD.
While the following would be *incorrect*:
sum(foo) offset 5m // INVALID.
The same works for range vectors. This returns the 5-minute rate that
`foo` had a week ago:
rate(foo[5m] offset 1w)
This change touches the following components:
* Lexer/parser: additions to correctly parse the new `offset`/`OFFSET`
keyword.
* AST: vector and matrix nodes now have an additional `offset` field.
This is used during their evaluation to adjust query and result times
appropriately.
* Query analyzer: now works on separate sets of ranges and instants per
offset. Isolating different offsets from each other completely in this
way keeps the preloading code relatively simple.
No storage engine changes were needed for this change.
The rules tests have been changed to not probe the internal
implementation details of the query analyzer anymore (how many instants
and ranges have been preloaded). This would also become too cumbersome
to test with the new model, and measuring the result of the query should
be sufficient.
This fixes https://github.com/prometheus/prometheus/issues/529
This fixes https://github.com/prometheus/promdash/issues/201
This is related to #454. Queries now timeout after a duration set by
the -query.timeout flag. The TotalEvalTimer is now started/stopped
inside any of the ast.Eval* functions.
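An illustrative invocation (the timeout value is made up):
./prometheus -query.timeout=2m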
The second isCounter argument to delta is ugly; make it optional as the
first step of deprecating it. This will make delta only ever be applied
to gauges.
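Illustrative, with a made-up metric (the old explicit form is assumed to
remain valid during the deprecation period):
delta(some_gauge[5m])      // isCounter now optional
delta(some_gauge[5m], 0)   // old explicit form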
Add a deriv function to calculate the least squares
slope of a gauge. This is more useful for prediction than delta,
as it isn't as heavily influenced by outliers at the boundaries.
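Illustrative usage, with a made-up metric:
deriv(some_gauge[10m])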
- Moved CONTRIBUTORS.md to the more common AUTHORS.
- Added the required NOTICE file.
- Changed "Prometheus Team" to "The Prometheus Authors".
- Reverted the erroneous changes to the Apache License.
It turned out in the end that only drop_common_metrics() produced any
erroneous output in the old system. The second expression in the test
("sum(testmetric) keeping_extra") already worked in the old code, but
why not keep it in.
The way to test ranged evaluations is still a bit clumsy, so I want to
build a nicer test framework in the end, where all the test cases can be
specified as text files which specify desired inputs, outputs, query
step widths, etc.
Change-Id: I821859789e69b8232bededf670a1b76e9e8c8ca4
Essentially:
- Remove unused code.
- Make it 'go vet' clean. The only remaining warnings are in generated code.
- Make it 'golint' clean. The only remaining warnings are in generated code.
- Smooth out some minor things.
Change-Id: I3fe5c1fbead27b0e7a9c247fee2f5a45bc2d42c6
- Deleted unneeded file view_adapter.go.
- Assessed that we still need the fingerprints in nodes
(to create iterators).
- Turned numMemChunkDescs into a metric.
Change-Id: I29be963c795a075ec00c095f76bf26405535609d
A common problem in Prometheus alerting is to detect when no timeseries
exist for a given metric name and label combination. Unfortunately,
Prometheus alert expressions need to be of vector type, and
"count(nonexistent_metric)" results in an empty vector, yielding no
output vector elements to base an alert on. The newly introduced
absent() function solves this issue:
ALERT FooAbsent IF absent(foo{job="myjob"}) [...]
absent() has the following behavior:
- if the vector passed to it has any elements, it returns an empty
vector.
- if the vector passed to it has no elements, it returns a 1-element
vector with the value 1.
In the second case, absent() tries to be smart about deriving labels of
the 1-element output vector from the input vector:
absent(nonexistent{job="myjob"}) => {job="myjob"}
absent(nonexistent{job="myjob",instance=~".*"}) => {job="myjob"}
absent(sum(nonexistent{job="myjob"})) => {}
That is, if the passed vector is a literal vector selector, it takes all
"=" label matchers as the basis for the output labels, but ignores all
non-equals or regex matchers. Also, if the passed vector results from a
non-selector expression, no labels can be derived.
Change-Id: I948505a1488d50265ab5692a3286bd7c8c70cd78
After many transformations, it doesn't make sense to keep the metric
names, since the result of the transformation is no longer that metric.
This drops the metric name after such transformations and makes the web
UI deal well with missing metric names.
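For example (illustrative), the output of
rate(http_requests_total[5m])
no longer carries the metric name http_requests_total, since its samples
are rates rather than values of the original metric.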
The current branch depends on the following:
- prometheus/client_golang needs to be at
e237cf15c6
in branch "julius/int-fingerprints" (to be merged with new storage)
- prometheus/promdash needs to be at
dd7691c9c2
Change-Id: Ib3c8cad8d647d9854e8c653c424b8c235ccc231d
This removes the dependency on C leveldb and snappy.
It also means fewer dependencies overall, which would anyway not have
worked on any non-Debian, non-Brew system.
Change-Id: Ia70dce1ba8a816a003587927e0b3a3f8ad2fd28c
In addition to the existing by-clause syntax:
sum(<expression>) by (<labels>) [keeping_extra]
...this allows the following new syntax:
sum by (<labels>) [keeping_extra] (<expression>)
Both orderings may be used in a single expression. It is up to the users
to establish guidelines around their usage.
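For instance, with a made-up metric, the following two expressions are
equivalent:
sum(http_requests_total) by (job) keeping_extra
sum by (job) keeping_extra (http_requests_total)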
Change-Id: Iba10c9cc5fb6ac62edfcf246d281473e82467992
This allows the following expression syntaxes for selecting timeseries:
foo (already valid before)
foo{} (already valid before)
{job="prometheus"} (new, select all timeseries for job "prometheus")
Omitting both the metric name *and* any label matchers ("" or "{}") will
still yield a syntax error.
To get all timeseries, you could do:
{__name__=~".*"}
or, without relying on knowledge about __name__:
{job=~".*"}
Change-Id: Ifee000b9ac0184ef6ced18411069c7f2699a2dda
- Staleness delta is now a proper function parameter and no longer
replicated from package ast.
- Named type 'chunks' replaced by explicit '[]chunk' to avoid confusion.
- For the same reason, replaced 'chunkDescs' by '[]*chunkDesc'.
- Verified that math.Modf is not a speed enhancement over conversion
(actually 5x slower).
- Renamed firstTimeField and lastTimeField to chunkFirstTime and
chunkLastTime.
- Verified unpin() is sufficiently goroutine-safe.
- Decided not to update archivedFingerprintToTimeRange upon series
truncation and added a rationale why.
Change-Id: I863b8d785e5ad9f71eb63e229845eacf1bed8534