For yacc-generated parsers there is a convention to capitalize the names of item types provided by the lexer, which makes it easy to distinguish lexer tokens (capitalized) from nonterminal symbols (not capitalized) in language grammars.
This convention is also followed by the (non-generated) Go compiler (see https://golang.org/pkg/go/token/#Token).
Part of the parser rewrite described in #6256.
Signed-off-by: Tobias Guggenmos <tguggenm@redhat.com>
This is the first step towards a generated parser as described in #6256.
It adds methods to the parser struct that make it implement the yyLexer interface required by a yacc-generated parser, as described at https://godoc.org/golang.org/x/tools/cmd/goyacc.
The yyLexer interface is implemented by the parser struct instead of the lexer struct for the following reasons:
* Both parsers have a lookahead that the lexer does not know about. This solution makes it possible to synchronize these lookaheads when switching parsers.
* The routines to handle parser errors are not accessible to the lexer.
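A minimal sketch of the shape this takes, with stand-in types for the goyacc-generated yySymType and the lexer's item type (names and details here are illustrative, not the actual promql code):

package promql

// item and yySymType stand in for the lexer's token type and the
// goyacc-generated symbol type.
type item struct {
	typ int
	val string
}

type yySymType struct {
	item item
}

type lexer struct {
	items chan item // tokens produced by the hand-written lexer
}

// parser satisfies the yyLexer interface (Lex and Error) that a
// goyacc-generated parser expects, so the generated code pulls tokens
// from the parser struct rather than from the lexer directly.
type parser struct {
	lex    lexer
	errors []string
}

// Lex hands the next token to the generated parser and returns its type.
func (p *parser) Lex(lval *yySymType) int {
	lval.item = <-p.lex.items
	return lval.item.typ
}

// Error is called by the generated parser on syntax errors; routing it
// through the parser struct gives it access to the parser's error handling.
func (p *parser) Error(e string) {
	p.errors = append(p.errors, e)
}
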
Signed-off-by: Tobias Guggenmos <tguggenm@redhat.com>
* promql: Clean up parser struct
The parser struct used to have two somewhat misused fields:
peekCount int
token [3]item
By reading the code carefully, one notices that peekCount always has the value 0 or 1 and that only the first element of token is ever accessed.
To make this clearer, this commit replaces the token array with a single variable and the peekCount int with a boolean.
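An illustrative sketch of the resulting shape (field names are hypothetical, not necessarily the ones used in the actual commit):

package promql

// item stands in for the lexer's token type.
type item struct {
	typ int
	val string
}

// parser after the cleanup: the token [3]item array becomes a single
// buffered token, and the peekCount int (only ever 0 or 1) becomes a bool.
type parser struct {
	token     item // the one buffered lookahead token
	hasPeeked bool // replaces peekCount
}
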
Signed-off-by: Tobias Guggenmos <tguggenm@redhat.com>
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of the fmt package for errors in favour of pkg/errors.
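A hedged sketch of the three kinds of changes using github.com/pkg/errors (the function name and messages below are illustrative):

package promql

import "github.com/pkg/errors"

func checkExpr(expr string, err error) error {
	// i) wrap an underlying error with context instead of fmt.Errorf("...: %v", err)
	if err != nil {
		return errors.Wrapf(err, "error parsing %q", expr)
	}
	// ii) no formatting directives, so errors.New instead of fmt.Errorf
	if expr == "" {
		return errors.New("empty expression")
	}
	// iii) formatted leaf errors via errors.Errorf instead of fmt.Errorf
	return errors.Errorf("unexpected expression %q", expr)
}
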
Signed-off-by: tariqibrahim <tariq181290@gmail.com>
* Expose lexer item types
We have generally agreed to expose AST types / values that are necessary
to make sense of the AST outside of the promql package. Currently the
`UnaryExpr`, `BinaryExpr`, and `AggregateExpr` AST nodes store the lexer
item type to indicate the operator type, but since the individual item
types aren't exposed, an external user of the package cannot determine
the operator type. So this PR exposes them.
Although not all item types are required to make sense of the AST (some
are really only used in the lexer), I decided to expose them all here to
be somewhat more consistent. Another option would be to not use lexer
item types at all in AST nodes.
The concrete motivation is my work on the PromQL->Flux transpiler, but
this ought to be useful for other cases as well.
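As a sketch of what this enables for an external user of the package (the exported constant names below are assumptions for illustration, not taken from the PR):

package astinspect

import "github.com/prometheus/prometheus/promql"

// describeOp can now tell which operator a BinaryExpr carries, because the
// lexer item type constants are exported (ItemADD/ItemLOR are assumed names).
func describeOp(expr promql.Expr) string {
	if be, ok := expr.(*promql.BinaryExpr); ok {
		switch be.Op {
		case promql.ItemADD:
			return "addition"
		case promql.ItemLOR:
			return "logical or"
		}
	}
	return "something else"
}
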
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Fix item type names in tests
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Although these are only spelling mistakes, they might cause confusion
while reading.
Co-Authored-By: Kim Bao Long longkb@vn.fujitsu.com
Signed-off-by: Nguyen Hai Truong <truongnh@vn.fujitsu.com>
When there was an error in the parser, the
lexer goroutine was left running.
Also make the runtime panic test actually test things.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* parser test cleanup
- Test against the exported package functions instead of the private functions.
* Improves readability of TestParseSeries
- Moves package function closer to parser function
For instant vectors, if "stale" is the newest sample,
ignore the time series.
For range vectors, filter out "stale" samples.
Make it possible to inject "stale" samples in promql tests.
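A rough sketch of the two rules (all names and the stale-marker bit pattern are illustrative stand-ins):

package promql

import "math"

// staleNaN stands in for the special NaN bit pattern that marks a stale
// sample (value.StaleNaN in the real code).
var staleNaN = math.Float64frombits(0x7ff0000000000002)

func isStale(v float64) bool {
	return math.Float64bits(v) == math.Float64bits(staleNaN)
}

type sample struct {
	t int64
	v float64
}

// For instant vectors: if the newest sample is stale, ignore the series.
func instantSample(samples []sample) (sample, bool) {
	if len(samples) == 0 {
		return sample{}, false
	}
	newest := samples[len(samples)-1]
	if isStale(newest.v) {
		return sample{}, false
	}
	return newest, true
}

// For range vectors: stale samples are filtered out.
func rangeSamples(samples []sample) []sample {
	var out []sample
	for _, s := range samples {
		if !isStale(s.v) {
			out = append(out, s)
		}
	}
	return out
}
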
The current separation between lexer and parser is a bit fuzzy when it
comes to operators, aggregators and other keywords. The lexer already
tries to determine the type of a token, even though that type might
change depending on the context.
This led to the problematic behavior that no tokens known to the lexer
could be used as label names, including operators (and, by, ...),
aggregators (count, quantile, ...) or other keywords (for, offset, ...).
This change additionally checks whether an identifier is one of these
types and, if so, still accepts it as a label name. We might want to
consider whether this specific item identification should be moved from
the lexer to the parser.
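A simplified sketch of the additional check (the item type names here are made up):

package promql

// ItemType stands in for the lexer's token type.
type ItemType int

const (
	itemIdentifier ItemType = iota
	itemLAND                // "and" (operator)
	itemBy                  // "by" (keyword)
	itemCount               // "count" (aggregator)
	itemOffset              // "offset" (keyword)
)

// acceptableAsLabelName: besides plain identifiers, tokens the lexer has
// already classified as operators, aggregators or keywords are also
// accepted where a label name is expected.
func acceptableAsLabelName(t ItemType) bool {
	switch t {
	case itemIdentifier, itemLAND, itemBy, itemCount, itemOffset:
		return true
	}
	return false
}
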
tl;dr: This is not a fundamental solution to the indexing problem
(like tindex is), but it at least mitigates the intersection problem
as much as possible.
In more detail:
Imagine the following query:
nicely:aggregating:rule{job="foo",env="prod"}
While it uses a nicely aggregating recording rule (which might have a
very low cardinality), Prometheus still intersects the low number of
fingerprints for `{__name__="nicely:aggregating:rule"}` with the many
thousands of fingerprints matching `{job="foo"}` and with the millions
of fingerprints matching `{env="prod"}`. This totally innocuous query
is dead slow if the Prometheus server has a lot of time series with
the `{env="prod"}` label. Ironically, if you make the query more
complicated, it becomes blazingly fast:
nicely:aggregating:rule{job=~"foo",env=~"prod"}
Why so? Because Prometheus only intersects with non-Equal matchers if
there are no Equal matchers. That's good in this case because it
retrieves the few fingerprints for
`{__name__="nicely:aggregating:rule"}` and then goes right ahead to
retrieve the metrics for those FPs, checking individually whether they
match the other matchers.
This change generalizes the idea of when to stop intersecting FPs
and go into "retrieve metrics and check them individually against
remaining matchers" mode:
- First, sort all matchers by "expected cardinality". Matchers
matching the empty string are always worst (and never used for
intersections). Equal matchers are in general considered best, but by
using some crude heuristics, we declare some better than others
(instance labels or anything that looks like a recording rule).
- Then go through the matchers until we hit a threshold of remaining
FPs in the intersection. This threshold is higher if we are already
in the non-Equal matcher area as intersection is even more expensive
here.
- Once the threshold has been reached (or we have run out of matchers
that do not match the empty string), start with "retrieve metrics
and check them individually against remaining matchers".
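A rough Go sketch of that strategy (types, thresholds, and the cardinality heuristic are simplified stand-ins for the real storage code):

package local

import "sort"

type Matcher struct {
	Name, Value  string
	Equal        bool // an =-matcher as opposed to =~, !=, !~
	MatchesEmpty bool // matches the empty string
}

type FingerprintSet map[uint64]struct{}

// candidateFPs sorts the matchers by expected cardinality, intersects
// fingerprint sets until the candidate set is small enough, and returns
// the remaining matchers to be checked against the metrics individually.
func candidateFPs(matchers []Matcher, lookup func(Matcher) FingerprintSet) (FingerprintSet, []Matcher) {
	sort.Slice(matchers, func(i, j int) bool {
		return expectedCardinality(matchers[i]) < expectedCardinality(matchers[j])
	})
	var fps FingerprintSet
	for i, m := range matchers {
		if m.MatchesEmpty {
			// Matchers matching the empty string are never used for intersection.
			return fps, matchers[i:]
		}
		threshold := 100 // illustrative
		if !m.Equal {
			// Higher threshold: intersecting on non-Equal matchers is even more expensive.
			threshold = 1000
		}
		if fps != nil && len(fps) <= threshold {
			// Few enough candidates left: check the remaining matchers per metric.
			return fps, matchers[i:]
		}
		fps = intersect(fps, lookup(m))
	}
	return fps, nil
}

func intersect(a, b FingerprintSet) FingerprintSet {
	if a == nil {
		return b
	}
	out := FingerprintSet{}
	for fp := range a {
		if _, ok := b[fp]; ok {
			out[fp] = struct{}{}
		}
	}
	return out
}

// expectedCardinality is a crude stand-in for the heuristics above.
func expectedCardinality(m Matcher) int {
	switch {
	case m.MatchesEmpty:
		return 1 << 30
	case !m.Equal:
		return 100000
	default:
		return 100
	}
}
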
A beefy server at SoundCloud was spending 67% of its CPU time in index
lookups (fingerprintsForLabelPairs), serving mostly a dashboard that
is exclusively built with recording rules. With this change, it spends
only 35% in fingerprintsForLabelPairs. The CPU usage dropped from 26
cores to 18 cores. The median latency for query_range dropped from 14s
to 50ms(!). As expected, higher percentile latency didn't improve that
much because the new approach is _occasionally_ running into the worst
case while the old one was _systematically_ doing so. The 99th
percentile latency is now about as high as the median before (14s)
while it was almost twice as high before (26s).
This offers new semantics by allowing on() to match
two single-element vectors with no known common labels.
Previously this was often done using on(dummy).
This also allows making it explicit that you meant
to do an aggregation without labels via by().
Fixes #1597.
The labels listed in the group_ modifier will be copied from the one
side to the many side. It will be valid to specify no labels.
This is intended to replace the existing ON/GROUP_* support.
The `unless` set operator can be used to return all vector elements from
the LHS which do not match the elements on the RHS. A use case is to
return all metrics for nodes which do not have a specific role:
node_load1 unless on(instance) chef_role{role="app"}