It turned out in the end that only drop_common_metrics() produced any
erroneous output in the old system. The second expression in the test
("sum(testmetric) keeping_extra") already worked in the old code, but
why not keep it in...
The way to test ranged evaluations is still a bit clumsy, so I eventually
want to build a nicer test framework in which all the test cases can be
specified as text files describing desired inputs, outputs, query step
widths, etc.
Change-Id: I821859789e69b8232bededf670a1b76e9e8c8ca4
Essentially:
- Remove unused code.
- Make it 'go vet' clean. The only remaining warnings are in generated code.
- Make it 'golint' clean. The only remaining warnings are in generated code.
- Smooth out some minor things.
Change-Id: I3fe5c1fbead27b0e7a9c247fee2f5a45bc2d42c6
- Deleted unneeded file view_adapter.go.
- Assessed that we still need the fingerprints in nodes
(to create iterators).
- Turned numMemChunkDescs into a metric.
Change-Id: I29be963c795a075ec00c095f76bf26405535609d
A common problem in Prometheus alerting is to detect when no timeseries
exist for a given metric name and label combination. Unfortunately,
Prometheus alert expressions need to be of vector type, and
"count(nonexistent_metric)" results in an empty vector, yielding no
output vector elements to base an alert on. The newly introduced
absent() function solves this issue:
ALERT FooAbsent IF absent(foo{job="myjob"}) [...]
absent() has the following behavior:
- if the vector passed to it has any elements, it returns an empty
vector.
- if the vector passed to it has no elements, it returns a 1-element
vector with the value 1.
In the second case, absent() tries to be smart about deriving labels of
the 1-element output vector from the input vector:
absent(nonexistent{job="myjob"}) => {job="myjob"}
absent(nonexistent{job="myjob",instance=~".*"}) => {job="myjob"}
absent(sum(nonexistent{job="myjob"})) => {}
That is, if the passed vector is a literal vector selector, it takes all
"=" label matchers as the basis for the output labels, but ignores all
non-equals or regex matchers. Also, if the passed vector results from a
non-selector expression, no labels can be derived.
Change-Id: I948505a1488d50265ab5692a3286bd7c8c70cd78
After many transformations, it doesn't make sense to keep the metric
names, since the result of the transformation is no longer that metric.
This drops the metric name after such transformations and makes the web
UI deal well with missing metric names.
In its current state, this change depends on the following:
- prometheus/client_golang needs to be at
e237cf15c6
in branch "julius/int-fingerprints" (to be merged with new storage)
- prometheus/promdash needs to be at
dd7691c9c2
Change-Id: Ib3c8cad8d647d9854e8c653c424b8c235ccc231d
This removes the dependency on C leveldb and snappy. It also leaves us
with fewer dependencies, which would not have worked on any non-Debian,
non-Brew system anyway.
Change-Id: Ia70dce1ba8a816a003587927e0b3a3f8ad2fd28c
In addition to the existing by-clause syntax:
sum(<expression>) by (<labels>) [keeping_extra]
...this allows the following new syntax:
sum by (<labels>) [keeping_extra] (<expression>)
Both orderings may be used in a single expression. It is up to the users
to establish guidelines around their usage.
Change-Id: Iba10c9cc5fb6ac62edfcf246d281473e82467992
This allows the following expression syntaxes for selecting timeseries:
foo (already valid before)
foo{} (already valid before)
{job="prometheus"} (new, select all timeseries for job "prometheus")
Omitting both the metric name *and* any label matchers ("" or "{}") will
still yield a syntax error.
To get all timeseries, you could do:
{__name__=~".*"}
or, without relying on knowledge about __name__:
{job=~".*"}
Change-Id: Ifee000b9ac0184ef6ced18411069c7f2699a2dda
- Staleness delta is now a proper function parameter and no longer
  replicated from package ast.
- Named type 'chunks' replaced by explicit '[]chunk' to avoid confusion.
- For the same reason, replaced 'chunkDescs' by '[]*chunkDesc'.
- Verified that math.Modf is not a speed enhancement over conversion
(actually 5x slower).
- Renamed firstTimeField and lastTimeField to chunkFirstTime and
  chunkLastTime.
- Verified unpin() is sufficiently goroutine-safe.
- Decided not to update archivedFingerprintToTimeRange upon series
truncation and added a rationale why.
Change-Id: I863b8d785e5ad9f71eb63e229845eacf1bed8534
To achieve O(n * log(k)) runtime, this uses a heap to track the current
bottom-k or top-k elements while iterating over the full set of
available elements.
It would be possible to reuse more code between topk and bottomk, but I
opted for some duplication for the sake of clarity.
This fixes https://github.com/prometheus/prometheus/issues/399
Change-Id: I7487ddaadbe7acb22ca2cf2283ba6e7915f2b336
- Always spell out the time unit (e.g. milliseconds instead of ms).
- Remove "_total" from the names of metrics that are not counters.
- Make use of the "Namespace" and "Subsystem" fields in the options.
- Removed the "capacity" facet from all metrics about channels/queues.
  These are all fixed via command line flags and will never change
  during the runtime of a process. Also, they should not be part of
  the same metric family. I have added separate metrics for the
  capacity of queues as a convenience. (They will never change and are
  only set once.)
- I left "metric_disk_latency_microseconds" unchanged, although that
metric measures the latency of the storage device, even if it is not
a spinning disk. "SSD" is read by many as "solid state disk", so
it's not too far off. (It should be "solid state drive", of course,
but "metric_drive_latency_microseconds" is probably confusing.)
- Brian suggested not mixing "failure" and "success" outcomes in the
  same metric family (distinguished by labels). For now, I have left it
  as it is. We are touching on a bigger issue here, especially as other
  parts of the Prometheus ecosystem follow the same principle. We
  still need to come to terms on this and then change things
  consistently everywhere.
Change-Id: If799458b450d18f78500f05990301c12525197d3
Add a function to bypass the new auto-escaping.
Add a function to work around Go's templates only allowing a single argument to be passed in.
Change-Id: Id7aa3f95e7c227692dc22108388b1d9b1e2eec99
Move rulemanager to its own package to break a circular dependency.
Make NewTestTieredStorage available to tests, remove duplication.
Change-Id: I33b321245a44aa727bfc3614a7c9ae5005b34e03
This was initially motivated by wanting to distribute the rule checker
tool under `tools/rule_checker`. However, this was not possible without
also distributing the LevelDB dynamic libraries because the tool
transitively depended on Levigo:
rule checker -> query layer -> tiered storage layer -> leveldb
This change separates external storage interfaces from the
implementation (tiered storage, leveldb storage, memory storage) by
putting them into separate packages:
- storage/metric: public, implementation-agnostic interfaces
- storage/metric/tiered: tiered storage implementation, including memory
and LevelDB storage.
I initially also considered splitting up the implementation into
separate packages for tiered storage, memory storage, and LevelDB
storage, but these are currently so intertwined that it would be another
major project in itself.
The query layer and most other parts of Prometheus now have no notion of
the storage implementation anymore and just use whatever implementation
they get passed in via interfaces.
The rule_checker is now a static binary :)
Change-Id: I793bbf631a8648ca31790e7e772ecf9c2b92f7a0
This allows putting a scalar as the first argument of a binary operator
in which the second argument is a vector:
<scalar> <binop> <vector>
For example,
1 / http_requests_total
...will output a vector in which every sample value is 1 divided by the
respective input vector element.
This even works for filter binary operators now:
1 == http_requests_total
...will return a vector with all values set to 1 for every element in
http_requests_total whose original value was 1.
Note: For filter binary operators, the resulting values are always taken
from the left-hand-side of the operation, no matter whether the scalar
or the vector argument is the left-hand-side. That is,
1 != http_requests_total
...will set all result vector sample values to 1, although these are
exactly the sample elements that were != 1 in the input vector.
If you want to just filter elements without changing their sample
values, you still need to do:
http_requests_total != 1
The new filter form is a bit exotic, and so probably won't be used
often. But it was easier to implement than to disallow completely or to
change its behavior.
Change-Id: Idd083f2bd3a1219ba1560cf4ace42f5b82e797a5
There are four label-matching ops for selecting timeseries now:
- Equal: =
- NotEqual: !=
- RegexMatch: =~
- RegexNoMatch: !~
Instead of looking up labels by a simple clientmodel.LabelSet (basically
an equals op for every key/value pair in the set), timeseries
fingerprint selection is now done via a list of metric.LabelMatchers.
Change-Id: I510a83f761198e80946146770ebb64e4abc3bb96
The initial impetus for this was that it made unmarshalling sample
values much faster.
Other relevant benchmark changes in ns/op:
Benchmark                                 old       new  speedup
==================================================================
BenchmarkMarshal                       179170    127996     1.4x
BenchmarkUnmarshal                     404984    132186     3.1x
BenchmarkMemoryGetValueAtTime           57801     50050     1.2x
BenchmarkMemoryGetBoundaryValues        64496     53194     1.2x
BenchmarkMemoryGetRangeValues           66585     54065     1.2x
BenchmarkStreamAdd                       45.0      75.3     0.6x
BenchmarkAppendSample1                   1157      1587     0.7x
BenchmarkAppendSample10                  4090      4284    0.95x
BenchmarkAppendSample100                45660     44066     1.0x
BenchmarkAppendSample1000              579084    582380     1.0x
BenchmarkMemoryAppendRepeatingValues 22796594  22005502     1.0x
Overall, this gives us good speedups in the areas where they matter
most: decoding values from disk and accessing the memory storage (which
is also used for views).
Some of the smaller append examples take minimally longer, but the cost
seems to get amortized over larger appends, so I'm not worried about
these. Also, we're currently not bottlenecked on the write path and have
plenty of other optimizations available in that area if it becomes
necessary.
Memory allocations during appends don't change measurably at all.
Change-Id: I7dc7394edea09506976765551f35b138518db9e8
Mostly, that means adding compliant doc strings to exported items.
Also, remove 'go vet' warnings where possible. (Some are unfortunately
unavoidable and arguably bugs in 'go vet'.)
Change-Id: I2827b6dd317492864c1383c3de1ea9eac5a219bb
MIN/MAX/SUM/AVG/COUNT aggregations will now by default drop all labels that are
not specifically part of a BY-clause, even if a label value is the same within
all timeseries of an aggregation group. The old behavior of keeping extra
labels may still be switched on by adding KEEPING_EXTRA to the end of an
aggregation statement:
sum(http_requests) by (job, method) keeping_extra
I'm open to better syntax/naming suggestions.
Change-Id: I21d3fe7af9e98552ce3dffa3ce7c0a4ba4c0b4a4
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:
- there could be time.Time values which were out of the range/precision of the
time type that we persist to disk, therefore causing incorrectly ordered keys.
One bug caused by this was:
https://github.com/prometheus/prometheus/issues/367
It would be good to use a timestamp type that's more closely aligned with
what the underlying storage supports.
- sizeof(time.Time) is 192 bits, while Prometheus should be ok with a single 64-bit
Unix timestamp (possibly even a 32-bit one). Since we store samples in large
numbers, this seriously affects memory usage. Furthermore, copying/working
with the data will be faster if it's smaller.
*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1 timeseries and
100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
tests, this advantage for some reason decreased a bit the more samples the
timeseries had (down to 5-7% for millions of samples). I can't fully explain
this, but perhaps garbage collection was involved.
*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.
For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
passed into Target.Scrape()).
When to still use time.Time:
- for measuring durations/times not related to sample timestamps, like duration
telemetry exporting, timers that indicate how frequently to execute some
action, etc.
*NOTE ON OPERATOR OPTIMIZATION TESTS*
We don't use the operator optimization code anymore, but it still lives in
the codebase as dead code. It still has tests, but I couldn't get all of them
to pass with the new timestamp format. I commented out the failing cases for
now, but we should probably remove the dead code soon. I just didn't want to
do that in the same change as this one.
Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
The ConsoleLinkForExpression() function now escapes console URLs in such a
way that they work both in emails and in HTML.
Change-Id: I917bae0b526cbbac28ccd2a4ec3c5ac03ee4c647
This includes required refactorings to enable replacing the http client (for
testing) and moving the NotificationReq type definitions to the "notifications"
package, so that this package doesn't need to depend on "rules" anymore and
can instead use a representation of the required data which only includes
the necessary fields.
This swaps github.com/kivikakk/golex for github.com/cznic/golex.
The old lexer would have taken 3.5 years to load a set of 5000 test rules
(quadratic time complexity in the input length), whereas this one takes only 32ms.
Furthermore, since the new lexer is embedded differently, this gets rid of the
global parser variables and makes the rule loader fully reentrant without a
lock.
This also short-circuits optimize() for now, since it is complex to implement
for the new operator, and ops generated by the query layer already fulfill the
needed invariants. We should still investigate later whether to completely
delete operator optimization code or extend it to support
getValueRangeAtIntervalOp operators.
This adds timers around several query-relevant code blocks. For now, the
query timer stats are only logged for queries initiated through the UI.
In other cases (rule evaluations), the stats are simply thrown away.
My hope is that this helps us understand where queries spend time,
especially in cases where they sometimes hang for unusual amounts of
time.
This commit adds telemetry for the Prometheus expression rule
evaluator, which will enable meta-Prometheus monitoring of customers
to ensure that no instance is falling behind in answering routine
queries.
A few other sundry simplifications are introduced, too.
Some users of GetMetricForFingerprint() end up modifying the returned metric
labelset. Since the memory storage's implementation of
GetMetricForFingerprint() returned a pointer to the metric (and maps are
reference types anyway), the external mutation propagated back into the
memory storage.
The fix is to make a copy of the metric before returning it.
This commit updates the documentation, Makefiles, formatting, and
code semantics to support the Go 1.1 runtime, which includes ...
1. ``make advice``,
2. ``make format``, and
3. ``go fix`` on various targets.
This commit extracts the model.Values truncation behavior into the actual
tiered storage, which uses it and behaves in a peculiar way, notably the
retention of previous elements if the chunk were ever to go empty. This is
done to enable interpolation between sparse sample values in the evaluation
cycle. Nothing necessarily new here, just an extraction.
Now, the model.Values TruncateBefore functionality does what a user
would expect without any surprises, which is required for the
DeletionProcessor, which may decide to split a large chunk in two if it
determines that the chunk contains the cut-off time.
This also removes the now-obsolete scalar count() function and corrects the
expressions test naming (broken in
2202cd71c9 (L6R59))
so that the expression tests will actually run.
This commit drops the Storage interface and replaces it with a
publicized TieredStorage type. Storage had been anticipated to be
used as a wrapper for testability, but in practice it just was not
used. Merely overengineered. My bad. Anyway, we will eventually
instantiate the TieredStorage dependencies in main.go and pass them
in for more intelligent lifecycle management.
These changes will pave the way for managing the curators without
Law of Demeter violations.
This commit introduces to Prometheus a batch database sample curator,
which corroborates the high watermarks for sample series against the
curation watermark table to see whether a curator of a given type
needs to be run.
The curator is an abstract executor, which runs various curation
strategies across the database. It records the progress for each
type of curation processor that runs for a given sample series.
A curation processor is responsible for effectuating the underlying
batch changes that are requested. In this commit, we introduce the
CompactionProcessor, which takes several bits of runtime metadata and
combines sparse sample entries in the database together to form larger
groups. For instance, for a given series it would be possible to
have the curator effectuate the following grouping:
- Samples Older than Two Weeks: Grouped into Bunches of 10000
- Samples Older than One Week: Grouped into Bunches of 1000
- Samples Older than One Day: Grouped into Bunches of 100
- Samples Older than One Hour: Grouped into Bunches of 10
The benefits of such a compaction are 1. a smaller search
space in the database keyspace, 2. better employment of compression
for repetitious values, and 3. reduced seek times.