When a getValuesAtIntervalOp's ExtractSamples() is called
with a current time after the last chunk time, we return without
extracting any further values beyond the last one in the chunk
(correct), but also without advancing the op's time (incorrect). This
leads to an infinite loop in renderView(), since the op is called
repeatedly without ever being advanced and consumed.
This adds handling for this special case: upon detecting it, we
immediately mark the op as consumed, since we would always have gotten
a value after the passed-in current time if one existed.
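A minimal sketch of the fix, with hypothetical type and field names (the
actual op lives in operation.go and differs in detail):

    package metric

    import "time"

    type Sample struct {
        Timestamp time.Time
        Value     float64
    }

    type getValuesAtIntervalOp struct {
        from     time.Time // the op's current time
        interval time.Duration
        consumed bool
    }

    func (o *getValuesAtIntervalOp) ExtractSamples(in []Sample) (out []Sample) {
        if len(in) == 0 {
            return
        }
        lastChunkTime := in[len(in)-1].Timestamp
        if o.from.After(lastChunkTime) {
            // Nothing in this chunk lies at or after o.from, and any
            // such value would already have been extracted. Mark the
            // op consumed so renderView() stops rescheduling it.
            o.consumed = true
            return
        }
        // ... normal extraction, advancing o.from as we go ...
        return
    }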
Change-Id: Id99149e07b5188d655331382b8b6a461b677005c
This fixes a bug where an interval op might advance too far past the end
of the currently extracted chunk, effectively skipping over relevant
(to-be-extracted) values in the subsequent chunk. The result: missing
samples at chunk boundaries in the resulting view.
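A sketch of the corrected advancement, again with made-up names: the op
time may only move forward in interval steps up to the first interval
point past the current chunk, so extraction resumes cleanly in the next
chunk:

    package metric

    import "time"

    type intervalOp struct {
        from     time.Time
        interval time.Duration
    }

    // advancePastChunk steps the op time forward, but only to the first
    // interval point after lastChunkTime. Jumping any further would skip
    // over values at the beginning of the next chunk.
    func (o *intervalOp) advancePastChunk(lastChunkTime time.Time) {
        for !o.from.After(lastChunkTime) {
            o.from = o.from.Add(o.interval)
        }
    }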
Change-Id: Iebf5d086293a277d330039c69f78e1eaf084b3c8
This also fixes the compaction test, which previously passed only
because the input sample ordering happened to coincide with the
resulting on-disk sample ordering.
Change-Id: I2a21c4b46ba562424b27058fc02eba84fa6a6006
- Most of this is the actual regression test in tiered_test.go.
- Working on that regression test uncovered problems in
tiered_test.go that are fixed in this commit.
- The 'op.consumed = false' line added to freelist.go was actually not
fixing a bug. Instead, there was no bug at all. So this commit
removes that line again, but adds a regression test to make sure
that the assumed bug is indeed not there (cf. freelist_test.go).
- Removed more code duplication in operation.go (following the same
approach as before, i.e. embedding op type A into op type B if
everything in A is the same as in B with the exception of String()
and ExtractSamples(); see the sketch after this list). (This change
makes struct literals for ops more clunky, but that only affects
tests. No code change whatsoever was necessary in the actual code
after this refactoring.)
- Fix another op leak in tiered.go.
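The embedding approach mentioned above, sketched with simplified
stand-in types (the real ops carry time values and more state):

    package metric

    import "fmt"

    type getValuesAlongRangeOp struct {
        from, through int64
    }

    func (o *getValuesAlongRangeOp) String() string {
        return fmt.Sprintf("getValuesAlongRangeOp from %d through %d", o.from, o.through)
    }

    // Everything in getValueRangeAtIntervalOp is the same as in
    // getValuesAlongRangeOp except String() and ExtractSamples()
    // (only String() shown here), so it just embeds the other type
    // and overrides those two methods.
    type getValueRangeAtIntervalOp struct {
        getValuesAlongRangeOp
        interval int64
    }

    func (o *getValueRangeAtIntervalOp) String() string {
        return fmt.Sprintf("getValueRangeAtIntervalOp from %d through %d every %d",
            o.from, o.through, o.interval)
    }

    // The price: struct literals in tests get clunkier, e.g.
    //   op := getValueRangeAtIntervalOp{
    //       getValuesAlongRangeOp: getValuesAlongRangeOp{from: 0, through: 100},
    //       interval:              10,
    //   }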
Change-Id: Ia165c52e33290ad4f6aba9c83d92318d4f583517
The initial impetus for this was that it made unmarshalling sample
values much faster.
Other relevant benchmark changes in ns/op:
Benchmark                                   old       new  speedup
==================================================================
BenchmarkMarshal                         179170    127996     1.4x
BenchmarkUnmarshal                       404984    132186     3.1x
BenchmarkMemoryGetValueAtTime             57801     50050     1.2x
BenchmarkMemoryGetBoundaryValues          64496     53194     1.2x
BenchmarkMemoryGetRangeValues             66585     54065     1.2x
BenchmarkStreamAdd                         45.0      75.3     0.6x
BenchmarkAppendSample1                     1157      1587     0.7x
BenchmarkAppendSample10                    4090      4284    0.95x
BenchmarkAppendSample100                  45660     44066     1.0x
BenchmarkAppendSample1000                579084    582380     1.0x
BenchmarkMemoryAppendRepeatingValues   22796594  22005502     1.0x
Overall, this gives us good speedups in the areas where they matter
most: decoding values from disk and accessing the memory storage (which
is also used for views).
Some of the smaller append benchmarks take marginally longer, but the cost
seems to get amortized over larger appends, so I'm not worried about
these. Also, we're currently not bottlenecked on the write path and have
plenty of other optimizations available in that area if it becomes
necessary.
Memory allocations during appends don't change measurably at all.
Change-Id: I7dc7394edea09506976765551f35b138518db9e8
This doesn't add complex discriminator logic yet, but adds a single
version byte to the beginning of each samples chunk. If we ever need to
change the disk format again, this will make it easy to do so without
having to wipe the entire database.
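A sketch of the resulting chunk layout; the constant name and helper
functions are made up:

    package metric

    // formatVersion is the discriminator byte written at the start of
    // every marshalled samples chunk.
    const formatVersion byte = 1

    func marshalChunk(payload []byte) []byte {
        out := make([]byte, 0, 1+len(payload))
        out = append(out, formatVersion) // single version byte first
        return append(out, payload...)
    }

    func unmarshalChunk(raw []byte) ([]byte, bool) {
        if len(raw) == 0 || raw[0] != formatVersion {
            return nil, false // unknown (possibly future) format
        }
        return raw[1:], true
    }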
Change-Id: I60c39274256f790bc2da83167a1effaa174588fe
This fixes https://github.com/prometheus/prometheus/issues/381.
For any stale series we dropped from memory, this bug caused us to also drop
any other series from the labelpair->fingerprints memory index if they had any
label/value-pairs in common with the intentionally dropped series.
To fix this issue more easily, I converted the labelpair->fingerprints index
map values to a utility.Set of clientmodel.Fingerprints. This makes handling
this index much easier in general.
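A sketch of why the set representation helps; the types below are
simplified stand-ins for utility.Set and clientmodel.Fingerprint:

    package metric

    type Fingerprint uint64

    type fingerprintSet map[Fingerprint]struct{}

    // dropSeries removes exactly one stale series from a labelpair's
    // entry. Other series sharing the same labelpair stay indexed;
    // the entry itself is only dropped once the set is empty.
    func dropSeries(index map[string]fingerprintSet, labelPair string, fp Fingerprint) {
        set := index[labelPair]
        delete(set, fp)
        if len(set) == 0 {
            delete(index, labelPair)
        }
    }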
Change-Id: If5e81e202e8c542261bbd9797aa1257376c5c074
Currently, rendering a view is capable of handling multiple ops for
the same fingerprint efficiently. However, this capability requires a
lot of complexity in the code, and we are not using it at all: the
way we assemble a viewRequest never yields more than one operation
per fingerprint.
This commit weeds out said capability, along with all the code
needed for it. It is still possible to have more than one operation
for the same fingerprint; it will just be handled in a less efficient
way (as proven by the unit tests).
As a result, scanjob.go could be removed entirely.
This commit also contains a few related refactorings and removals of
dead code in operation.go, view.go, and freelist.go. Also, the
docstrings received some love.
Change-Id: I032b976e0880151c3f3fdb3234fb65e484f0e2e5
We have seven different types with names like LevelDB.*Options. One
of them is the plain LevelDBOptions. All others are just wrapping that
type without adding anything except clunkier handling.
If there ever was a plan to add more specific options to the various
LevelDB.*Options types, history has proven that nothing like that is
going to happen anytime soon.
To keep the code a bit shorter and more focused on the real (quite
significant) complexities we have to deal with here, this commit
reduces all uses of the various LevelDB.*Options types to the plain
LevelDBOptions type.
1576 fewer characters to read...
Change-Id: I3d7a2b7ffed78b337aa37f812c53c058329ecaa6
- Mostly docstring fixes/additions.
(Please review these carefully: since most of them were missing, I
had to guess them from an outsider's perspective, which in turn
proves how desperately required many of these docstrings are.)
- Removed all uses of new(...) to meet our own style guide (draft).
- Fixed all other 'go vet' and 'golint' issues, except those that are
not fixable, i.e. those caused by bugs in, or by the design of,
'go vet' and 'golint'.
- Some trivial refactorings, like reordering functions, minor renames, ...
- Some slightly less trivial refactoring, mostly to reduce code
duplication by embedding types instead of writing many explicit
forwarders.
- Cleaned up the interface structure a bit. (Most significant is
probably the removal of the View-like methods from MetricPersistence.
Now they are only in View and no longer duplicated.)
- Removed dead code. (Probably not all of it, but it's a first
step...)
- Fixed a leftover in storage/metric/end_to_end_test.go that caused
some parts of the code to never execute; incidentally, those parts
turned out to be broken, too, and are now fixed.
Change-Id: Ibcac069940d118a88f783314f5b4595dce6641d5
Problem description:
====================
If a rule evaluation referencing a metric/timeseries M happens at a time
when M doesn't have a memory timeseries yet, looking up the fingerprint
for M (via TieredStorage.GetMetricForFingerprint()) will create a new
Metric object for M which gets both: a) attached to a new empty memory
timeseries (so we don't have to ask disk for the Metric's fingerprint
next time), and b) returned to the rule evaluation layer. However, the
rule evaluation layer replaces the name label (and possibly other
labels) of the metric with the name of the recorded rule. Since both
the rule evaluator and the memory storage share a reference to the same
Metric object, the original memory timeseries will now also be
incorrectly renamed.
Fix:
====
Instead of storing a reference to a shared metric object, take a copy of
the object when creating an empty memory timeseries for caching
purposes.
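The aliasing problem and the fix, reduced to a sketch. Metric here
stands in for clientmodel.Metric, which is a map and therefore shared
by reference:

    package metric

    type Metric map[string]string

    type memorySeries struct {
        metric Metric
    }

    // Buggy: the cached series and the caller share one map. If the
    // rule evaluator later overwrites the metric's name label in m,
    // the cached series is silently renamed along with it.
    func cacheShared(m Metric) *memorySeries {
        return &memorySeries{metric: m}
    }

    // Fixed: store a copy, so later mutations by the caller cannot
    // reach the cached series.
    func cacheCopied(m Metric) *memorySeries {
        c := make(Metric, len(m))
        for name, value := range m {
            c[name] = value
        }
        return &memorySeries{metric: c}
    }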
Change-Id: I9f2172696c16c10b377e6708553a46ef29390f1e
The storage itself should be closed before any of the objects passed into it
are closed (otherwise closing the storage can randomly freeze). Defers are
executed in reverse order, so closing the storage should be the last of the
defer statements.
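Since defers run LIFO, the rule boils down to this shape (names
hypothetical):

    package metric

    type closer interface {
        Close()
    }

    func runTest(dir, storage closer) {
        defer dir.Close()     // registered first, runs last
        defer storage.Close() // registered last, runs first: the
        // storage is closed before the objects passed into it.

        // ... test body ...
    }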
Change-Id: Id920318b876f5b94767ed48c81221b3456770620
This used to work with Go 1.1, but only because of a compiler bug.
The bug is fixed in Go 1.2, so we have to fix our code now.
Change-Id: I5a9f3a15878afd750e848be33e90b05f3aa055e1
Prometheus needs long-term storage. Since we don't have enough resources
to build our own timeseries storage from scratch on top of Riak,
Cassandra, or a similar distributed datastore at the moment, we're
planning on using OpenTSDB as long-term storage for Prometheus. Its
data model is roughly compatible with that of Prometheus, with some
caveats.
As a first step, this adds write-only replication from Prometheus to
OpenTSDB, with the following things worth noting:
1)
I tried to keep the integration lightweight, meaning that anything
related to OpenTSDB is isolated to its own package and only main knows
about it (essentially it tees all samples to both the existing storage
and TSDB). It's not touching the existing TieredStorage at all to avoid
more complexity in that area. This might change in the future,
especially if we decide to implement a read path for OpenTSDB through
Prometheus as well.
2)
Backpressure while sending to OpenTSDB is handled by simply dropping
samples on the floor when the in-memory queue of samples destined for
OpenTSDB runs full. Prometheus also only attempts to send samples once,
rather than implementing a complex retry algorithm. Thus, replication to
OpenTSDB is best-effort for now. If needed, this may be extended in the
future.
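A sketch of the drop-on-full behavior, assuming a channel-based queue
(names made up):

    package opentsdb

    type Sample struct {
        Timestamp int64
        Value     float64
    }

    // enqueue hands a sample to the sender goroutine if there is room
    // in the queue and drops it on the floor otherwise, so ingestion
    // never blocks on OpenTSDB.
    func enqueue(queue chan<- *Sample, s *Sample, dropped *int) {
        select {
        case queue <- s:
        default:
            *dropped++ // best-effort: count the drop and move on
        }
    }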
3)
Samples are sent in batches of limited size to OpenTSDB. The optimal
batch size, timeout parameters, etc. may need to be adjusted in the
future.
4)
OpenTSDB has different rules for legal characters in tag (label) values.
While Prometheus allows any characters in label values, OpenTSDB limits
them to a to z, A to Z, 0 to 9, -, _, . and /. Currently any illegal
characters in Prometheus label values are simply replaced by an
underscore. Especially when integrating OpenTSDB with the read path in
Prometheus, we'll need to reconsider this: either we'll need to
introduce the same limitations for Prometheus labels or escape/encode
illegal characters in OpenTSDB in such a way that they are fully
decodable again when reading through Prometheus, so that corresponding
timeseries in both systems match in their labelsets.
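A sketch of the replacement; the helper name is made up, and the
regexp mirrors the allowed set described above:

    package opentsdb

    import "regexp"

    var illegalCharsRE = regexp.MustCompile(`[^a-zA-Z0-9_\-./]`)

    // escapeTagValue replaces every character that OpenTSDB disallows
    // in tag values with an underscore. Note that this is lossy:
    // distinct Prometheus label values may collide after escaping.
    func escapeTagValue(v string) string {
        return illegalCharsRE.ReplaceAllString(v, "_")
    }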
Change-Id: I8394c9c55dbac3946a0fa497f566d5e6e2d600b5
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:
- there could be time.Time values which were out of the range/precision of the
time type that we persist to disk, therefore causing incorrectly ordered keys.
One bug caused by this was:
https://github.com/prometheus/prometheus/issues/367
It would be good to use a timestamp type that's more closely aligned with
what the underlying storage supports.
- sizeof(time.Time) is 192 bits, while Prometheus should be ok with a single 64-bit
Unix timestamp (possibly even a 32-bit one). Since we store samples in large
numbers, this seriously affects memory usage. Furthermore, copying/working
with the data will be faster if it's smaller.
*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1 timeseries and
100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
tests, this advantage for some reason decreased a bit the more samples the
timeseries had (to 5-7% for millions of samples). This I can't fully explain,
but perhaps garbage collection issues were involved.
*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.
For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
passed into Target.Scrape()).
When to still use time.Time:
- for measuring durations/times not related to sample timestamps, like duration
telemetry exporting, timers that indicate how frequently to execute some
action, etc.
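The core of such a type, sketched; the real clientmodel.Timestamp
carries more conversion helpers, and millisecond precision is an
assumption here:

    package clientmodel

    import "time"

    // Timestamp is a single int64 (milliseconds since the Unix epoch)
    // instead of a 192-bit time.Time.
    type Timestamp int64

    func TimestampFromTime(t time.Time) Timestamp {
        return Timestamp(t.UnixNano() / int64(time.Millisecond))
    }

    func (t Timestamp) Time() time.Time {
        return time.Unix(int64(t)/1000, int64(t)%1000*int64(time.Millisecond))
    }

    func (t Timestamp) Add(d time.Duration) Timestamp {
        return t + Timestamp(d/time.Millisecond)
    }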
*NOTE ON OPERATOR OPTIMIZATION TESTS*
We don't use the operator optimization code anymore, but it still
lives on as dead code. It still has tests, but I couldn't get all of them to
pass with the new timestamp format. I commented out the failing cases for now,
but we should probably remove the dead code soon. I just didn't want to do that
in the same change as this.
Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
This fixes part 2) of https://github.com/prometheus/prometheus/issues/367
(uninitialized time.Time mapping to a higher LevelDB key than "normal"
timestamps).
Change-Id: Ib079974110a7b7c4757948f81fc47d3d29ae43c9
This fixes part 1) of https://github.com/prometheus/prometheus/issues/367 (the
storing of samples with the wrong fingerprint into a compacted chunk, thus
corrupting it).
Change-Id: I4c36d0d2e508e37a0aba90b8ca2ecc78ee03e3f1
This commit fixes a critique of the old storage API design: input
parameters were always passed as raw bytes and never as Protocol
Buffer messages encapsulating the data, meaning every place that
performed a read or mutation had to do these translations manually.
This is taxing.
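An illustration of the difference; the interfaces and message types
below are placeholders, not the real API:

    package metric

    // Before: raw bytes in and out, so every caller had to marshal
    // and unmarshal by hand.
    type rawPersistence interface {
        Get(key []byte) (value []byte, err error)
    }

    // After: the API speaks in message types that encapsulate the
    // data; the byte-level translation happens in one place.
    type SampleKey struct {
        Fingerprint uint64
        Timestamp   int64
    }

    type SampleValues struct {
        Values []float64
    }

    type typedPersistence interface {
        Get(key *SampleKey) (*SampleValues, error)
    }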
Change-Id: I4786938d0d207cefb7782bd2bd96a517eead186f
While a hack, this change should allow us to serve queries
expeditiously during a flush operation.
Change-Id: I9a483fd1dd2b0638ab24ace960df08773c4a5079
The background curation should be staggered to ensure that disk
I/O yields to user-interactive operations in a timely manner. The
lack of routine prioritization necessitates this.
Change-Id: I9b498a74ccd933ffb856e06fedc167430e521d86
Move the stream to an interface, since a number of additional changes
around it are underway.
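A sketch of the direction; the method set shown is illustrative only:

    package metric

    // stream as an interface: the existing in-memory representation
    // becomes one implementation behind it, so the upcoming changes
    // can swap in others without touching callers.
    type stream interface {
        add(timestamp int64, value float64)
        size() int
    }

    type arrayStream struct {
        timestamps []int64
        values     []float64
    }

    func (s *arrayStream) add(t int64, v float64) {
        s.timestamps = append(s.timestamps, t)
        s.values = append(s.values, v)
    }

    func (s *arrayStream) size() int { return len(s.values) }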
Conflicts:
storage/metric/memory.go
Change-Id: I4a5fc176f4a5274a64ebdb1cad52600954c463c3