We have seven different types whose names match LevelDB.*Options. One
of them is the plain LevelDBOptions. All the others merely wrap that
type without adding anything except clunkier handling.
If there ever was a plan to add more specific options to the various
LevelDB.*Options types, history has proven that nothing like that is
going to happen anytime soon.
To keep the code a bit shorter and more focused on the real (quite
significant) complexities we have to deal with here, this commit
reduces all uses of the various LevelDB.*Options types to the plain
LevelDBOptions type.
1576 fewer characters to read...
Change-Id: I3d7a2b7ffed78b337aa37f812c53c058329ecaa6
- Mostly docstring fixes/additions.
(Please review these carefully: since most of them were missing, I
had to guess them from an outsider's perspective. (Which, on the
other hand, proves how desperately needed many of these docstrings
are.))
- Removed all uses of new(...) to meet our own style guide (draft).
- Fixed all other 'go vet' and 'golint' issues (except those that are
not fixable (i.e. caused by bugs in or by design of 'go vet' and
'golint')).
- Some trivial refactorings, like reordering functions, minor renames, ...
- Some slightly less trivial refactorings, mostly to reduce code
duplication by embedding types instead of writing many explicit
forwarders (see the sketch after this list).
- Cleaned up the interface structure a bit. (Most significant is
probably the removal of the View-like methods from MetricPersistence.
Now they live only in View and are no longer duplicated.)
- Removed dead code. (Probably not all of it, but it's a first
step...)
- Fixed a leftover in storage/metric/end_to_end_test.go (that made
some parts of the code never execute (incidentally, those parts
were broken (and I fixed them, too))).
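To illustrate the embedding refactoring mentioned above, here is a
minimal sketch; the type names are hypothetical, not the ones actually
touched by this change:

    // LevelDBPersistence stands in for the wrapped type.
    type LevelDBPersistence struct{}

    func (p *LevelDBPersistence) Close() {}
    func (p *LevelDBPersistence) Flush() {}

    // Before: one hand-written forwarder per promoted method.
    type forwardingPersistence struct {
        p *LevelDBPersistence
    }

    func (f *forwardingPersistence) Close() { f.p.Close() }
    func (f *forwardingPersistence) Flush() { f.p.Flush() }

    // After: embedding promotes Close and Flush automatically,
    // with no hand-written forwarders.
    type embeddingPersistence struct {
        *LevelDBPersistence
    }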
Change-Id: Ibcac069940d118a88f783314f5b4595dce6641d5
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:
- there could be time.Time values which were out of the range/precision of the
time type that we persist to disk, therefore causing incorrectly ordered keys.
One bug caused by this was:
https://github.com/prometheus/prometheus/issues/367
It would be good to use a timestamp type that's more closely aligned with
what the underlying storage supports.
- sizeof(time.Time) is 192 bits, while Prometheus should be fine with a
single 64-bit Unix timestamp (possibly even a 32-bit one). Since we
store samples in large numbers, this seriously affects memory usage.
Furthermore, copying and working with the data is faster if it's
smaller.
*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1
timeseries and 100,000 samples show roughly a 13% decrease in total
(VIRT) memory usage. In my tests, this advantage shrank somewhat as the
number of samples per timeseries grew (to 5-7% for millions of
samples). I can't fully explain this, but garbage collection effects
may be involved.
*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.
For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
passed into Target.Scrape()).
When to still use time.Time:
- for measuring durations/times not related to sample timestamps, like duration
telemetry exporting, timers that indicate how frequently to execute some
action, etc.
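To make the distinction concrete, here is a minimal sketch of what such
a timestamp type can look like; the actual clientmodel.Timestamp may
differ in precision and method set:

    import "time"

    // Timestamp is a millisecond-precision Unix timestamp held in a
    // single int64 rather than the much larger time.Time struct.
    type Timestamp int64

    // TimestampFromTime converts a time.Time to a Timestamp.
    func TimestampFromTime(t time.Time) Timestamp {
        return Timestamp(t.UnixNano() / int64(time.Millisecond))
    }

    // Time converts a Timestamp back to a time.Time.
    func (t Timestamp) Time() time.Time {
        return time.Unix(int64(t)/1000, int64(t)%1000*int64(time.Millisecond))
    }

    // Add returns the Timestamp shifted by the duration d.
    func (t Timestamp) Add(d time.Duration) Timestamp {
        return t + Timestamp(d/time.Millisecond)
    }

    // Before reports whether t is chronologically before o.
    func (t Timestamp) Before(o Timestamp) bool { return t < o }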
*NOTE ON OPERATOR OPTIMIZATION TESTS*
We don't use operator optimization code anymore, but it still lives in
the code as dead code. It still has tests, but I couldn't get all of them to
pass with the new timestamp format. I commented out the failing cases for now,
but we should probably remove the dead code soon. I just didn't want to do that
in the same change as this.
Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
This commit addresses a long-standing criticism of the storage API
design: input parameters were always passed as raw bytes and never as
Protocol Buffer messages encapsulating the data, meaning every place
that performed a read or mutation had to do said translations manually
on its own. This is taxing.
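A hedged sketch of the before/after shape; the interface names are
hypothetical, and proto.Message comes from the Go Protocol Buffers
library:

    import "github.com/golang/protobuf/proto"

    // Before: raw bytes everywhere, so each call site marshals and
    // unmarshals by hand.
    type rawPersistence interface {
        Get(key []byte) (value []byte, present bool, err error)
        Put(key, value []byte) error
    }

    // After: the persistence layer accepts Protocol Buffer messages
    // and centralizes the (un)marshalling in one place.
    type protoPersistence interface {
        Get(key, value proto.Message) (present bool, err error)
        Put(key, value proto.Message) error
    }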
Change-Id: I4786938d0d207cefb7782bd2bd96a517eead186f
This commit is the first of several and should not be regarded as the
desired end state for these cleanups. What this one does, however, is
wrap the query index writing behind an interface type that can be
injected into the storage stack and have its lifecycle managed
separately as needed. It also means we can easily swap out the
underlying implementation to support remote indexing, buffering, or
no-op indexing.
In the future, most of the individual index interface members in the
tiered storage will go away in favor of agents that can query and
resolve what they need from the datastore without the user knowing
how and why they work.
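A hedged sketch of what such an injectable indexer interface might look
like; the names and member set are assumptions:

    type Fingerprint uint64
    type Metric map[string]string

    // MetricIndexer makes metrics retrievable by their label sets.
    // Implementations can index locally, remotely, with buffering, or
    // not at all.
    type MetricIndexer interface {
        IndexMetrics(map[Fingerprint]Metric) error
    }

    // noopIndexer satisfies MetricIndexer while discarding all work,
    // e.g. for benchmarks or read-only deployments.
    type noopIndexer struct{}

    func (noopIndexer) IndexMetrics(map[Fingerprint]Metric) error { return nil }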
Constructing a LevelDB storage instance takes too many parameters for
a plain construction method, so I've opted for the idiomatic approach
of embedding them in a struct for easier mediation and versioning.
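A minimal sketch of the pattern; the field names are assumptions, not
the actual LevelDBOptions fields:

    // LevelDBPersistence stands in for the real storage type.
    type LevelDBPersistence struct{}

    // LevelDBOptions groups what would otherwise be a long positional
    // parameter list; new fields can be added without breaking callers.
    type LevelDBOptions struct {
        Path           string
        Name           string
        Purpose        string
        CacheSizeBytes int
        FlushOnMutate  bool
    }

    func NewLevelDBPersistence(o LevelDBOptions) (*LevelDBPersistence, error) {
        // Open the database at o.Path using the bundled options.
        return &LevelDBPersistence{}, nil
    }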
A design question that was open for me in the beginning was whether
to serialize other types to disk, but Protocol Buffers quickly won
out, which allows us to drop support for the rest. This is a good
start toward cleaning up a lot of cruft in the storage stack and can
let us eventually decouple the various moving parts into separate
subsystems for easier reasoning.
This commit is not strictly required, but it is a start to making
the rest a lot more enjoyable to interact with.
The previous implementation spawned N goroutines to group samples
together, each of which would not start work until the semaphore
unblocked. While this didn't leak, it polluted the scheduling space.
Now a goroutine is only started after its semaphore slot has been
acquired.
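In sketch form, using a buffered channel as the semaphore; sampleGroup,
groups, process, and maxConcurrency are stand-ins for the real names:

    sem := make(chan struct{}, maxConcurrency)

    // Before: spawn first, block inside. N idle goroutines sit in the
    // scheduler waiting for the semaphore.
    //
    //     go func(g sampleGroup) {
    //         sem <- struct{}{}
    //         defer func() { <-sem }()
    //         process(g)
    //     }(g)

    // After: acquire first, spawn second. A goroutine exists only once
    // it can actually run.
    for _, g := range groups {
        sem <- struct{}{} // blocks here, in the spawning routine
        go func(g sampleGroup) {
            defer func() { <-sem }()
            process(g)
        }(g)
    }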
Some users of GetMetricForFingerprint() end up modifying the returned
metric labelset. Since the memory storage's implementation of
GetMetricForFingerprint() returned a pointer to the metric (and maps
are reference types anyway), the external mutation propagated back
into the memory storage.
The fix is to make a copy of the metric before returning it.
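A hedged sketch of the fix; the type names are illustrative:

    type Fingerprint uint64
    type Metric map[string]string

    type memorySeriesStorage struct {
        metrics map[Fingerprint]Metric
    }

    // GetMetricForFingerprint hands out a copy, so callers mutating
    // the result can no longer corrupt the storage's internal state.
    func (s *memorySeriesStorage) GetMetricForFingerprint(f Fingerprint) Metric {
        stored := s.metrics[f]
        out := make(Metric, len(stored))
        for name, value := range stored {
            out[name] = value
        }
        return out
    }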
This commit simplifies the way that compactions across a database's
keyspace occur, informed by a reading of the LevelDB internals.
Secondarily, it introduces the database size estimation mechanisms.
Include database health and help interfaces.
Add database statistics; remove status goroutines.
This commit kills the use of goroutines to expose status throughout
the web components of Prometheus. It also dumps raw LevelDB status on
a separate /databases endpoint.
The curator requires the existence of a curator remark table, which
stores the progress for a given curation policy. The tests for the
curator create an ad hoc table, but core Prometheus presently lacks
said table, which this commit adds.
Secondarily, the error handling for the LevelDB lifecycle functions
in the metric persistence has been wrapped into an UncertaintyGroup,
which mirrors some of the functions of sync.WaitGroup but adds
error-capturing capability to the mix.
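A hedged sketch of what such an UncertaintyGroup can look like; the
actual API in the tree may differ:

    import "sync"

    // UncertaintyGroup behaves like sync.WaitGroup but captures errors
    // reported by its participants.
    type UncertaintyGroup struct {
        wg   sync.WaitGroup
        mtx  sync.Mutex
        errs []error
    }

    func (g *UncertaintyGroup) Add(n int) { g.wg.Add(n) }

    // Succeed marks one participant as finished without error.
    func (g *UncertaintyGroup) Succeed() { g.wg.Done() }

    // Fail marks one participant as finished with an error.
    func (g *UncertaintyGroup) Fail(err error) {
        g.mtx.Lock()
        g.errs = append(g.errs, err)
        g.mtx.Unlock()
        g.wg.Done()
    }

    // Wait blocks until all participants report and returns the first
    // captured error, if any.
    func (g *UncertaintyGroup) Wait() error {
        g.wg.Wait()
        g.mtx.Lock()
        defer g.mtx.Unlock()
        if len(g.errs) > 0 {
            return g.errs[0]
        }
        return nil
    }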
This commit introduces to Prometheus a batch database sample curator,
which corroborates the high watermarks for sample series against the
curation watermark table to see whether a curator of a given type
needs to be run.
The curator is an abstract executor, which runs various curation
strategies across the database. It remarks the progress for each
type of curation processor that runs for a given sample series.
A curation processor is responsible for effectuating the underlying
batch changes that are requested. In this commit, we introduce the
CompactionProcessor, which takes several bits of runtime metadata and
combines sparse sample entries in the database together to form larger
groups. For instance, for a given series it would be possible to
have the curator effectuate the following grouping:
- Samples Older than Two Weeks: Grouped into Bunches of 10000
- Samples Older than One Week: Grouped into Bunches of 1000
- Samples Older than One Day: Grouped into Bunches of 100
- Samples Older than One Hour: Grouped into Bunches of 10
The benefits of such a compaction are 1. a smaller search space in
the database keyspace, 2. better employment of compression for
repetitious values, and 3. reduced seek times.
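Expressed as a sketch, the grouping schedule above could be declared
roughly as follows; the type and field names are assumptions for
illustration:

    import "time"

    // compactionPolicy pairs an age threshold with a target group size.
    type compactionPolicy struct {
        olderThan time.Duration
        groupSize uint32
    }

    // The example schedule from above.
    var policies = []compactionPolicy{
        {olderThan: 14 * 24 * time.Hour, groupSize: 10000}, // two weeks
        {olderThan: 7 * 24 * time.Hour, groupSize: 1000},   // one week
        {olderThan: 24 * time.Hour, groupSize: 100},        // one day
        {olderThan: time.Hour, groupSize: 10},              // one hour
    }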
The curator work can be done more easily if dto.SampleKey is no longer
accessed directly but rather wrapped in a higher-level type that
captures a certain modicum of business logic. This doesn't look
terribly interesting today, but it will become more so.
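For illustration, the wrapper could look roughly like this; the field
names and the MayContain helper are assumptions, reusing the Timestamp
sketch from earlier:

    type Fingerprint uint64

    // SampleKey wraps the raw dto.SampleKey and keeps encoding details
    // and small pieces of business logic in one place.
    type SampleKey struct {
        Fingerprint    Fingerprint
        FirstTimestamp Timestamp
        LastTimestamp  Timestamp
        SampleCount    uint32
    }

    // MayContain reports whether a sample taken at t could fall within
    // the range this key covers.
    func (k SampleKey) MayContain(t Timestamp) bool {
        return !t.Before(k.FirstTimestamp) && !k.LastTimestamp.Before(t)
    }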
This makes the memory persistence the backing store for views and
adjusts the MetricPersistence interface accordingly. It also removes
unused Get* method implementations from the LevelDB persistence so they
don't need to be adapted to the new interface. In the future, we should
rethink these interfaces.
All staleness and interpolation handling is now removed from the storage
layer and will be handled only by the query layer in the future.
With the benchmark tool, we thought we were generating multiple series
and saving them to disk as such, when in reality we kept overwriting
the fields of one outgoing metric via Go's map reference behavior (a
miniature sketch of the pitfall follows below). This was accidental.
In the course of diagnosing this, a few errors were found:
1. ``newSeriesFrontier`` should check to see if the candidate fingerprint is within the given domain of the ``diskFrontier``. If not, as the contract in the docstring stipulates, a ``nil`` ``seriesFrontier`` should be emitted.
2. In the interests of aiding debugging, the raw LevelDB ``levigoIterator`` type now includes a helpful forensics ``String()`` method.
This work produced additional cleanups:
1. ``Close() error`` within the storage stack is technically incorrect, since nowhere in its bowels does an error actually occur. The interface has been simplified to remove this for now.
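For reference, the map-aliasing pitfall described above looks like this
in miniature; the names are illustrative:

    import "fmt"

    metric := map[string]string{}
    var persisted []map[string]string

    for i := 0; i < 3; i++ {
        // Intends to create a new series per iteration but mutates the
        // one shared map instead:
        metric["name"] = fmt.Sprintf("series_%d", i)
        persisted = append(persisted, metric)
    }
    // All three entries now read "series_2": each element aliases the
    // same underlying map, because Go maps are reference types.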
After this commit, we'll need to add validations that it does the
desired work, which we presently know it doesn't. Given the plethora
of renamings among the changes I made, I want to commit this now
before it gets even larger.
The LevelDB storage types now return an interface type that wraps the
underlying iterator. This both enhances testability and, in my
opinion, improves the interface design for the LevelDB iterator.
Secondarily, the resource-reaping behavior for the LevelDB iterators
has been improved by dropping the externalized io.Closer object.
Finally, the iterator provisioning methods provide the option of
requesting a snapshotted iterator or not.
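A hedged sketch of the shape of such an iterator interface; the real
member set may differ:

    // Iterator wraps the raw LevelDB iterator behind an interface that
    // can be mocked in tests.
    type Iterator interface {
        Seek(key []byte) bool
        SeekToFirst() bool
        SeekToLast() bool
        Next() bool
        Previous() bool

        Key() []byte
        Value() []byte

        Error() error

        // Close releases the iterator along with any snapshot and read
        // options it holds, replacing the old externalized io.Closer.
        Close()
    }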