// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package local

import (
	"time"

	"github.com/prometheus/common/model"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/storage/metric"
)

// Storage ingests and manages samples, along with various indexes. All
// methods are goroutine-safe. Storage implements storage.SampleAppender.
type Storage interface {
	Querier

	// This SampleAppender needs multiple samples for the same fingerprint
	// to be submitted in chronological order, from oldest to newest. When
	// Append has returned, the appended sample might not be queryable
	// immediately. (Use WaitForIndexing to wait for complete processing.)
	// The implementation might remove labels with an empty value from the
	// provided Sample, as such labels are considered equivalent to labels
	// that are not present at all.
	//
	// Appending is throttled if the Storage has too many chunks in memory
	// already or has too many chunks waiting for persistence.
	storage.SampleAppender

	// DropMetricsForFingerprints drops all time series associated with the
	// given fingerprints.
	DropMetricsForFingerprints(...model.Fingerprint)
	// Start runs the various maintenance loops in goroutines. It returns
	// once the storage is ready to use and keeps everything running in the
	// background until Stop is called.
	Start() error
	// Stop shuts down the Storage gracefully, flushes all pending
	// operations, stops all maintenance loops, and frees all resources.
	Stop() error
	// WaitForIndexing returns once all samples in the storage are indexed.
	// Indexing is needed for MetricsForLabelMatchers and
	// LabelValuesForLabelName and may lag behind.
	WaitForIndexing()
}
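
// The function below is a hypothetical sketch (not part of the interface
// definitions in this file) illustrating the Append contract described
// above: samples for one fingerprint are submitted oldest first, and
// WaitForIndexing is called before the appended data is relied upon in
// index-based lookups.
func exampleAppendAndWait(s Storage, samples []*model.Sample) error {
	for _, smpl := range samples {
		// Samples for the same fingerprint must arrive in chronological
		// order, from oldest to newest.
		if err := s.Append(smpl); err != nil {
			return err
		}
	}
	// Appended samples might not be queryable immediately; block until
	// indexing has caught up.
	s.WaitForIndexing()
	return nil
}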

// Querier allows querying a time series storage.
type Querier interface {
	// NewPreloader returns a new Preloader which allows preloading and
	// pinning series data into memory for use within a query.
	NewPreloader() Preloader
	// MetricsForLabelMatchers returns the metrics from storage that satisfy
	// the given label matchers. At least one label matcher that does not
	// match the empty string must be specified; otherwise, an empty map is
	// returned. The times from and through are hints for the storage to
	// optimize the search. The storage MAY exclude metrics that have no
	// samples in the specified interval from the returned map. If in doubt,
	// specify model.Earliest for from and model.Latest for through.
	MetricsForLabelMatchers(from, through model.Time, matchers ...*metric.LabelMatcher) map[model.Fingerprint]metric.Metric
	// LastSampleForFingerprint returns the last sample that has been
	// ingested for the provided fingerprint. If this instance of the
	// Storage has never ingested a sample for the provided fingerprint (or
	// the last ingestion is so long ago that the series has been archived),
	// ZeroSample is returned.
	LastSampleForFingerprint(model.Fingerprint) model.Sample
	// LabelValuesForLabelName returns all of the label values that are
	// associated with the given label name.
	LabelValuesForLabelName(model.LabelName) model.LabelValues
}
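
// A minimal lookup sketch (hypothetical helper; it assumes the existing
// metric.NewLabelMatcher constructor and the model.MetricNameLabel
// constant): resolve all series whose metric name equals the given value,
// passing the query time range as an optimization hint.
func exampleMetricsForName(q Querier, name model.LabelValue, from, through model.Time) (map[model.Fingerprint]metric.Metric, error) {
	// An Equal matcher against __name__ with a non-empty value does not
	// match the empty string, satisfying the requirement stated on
	// MetricsForLabelMatchers.
	m, err := metric.NewLabelMatcher(metric.Equal, model.MetricNameLabel, name)
	if err != nil {
		return nil, err
	}
	return q.MetricsForLabelMatchers(from, through, m), nil
}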

// SeriesIterator enables efficient access to sample values in a series. Its
// methods are not goroutine-safe. A SeriesIterator iterates over a snapshot
// of a series, i.e. it is safe to continue using a SeriesIterator after or
// during modification of the corresponding series, but the iterator will
// represent the state of the series prior to the modification.
type SeriesIterator interface {
	// ValueAtOrBeforeTime gets the value that is closest before the given
	// time. In case a value exists at precisely the given time, that value
	// is returned. If no applicable value exists, ZeroSamplePair is
	// returned.
	ValueAtOrBeforeTime(model.Time) model.SamplePair
	// RangeValues gets all values contained within a given interval.
	RangeValues(metric.Interval) []model.SamplePair
}
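
// A short sketch (hypothetical helper) of the point-lookup semantics: the
// iterator returns the closest value at or before the requested time, and
// the ZeroSamplePair sentinel (defined at the end of this file) when the
// series has no applicable value.
func exampleValueAt(it SeriesIterator, ts model.Time) (model.SamplePair, bool) {
	v := it.ValueAtOrBeforeTime(ts)
	// ZeroSamplePair carries the model.Earliest timestamp, which never
	// occurs in real data, so the timestamp alone identifies it.
	return v, v.Timestamp != model.Earliest
}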

// A Preloader preloads series data necessary for a query into memory, pins
// it until released via Close(), and returns an iterator for the pinned
// data. Its methods are generally not goroutine-safe.
type Preloader interface {
	// PreloadRange preloads and pins all data for the given fingerprint
	// within the given time range and returns an iterator over the pinned
	// data.
	PreloadRange(
		fp model.Fingerprint,
		from, through model.Time,
	) SeriesIterator
	// PreloadInstant preloads and pins the data for the given fingerprint
	// that is relevant for an instant query at the given timestamp, looking
	// back at most stalenessDelta, and returns an iterator over the pinned
	// data.
	PreloadInstant(
		fp model.Fingerprint,
		timestamp model.Time, stalenessDelta time.Duration,
	) SeriesIterator
	// Close unpins any previously requested series data from memory.
	Close()
}
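
// The canonical preload-then-iterate flow as a sketch (hypothetical helper,
// assuming the metric.Interval bounds OldestInclusive/NewestInclusive): pin
// the data needed for a range query, read the values through the returned
// iterator, and unpin everything via Close when done.
func exampleRangeQuery(q Querier, fp model.Fingerprint, from, through model.Time) []model.SamplePair {
	p := q.NewPreloader()
	// Close must be called to unpin the preloaded data; defer makes this
	// robust against early returns.
	defer p.Close()
	it := p.PreloadRange(fp, from, through)
	return it.RangeValues(metric.Interval{
		OldestInclusive: from,
		NewestInclusive: through,
	})
}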

// ZeroSamplePair is the pseudo zero-value of model.SamplePair used by the
// local package to signal a non-existing sample pair. It is a SamplePair
// with timestamp model.Earliest and value 0.0. Note that the natural zero
// value of SamplePair has a timestamp of 0, which may appear in a real
// SamplePair and is thus not suitable to signal a non-existing SamplePair.
var ZeroSamplePair = model.SamplePair{Timestamp: model.Earliest}

// ZeroSample is the pseudo zero-value of model.Sample used by the local
// package to signal a non-existing sample. It is a Sample with timestamp
// model.Earliest, value 0.0, and metric nil. Note that the natural zero
// value of Sample has a timestamp of 0, which may appear in a real Sample
// and is thus not suitable to signal a non-existing Sample.
var ZeroSample = model.Sample{Timestamp: model.Earliest}
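
// A sketch (hypothetical helper) of the intended sentinel check: because
// the natural zero value of model.Sample can occur in real data, absence is
// signaled via the model.Earliest timestamp instead.
func exampleHasSample(q Querier, fp model.Fingerprint) bool {
	s := q.LastSampleForFingerprint(fp)
	// model.Time is comparable, so testing the timestamp suffices to
	// detect ZeroSample.
	return s.Timestamp != model.Earliest
}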