Commit graph

268 commits

Author SHA1 Message Date
beorn7 1e13f89039 Return SamplePair instead of *SamplePair consistently
Formalize ZeroSamplePair as return value for non-existing samples.

Change LastSamplePairForFingerprint to return a SamplePair (and not a
pointer to it), which saves allocations in a potentially extremely
frequent call.
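
A minimal sketch of the pattern (illustrative only; the store type and
lookup are assumptions, not the actual Prometheus code):

    type SamplePair struct {
        Timestamp int64
        Value     float64
    }

    // ZeroSamplePair is the formalized sentinel for non-existing samples.
    var ZeroSamplePair = SamplePair{}

    type store struct{ last map[uint64]SamplePair }

    // Returning the SamplePair by value (not *SamplePair) avoids a heap
    // allocation on this potentially very hot path.
    func (s *store) LastSamplePairForFingerprint(fp uint64) SamplePair {
        if sp, ok := s.last[fp]; ok {
            return sp
        }
        return ZeroSamplePair
    }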
2016-02-19 17:00:40 +01:00
beorn7 d290340367 Fix and improve chunkDesc locking 2016-02-19 16:24:38 +01:00
beorn7 0e202dacb4 Streamline series iterator creation
This will fix issue #1035 and will also help to make issue #1264 less
bad.

The fundamental problem in the current code:

In the preload phase, we quite accurately determine which chunks will
be used for the query being executed. However, in the subsequent step
of creating series iterators, the created iterators are referencing
_all_ in-memory chunks in their series, even the un-pinned ones. In
iterator creation, we copy a pointer to each in-memory chunk of a
series into the iterator. While this creates a certain amount of
allocation churn, the worst thing about it is that copying the chunk
pointer out of the chunkDesc requires a mutex acquisition. (Remember
that the iterator will also reference un-pinned chunks, so we need to
acquire the mutex to protect against concurrent eviction.) The worst
case happens if a series doesn't even contain any relevant samples for
the query time range. We notice that during preloading, but we then
still create a series iterator for it. But even for series that
do contain relevant samples, the overhead is quite bad for instant
queries that retrieve a single sample from each series, but still go
through all the effort of series iterator creation. All of that is
particularly bad if a series has many in-memory chunks.

This commit addresses the problem from two sides:

First, it merges preloading and iterator creation into one step,
i.e. the preload call returns an iterator for exactly the preloaded
chunks.

Second, the required mutex acquisition in chunkDesc has been greatly
reduced. That was enabled by a side effect of the first step, which is
that the iterator is only referencing pinned chunks, so there is no
risk of concurrent eviction anymore, and chunks can be accessed
without mutex acquisition.

To simplify the code changes for the above, the long-planned change
of ValueAtTime to ValueAtOrBeforeTime was performed at the same
time. (It should have been done first, but it kind of accidentally
happened while I was in the middle of writing the series iterator
changes. Sorry for that.) So far, we actively filtered the up to two
values that were returned by ValueAtTime, i.e. we invested work to
retrieve up to two values, and then we invested more work to throw one
of them away.
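
The new semantics, as a minimal sketch (illustrative; the iterator's
internal representation is an assumption):

    import "sort"

    type SamplePair struct {
        Timestamp int64
        Value     float64
    }

    type seriesIterator struct{ samples []SamplePair } // sorted by Timestamp

    // ValueAtOrBeforeTime returns the latest sample at or before t, or
    // the zero SamplePair if there is none. The old ValueAtTime returned
    // up to two neighboring samples, one of which callers then threw away.
    func (it *seriesIterator) ValueAtOrBeforeTime(t int64) SamplePair {
        i := sort.Search(len(it.samples), func(i int) bool {
            return it.samples[i].Timestamp > t
        })
        if i == 0 {
            return SamplePair{}
        }
        return it.samples[i-1]
    }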

The SeriesIterator.BoundaryValues method can be removed once #1401 is
fixed. But I really didn't want to load even more changes into this
PR.

Benchmarks:

The BenchmarkFuzz.* benchmarks take 83% less time (i.e. run about six
times faster) and allocate 95% fewer bytes. The reason is that the
benchmark reads one sample after another from the time series and
creates a new series iterator for each sample read.

To find out how much these improvements matter in practice, I have
mirrored a beefy Prometheus server at SoundCloud that suffers from
both issues #1035 and #1264. To reach steady state that would be
comparable, the server needs to run for 15d. So far, it has run for
1d. The test server currently has only half as many memory time series
and 60% of the memory chunks the main server has. The 90th percentile
rule evaluation cycle time is ~11s on the main server and only ~3s on
the test server. However, these numbers might get much closer over
time.

In addition to performance improvements, this commit removes about 150
LOC.
2016-02-19 16:24:38 +01:00
beorn7 ef3ab96111 Populate first and last time in the chunk descriptor earlier
The first time is kind of trivial, as we always know it when we
create a new chunkDesc.

The last time is only known when the chunk is closed, so we have to
set it at that time.

The change saves a lot of digging down into the chunk itself. The
last time in particular is relatively expensive to retrieve, as it
involves creating an iterator. Accessing the first time no longer
requires locking, which is also a nice gain.
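
A minimal sketch of the cached fields (illustrative; names are
assumptions):

    import "sync"

    type chunk interface{}

    // chunkDesc caches the first and last sample times so accessors no
    // longer have to dig into the chunk itself.
    type chunkDesc struct {
        sync.Mutex
        c         chunk // may be nil once evicted
        firstTime int64 // set once at creation, immutable afterwards
        lastTime  int64 // set when the chunk is closed
    }

    // The first time needs no locking: it is written once, before the
    // chunkDesc is shared, and never mutated.
    func (cd *chunkDesc) first() int64 { return cd.firstTime }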
2016-02-15 14:06:09 +01:00
beorn7 9a3edea477 Remove race condition from TestRetentionCutoff 2016-02-12 12:13:19 +01:00
Julius Volz 9b6d69610a Fix various typos in comments.
Helpfully reported by
https://goreportcard.com/report/github.com/prometheus/prometheus :)
2016-02-10 03:47:00 +01:00
Fabian Reinartz 1f877f3d2a Fix deadlock, structure target logging 2016-02-03 10:39:34 +01:00
Fabian Reinartz 59f1e722df Return error on sample appending 2016-02-02 14:01:44 +01:00
beorn7 ec08c9a391 Rework the way to communicate backpressure (AKA suspended ingestion)
This gives up on the idea of communicating through the Append() call (by
either not returning as it is now or returning an error as
suggested/explored elsewhere). Here I have added a Throttled() call,
which has the advantage that it can be called before a whole _batch_
of Append()'s. Scrapes will happen completely or not at all. Same for
rule group evaluations. That's a highly desired behavior (as discussed
elsewhere). The code is even simpler now as the whole ingestion buffer
could be removed.

Logging of throttled mode has been streamlined and will create at most
one message per minute.
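
A minimal sketch of the batch-level check (illustrative; the types and
the threshold are assumptions):

    type Sample struct {
        Timestamp int64
        Value     float64
    }

    type storage struct {
        numChunksToPersist, maxChunksToPersist int
    }

    // Throttled can be consulted once before a whole batch of Append()s,
    // so a scrape or rule-group evaluation happens completely or not at all.
    func (s *storage) Throttled() bool {
        return s.numChunksToPersist > s.maxChunksToPersist
    }

    func (s *storage) Append(smpl Sample) { /* ingest one sample */ }

    func scrape(s *storage, scraped []Sample) bool {
        if s.Throttled() {
            return false // drop the whole scrape
        }
        for _, smpl := range scraped {
            s.Append(smpl)
        }
        return true
    }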
2016-02-01 14:45:44 +01:00
beorn7 87ef24cd25 Add instrumentation and refactor things around "rushed mode" 2016-01-26 17:44:21 +01:00
beorn7 a2cd479058 Fix calculation of chunks to persist after restart
Since we are not overestimating the number of chunks to persist
anymore, this commit also adjusts the default value for
-storage.local.memory-chunks. Update of documentation will follow.
2016-01-25 19:33:51 +01:00
beorn7 972d94433a Introduce a hysteresis for "rushed mode"
"Rushed mode" is formerly known as "degraded mode", which is changed
with this commit, too. The name "degraded" was very misleading.

Also, switch into rushed mode if we have too many chunks in memory
and at least a reasonable number of chunks to persist, so that
speeding up persistence can actually help.
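
A minimal sketch of such a hysteresis (illustrative; the urgency score
and the 0.7/0.9 thresholds are made up for this sketch):

    type storage struct {
        rushed  bool
        urgency func() float64 // e.g. derived from chunks waiting to be persisted
    }

    // inRushedMode enters rushed mode above a high score but leaves it
    // only once the score has dropped well below that, so the mode
    // doesn't flap.
    func (s *storage) inRushedMode() bool {
        score := s.urgency()
        if s.rushed {
            s.rushed = score > 0.7
        } else {
            s.rushed = score > 0.9
        }
        return s.rushed
    }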
2016-01-25 19:24:37 +01:00
beorn7 14796bdb60 Improve chunkMaxBatchSize doc comment 2016-01-25 18:57:51 +01:00
beorn7 582af1618c Streamline chunk writing
This helps to avoid allocations in the same way we were already doing
it during reading.
2016-01-25 16:36:36 +01:00
beorn7 99b9611351 Remove a race condition from TestRetentionCutoff 2016-01-25 16:36:14 +01:00
beorn7 3f4d22e4c7 Update doc comment
This should have gone into a previous commit, but I forgot to save
this particular file.
2016-01-12 12:38:18 +01:00
beorn7 add2ebdd56 Tolerate the lost+found directory in the data directory 2016-01-11 18:05:36 +01:00
Björn Rabenstein 6293f3a374 Merge pull request #1304 from prometheus/beorn7/storage
Improve handling of series file truncation
2016-01-11 17:27:08 +01:00
beorn7 cb117d8346 Add a series ops metric "purge_on_request"
It counts series deletions triggered via the API.
2016-01-11 17:22:16 +01:00
beorn7 4221c7de5c Improve handling of series file truncation
If only very few chunks are to be truncated from a very large series
file, rewriting the whole file is a large overhead. With this change,
a certain ratio of the file has to be dropped for the rewrite to
happen. While causing disk overhead at about the same ratio (by
default 10%), this cuts down I/O by a lot in the above scenario.
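
A minimal sketch of the decision (illustrative; only the 10% default
is from the commit message):

    // Rewrite the series file only if at least this ratio of its chunks
    // would be dropped; otherwise keep the stale chunks on disk for now.
    const minChunkDropRatio = 0.1 // default: 10%

    func rewriteSeriesFile(chunksToDrop, totalChunks int) bool {
        return float64(chunksToDrop)/float64(totalChunks) >= minChunkDropRatio
    }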
2016-01-11 16:42:10 +01:00
Fabian Reinartz e3b6ec9784 Switch to common/log 2015-10-03 10:21:43 +02:00
beorn7 22d3a4311a Increase waiting time in TestEvictAndLoadChunkDescs
The test had become flaky with Go1.5.

The theory here is that with Go1.5.x, sleeping for 10ms might not be
enough to wake up another goroutine, possibly because it is used for
GC. 50ms should always be enough due to GC pause guarantees with the
new GC.
2015-09-14 21:09:46 +02:00
Julius Volz af513468eb Fix some dead code, missing error checks, shadowings.
I applied
https://medium.com/@jgautheron/quality-pipeline-for-go-projects-497e34d6567
and was greeted with a deluge of warnings, most of which were not
applicable or not realistically fixable. These are some of the first
ones I decided to fix.
2015-09-14 12:21:34 +02:00
beorn7 daeccdd0e9 Fix DropMetricsForFingerprints
It now deletes the series file for archived series, too.

Also, fix a naming error in a doc comment.
2015-09-11 15:47:23 +02:00
Julius Volz ffc5142c54 Merge pull request #1058 from prometheus/check-errors
Fix error checking and logging around checkpointing.
2015-09-07 19:57:16 +02:00
Julius Volz 6774a73878 Fix error checking and logging around checkpointing. 2015-09-07 19:34:59 +02:00
Julius Volz 011faf9057 Fix typo in comment. 2015-09-07 19:15:28 +02:00
Julius Volz 995d3b831d Fix most golint warnings.
This is with `golint -min_confidence=0.5`.

I left several lint warnings untouched because they were either
incorrect or I felt it was better not to change them at the moment.
2015-08-26 12:44:46 +02:00
Fabian Reinartz e061595352 Move COWMetric into storage/metric package 2015-08-25 11:59:07 +02:00
Brian Brazil fdf0d0642e Cast value to float, as that's what the console templates expect. 2015-08-24 16:59:08 +01:00
Fabian Reinartz 1535ef1457 Replace metric.SamplePair with model.SamplePair 2015-08-22 14:52:35 +02:00
Fabian Reinartz c9d396f476 Replace metric.LabelPair with model.LabelPair 2015-08-22 13:32:13 +02:00
Fabian Reinartz 438e232c9b Fix grouping of import blocks 2015-08-22 09:42:45 +02:00
Fabian Reinartz 306e8468a0 Switch from client_golang/model to common/model 2015-08-21 13:33:38 +02:00
Julius Volz f65ef1ed10 Fix wording in shutdown warning. 2015-08-17 14:26:53 +02:00
Brian Brazil 0ec71442cd Storage: Tell users how to avoid crash recovery.
If users see the crash recovery error, the chances are
they aren't shutting down Prometheus correctly. Telling
them how to do so will help them debug and fix the problem.
2015-08-16 10:42:31 +01:00
Laurie Malau 20ad403587 Don't warn/increment metric upon equal timestamps during append.
Perhaps it would be even better to still warn in case the sample value has
changed but the timestamps are equal, but we don't have efficient access
to the last value.
2015-08-09 23:49:49 +02:00
Julius Volz 517badc21d Only do regex lookups when there was no equality match.
For the label matching index-based preselection phase, don't do an OR
between equality and non-equality matchers. Execute only one of the two
(with equality matchers preferred when present).

Fixes https://github.com/prometheus/prometheus/issues/924
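
A minimal sketch of the preselection rule (illustrative; the matcher
type is an assumption):

    type matcher struct {
        name, value string
        isEquality  bool
    }

    // preselectionMatchers picks the matchers used for the index-based
    // preselection phase: equality matchers when any exist, regex
    // matchers only otherwise; the two index lookups are never OR-ed
    // together.
    func preselectionMatchers(ms []matcher) []matcher {
        var eq, re []matcher
        for _, m := range ms {
            if m.isEquality {
                eq = append(eq, m)
            } else {
                re = append(re, m)
            }
        }
        if len(eq) > 0 {
            return eq
        }
        return re
    }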
2015-07-23 23:13:30 +02:00
beorn7 699946bf32 Fix chunk desc loading.
If all samples in consecutive chunks have the same timestamp, the way
we used to load chunks will fail. With this change, the persist
watermark is used to load the right amount of chunkDescs from disk.

This bug is a possible reason for the rare storage corruption we have
observed.
2015-07-16 13:09:20 +02:00
beorn7 4203849c92 Test chunkDesc eviction and loading 2015-07-16 13:09:13 +02:00
beorn7 37e12df9ff Improve TestAppendOutOfOrder 2015-07-16 12:48:33 +02:00
beorn7 502aa9ded5 Use Has instead of Get for existence test. 2015-07-16 12:26:50 +02:00
beorn7 ff08f0b6fe storage: ensure timestamp monotonicity within series.
Fixes https://github.com/prometheus/prometheus/issues/481

While doing so, clean up and fix a few other things:

- Fix `go vet` warnings (@fabxc to blame ;).

- Fix a racy problem with unarchiving: Whenever we unarchive a
  series, we essentially want to do something with it. However, until
  we have done something with it, it appears like a series that is
  ready to be archived or even purged. So e.g. it would be ignored
  during checkpointing. With this fix, we always load the chunkDescs
  upon unarchiving. This is wasteful if we only want to add a new
  sample to an archived time series, but the (presumably more common)
  case where we access an archived time series in a query doesn't
  become more expensive.

- The change above streamlined the getOrCreateSeries and
  newMemorySeries flow. Also, the modTime is now always set correctly.

- Fix the leveldb-backed implementation of KeyValueStore.Delete. It
  wrongly returned true, nil even when a non-existing key was passed
  in (see the sketch after this list).
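
A minimal sketch of the corrected Delete, assuming the goleveldb
bindings (the wrapper type is made up for illustration):

    import "github.com/syndtr/goleveldb/leveldb"

    type levelDBKV struct{ db *leveldb.DB }

    // Delete reports whether the key actually existed; the old code
    // returned true, nil even for non-existing keys.
    func (kv *levelDBKV) Delete(key []byte) (bool, error) {
        has, err := kv.db.Has(key, nil)
        if err != nil || !has {
            return false, err
        }
        return true, kv.db.Delete(key, nil)
    }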
2015-07-15 18:56:53 +02:00
Julius Volz acbc2b8cb6 storage: Fix float->uint conversions on some compilers.
See https://github.com/prometheus/prometheus/issues/887, which will at
least be partially fixed by this.

From the spec https://golang.org/ref/spec#Conversions:

"In all non-constant conversions involving floating-point or complex
values, if the result type cannot represent the value the conversion
succeeds but the result value is implementation-dependent."

This ended up setting the converted values to 0 on Debian's Go 1.4.2
compiler, at least on 32-bit Debians.
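
A minimal sketch of the usual guard for this class of problem,
clamping before converting (illustrative; not necessarily the exact
fix in the commit):

    import "math"

    // Per the Go spec, an out-of-range float-to-uint conversion is
    // implementation-dependent, so clamp first to get a defined result
    // on every compiler.
    func clampToUint32(f float64) uint32 {
        switch {
        case f < 0:
            return 0
        case f > math.MaxUint32:
            return math.MaxUint32
        }
        return uint32(f)
    }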
2015-07-13 11:19:11 +02:00
beorn7 8c196c1028 Minor doc fixes. 2015-06-23 17:07:18 +02:00
Fabian Reinartz 6bfb4549a6 storage: add LastSamplePairForFingerprint method 2015-06-23 13:45:15 +02:00
Fabian Reinartz dc7d27ab9a retrieval: add honor label handling and parametrized querying.
This commit adds the honor_labels and params arguments to the scrape
config. This makes it possible to specify query parameters used by
the scrapers and to handle scraped labels with precedence.
2015-06-23 13:45:14 +02:00
beorn7 9016917d1c Increment dirty counter only if setDirty(true) is called.
Currently, we increment the counter even if setDirty(false) is called,
which sets the storage clean.
2015-06-22 18:12:55 +02:00
Fabian Reinartz 1eff186555 Merge pull request #810 from prometheus/fabxc/lmatch
Match empty labels.
2015-06-22 15:45:50 +02:00
Fabian Reinartz 5b91ea9b36 storage: improve label matching and allow unset matching.
Matching of empty labels now also matches metrics where the label
was not explicitly set to the empty string.
2015-06-22 15:33:44 +02:00
Fabian Reinartz 46df1fd5ea storage/local: add benchmark for label matching. 2015-06-22 15:33:44 +02:00
Fabian Reinartz b105e26f4d storage: remove global flags 2015-06-15 19:01:06 +02:00
Fabian Reinartz 5c6c0e2faa Add storage method to delete time series 2015-06-01 21:23:32 +02:00
Fabian Reinartz 0de6edbdfc Move pkg/ to util/ 2015-06-01 21:12:32 +02:00
Fabian Reinartz 2317b001d0 Move flock package to pkg/flock 2015-06-01 21:12:31 +02:00
Fabian Reinartz 3c8fbf1e15 Move test package to pkg/testutil 2015-06-01 21:12:31 +02:00
Fabian Reinartz aff01e29c3 Limit retrievable samples to retention window.
The storage does not delete data immediately after the retention period.
We don't want to retrieve this data as it causes artifacts.
2015-05-27 13:13:59 +02:00
Fabian Reinartz a92134a947 Merge pull request #724 from prometheus/fabxc/storage-startup
Read from indexing queue during crash recovery.
2015-05-23 16:50:47 +02:00
Fabian Reinartz 6e319532cf Read from indexing queue during crash recovery.
Change #704 introduced a regression: the queue was only read after
potential crash recovery had finished. When more than the queue capacity was
indexed, Prometheus deadlocked.
2015-05-23 15:32:35 +02:00
beorn7 dbcb3d9333 Use an RW lock to checkpoint fingerprint mappings.
This has to be backported to 0.13.x.
2015-05-23 14:05:05 +02:00
beorn7 3b9ab546e6 Add metrics to count inconsistencies and fp collisions. 2015-05-21 18:46:20 +02:00
Björn Rabenstein c44e7cd105 Merge pull request #706 from prometheus/beorn7/persistence2
Improve iterator performance.
2015-05-21 13:48:52 +02:00
Fabian Reinartz 112a778922 Align int64s for atomic operations 2015-05-21 01:38:50 +02:00
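
For context on the alignment commit above: on 32-bit platforms,
sync/atomic requires 64-bit operands to be 64-bit aligned, and the
easiest way to guarantee that is to place the int64 fields at the
start of the struct. A minimal sketch (field names are made up):

    import "sync/atomic"

    type ingestStats struct {
        ingestedSamples int64 // atomic access; keep first for alignment
        outOfOrder      int64 // atomic access
        smallField      int32 // narrower fields follow
    }

    func (s *ingestStats) incIngested() {
        atomic.AddInt64(&s.ingestedSamples, 1)
    }
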
beorn7 3b9c421a69 Weed out all the [Gg]et* method names.
The only exception is getNumChunksToPersist to avoid naming the struct
member numChunksToPersist in a weird way.
2015-05-20 19:13:06 +02:00
Julius Volz 267fd34156 Switch Prometheus to use github.com/prometheus/log.
This change is conceptually very simple, although the diff is large. It
switches logging from "github.com/golang/glog" to
"github.com/prometheus/log", while not actually changing any log
messages. V(1)-style logging has been changed to be log.Debug*().
2015-05-20 18:19:32 +02:00
beorn7 81b190bf45 Remove locking from series iterator. Cache chunk iterators. 2015-05-20 16:19:34 +02:00
beorn7 cd5574bf8a Make chunk and series iterators more efficient. 2015-05-20 16:19:34 +02:00
beorn7 f79c694be5 Add benchmarks for series iterator methods. 2015-05-20 16:19:34 +02:00
Fabian Reinartz f59a449a24 Fix storage test 2015-05-20 16:12:07 +02:00
Fabian Reinartz d8440d75f1 Do not start storage processing before Start() is called. 2015-05-19 13:51:45 +02:00
beorn7 d1a93655a1 Fix typo. 2015-05-11 17:15:30 +02:00
beorn7 7c6466d476 Reserve only ~1M FPs for the mapping.
That reduces the chance of having a fingerprint in the reserved area.
2015-05-08 18:10:56 +02:00
beorn7 ac75dc2812 Avoid archive lookup for known mapped FPs. 2015-05-08 16:39:26 +02:00
beorn7 ed810b45bf Improvements after review. 2015-05-08 13:35:39 +02:00
beorn7 c36e0e05f1 Add crash recovery of fingerprint mappings. 2015-05-07 18:58:14 +02:00
beorn7 2235cec175 Handle fingerprint collisions. 2015-05-07 18:17:59 +02:00
beorn7 9820e5fe99 Use FastFingerprint where appropriate. 2015-05-06 12:00:58 +02:00
Scott Worley e5f92d35fe Fix storage/local tests for 32-bit systems 2015-04-30 14:19:48 -07:00
beorn7 a052d32609 Comment improvement. 2015-04-14 10:49:43 +02:00
beorn7 66fc61f9b7 Make bufPool a member of the persistence struct. 2015-04-14 10:43:09 +02:00
beorn7 b02d900e61 Improve chunk and chunkDesc loading.
Also, clean up some things in the code (especially the introduction of
the chunkLenWithHeader constant to avoid repeating the same expression
all over the place).

Benchmark results:

BEFORE
BenchmarkLoadChunksSequentially     5000            283580 ns/op          152143 B/op        312 allocs/op
BenchmarkLoadChunksRandomly        20000             82936 ns/op           39310 B/op         99 allocs/op
BenchmarkLoadChunkDescs            10000            110833 ns/op           15092 B/op        345 allocs/op

AFTER
BenchmarkLoadChunksSequentially    10000            146785 ns/op          152285 B/op        315 allocs/op
BenchmarkLoadChunksRandomly        20000             67598 ns/op           39438 B/op        103 allocs/op
BenchmarkLoadChunkDescs            20000             99631 ns/op           12636 B/op        192 allocs/op

Note that everything is obviously loaded from the page cache (as the
benchmark runs thousands of times with very small series files). In a
real-world scenario, I expect a larger impact, as the disk operations
will more often actually hit the disk. To load ~50 sequential chunks,
this reduces the iops from 100 seeks and 100 reads to 1 seek and 1
read.
2015-04-13 21:06:04 +02:00
beorn7 c563398c68 Remove obsolete debug message. 2015-04-13 16:59:52 +02:00
beorn7 c5fa0b90c3 Fix the case where a series in memory has 0 chunks, but chunks on disk.
This is actually completely normal for a freshly unarchived series.

A test was added to expose the problem.
2015-04-09 15:57:11 +02:00
beorn7 3035b8bfdd Adaptively reduce the wait time for memory series maintenance.
This makes in-memory series maintenance faster the more chunks are
waiting for persistence.
2015-04-01 17:52:03 +02:00
beorn7 fbc44d8f95 Add benchmark for loading chunks and chunk descs. 2015-03-19 19:28:21 +01:00
beorn7 6a21f73898 Fixes after review. 2015-03-19 17:54:59 +01:00
beorn7 51d35f4481 Instrument series maintenance durations. 2015-03-19 17:06:16 +01:00
beorn7 12ae6e9203 Increase resilience of the storage against data corruption - step 4.
Step 4: Add a configurable sync'ing of series files after modification.
2015-03-19 15:58:02 +01:00
beorn7 11bd9ce1bd Increase resilience of the storage against data corruption - step 3.
Step 3: Remember the mtime of series files and make use of it to
detect series files that are not the ones the checkpoint thinks they
are.
2015-03-19 15:44:11 +01:00
beorn7 e25cca823c Increase resilience of the storage against data corruption - step 2.
Step 2: Add a flag -storage.local.pedantic-checks to check every
series file.

Also, remove countPersistedHeadChunks channel, which is unused.
2015-03-19 12:06:15 +01:00
beorn7 3d8d8928be Increase resilience of the storage against data corruption - step 1.
Step 1: Admit the problem by turning the various "panic"s into logged
errors, followed by marking the persistence as dirty.
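
A minimal sketch of the pattern (illustrative; types and wording are
assumptions):

    import (
        "log"
        "sync"
    )

    type persistence struct {
        mtx   sync.Mutex
        dirty bool
    }

    // Instead of panicking on an inconsistency, log it and mark the
    // persistence dirty; crash recovery then runs on the next startup.
    func (p *persistence) setDirty(err error) {
        log.Println("storage inconsistency, marking persistence as dirty:", err)
        p.mtx.Lock()
        p.dirty = true
        p.mtx.Unlock()
    }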
2015-03-19 11:49:18 +01:00
beorn7 da7c0461c6 Rename persist queue len/cap to num/max chunks to persist.
Remove deprecated flag storage.incoming-samples-queue-capacity.
2015-03-18 19:36:41 +01:00
beorn7 a075900f9a Merge branch 'beorn7/persistence' into beorn7/ingestion-tweaks 2015-03-18 19:09:31 +01:00
beorn7 1d8fc7d56f Change minor things after code review. 2015-03-18 19:09:07 +01:00
beorn7 be11cb2b07 Remove the sample ingestion channel.
The one central sample ingestion channel has caused a variety of
trouble. This commit removes it. Targets and rule evaluation call an
Append method directly now. To incorporate multiple storage backends
(like OpenTSDB), storage.Tee forks the Append into two different
appenders.
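
A minimal sketch of such a tee (illustrative; Sample and the interface
are simplified, and the real storage.Tee may differ):

    type Sample struct {
        Timestamp int64
        Value     float64
    }

    type SampleAppender interface {
        Append(s Sample)
    }

    // Tee forks every appended sample into two appenders, e.g. local
    // storage and an OpenTSDB queue manager.
    type Tee struct {
        Appender1, Appender2 SampleAppender
    }

    func (t Tee) Append(s Sample) {
        t.Appender1.Append(s)
        t.Appender2.Append(s)
    }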

Note that the tsdb queue manager had its own queue anyway. It was a
queue after a queue... Much queue, so overhead...

Targets have their own little buffer (implemented as a channel) to
avoid stalling during an http scrape. But a new scrape will only be
started once the old one is fully ingested.

The contraption of three pipelined ingesters was removed. A Target is
an ingester itself now. Despite more logic in Target, things should be
less confusing now.

Also, remove lint and vet warnings in ast.go.
2015-03-15 14:08:22 +01:00
beorn7 0056eaeb4f Redesign series maintenance and chunk persistence. 2015-03-14 22:05:23 +01:00
beorn7 5bea942d8e Improve various things around chunk encoding.
A number of mostly minor things:

- Rename chunk type -> chunk encoding.

- After all, do not carry around the chunk encoding to all parts of
  the system, but just have one place where the encoding for new
  chunks is set based on the flag. The new approach has caveats as
  well, but the pollution of so many method signatures is worse.

- Use the default chunk encoding for new chunks of existing
  series. (Previously, only new _series_ would get chunks with the
  default encoding.)

- Use an enum for chunk encoding, sketched after this list. (But keep
  the version number for the flag, for reasons discussed previously.)

- Add encoding() to the chunk interface (so that a chunk knows its own
  encoding - no need to have that in a different top-level function).

- Got rid of newFollowUpChunk (which would keep the existing encoding
  for all chunks of a time series). Now only newChunk() is used, which
  creates a chunk with the encoding set by the flag.

- Simplified transcodeAndAdd.

- Reordered methods of deltaEncodedChunk and doubleDeltaEncodedChunk
  to match the order in the chunk interface.

- Only transcode if the chunk is not yet half full. If more than half
  full, add a new chunk instead.
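
A minimal sketch of the enum (illustrative; the constant names are
assumptions based on the two encodings mentioned above):

    // The flag keeps its numeric "version" values, which map onto the enum.
    type chunkEncoding byte

    const (
        delta       chunkEncoding = iota // flag value 0
        doubleDelta                      // flag value 1
    )

    // Each chunk knows its own encoding via the interface, so no separate
    // top-level lookup is needed.
    type chunk interface {
        encoding() chunkEncoding
    }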
2015-03-14 19:03:20 +01:00
beorn7 9ecf93526d Sync the checkpoints.
Because that's what should be done with checkpoints.
2015-03-11 19:10:51 +01:00
beorn7 853f971540 Actually use double-delta encoding for transcoding. :-o 2015-03-11 16:52:58 +01:00
beorn7 23ba8a5516 Make floats exact again.
This should do the right thing for the old delta chunks, too.
2015-03-06 17:03:56 +01:00