Commit graph

940 commits

Author SHA1 Message Date
Julius Volz 91aebda74d Merge "Interim commit of metric model." 2014-02-25 15:24:17 +01:00
Julius Volz bc6ee6611e Rename persistence_adapter.go -> view_adapter.go
Change-Id: Ib45081393b734531d2f85a02f46e87930aab3273
2014-02-22 22:43:11 +01:00
Julius Volz 3f226c9724 Rename {Scalar,Vector}Literal to {Scalar,Vector}Selector.
Change-Id: Ie92301f47f5f49f30b3a62c365e377108982b080
2014-02-22 22:33:42 +01:00
Julius Volz a8d4a7ce48 Merge "Compact everything to the same sample group size." 2014-02-19 17:28:26 +01:00
Julius Volz 2279fcbac4 Compact everything to the same sample group size.
Change-Id: Ibb4f3a5d76173d64de916ef1eb41ab5d7900c97b
2014-02-19 16:22:20 +01:00
Julius Volz 92ea823e0c Fix alertmanager API path.
Change-Id: Iea6059decb121c7e75c1828406c4e0b3f2fc1c5d
2014-02-19 16:05:54 +01:00
Bjoern Rabenstein 682cf6fc51 Simplify QueryAnalyzer.Visit().
Change-Id: I628582a1903b7273e78921e22a475f1dae5ebaae
2014-02-14 15:15:57 +01:00
Bjoern Rabenstein fd63500ed3 Make rules/ast golint clean.
Mostly, that means adding compliant doc strings to exported items.

Also, remove 'go vet' warnings where possible. (Some unfortunately cannot
be avoided; they are arguably bugs in 'go vet'.)

Change-Id: I2827b6dd317492864c1383c3de1ea9eac5a219bb
2014-02-14 15:01:39 +01:00
Johannes 'fish' Ziemke 5e8026779f Make Dockerfile build prometheus in container
This way, the binary will be built in a clean environment, and prometheus
can be added to the Docker index.

Change-Id: I417fb90adf2503c990a96f4bad370b09b102e0b9
2014-02-14 11:47:47 +01:00
Björn Rabenstein 59febe771a Merge "Minor code cleanups." 2014-02-13 15:29:16 +01:00
Julius Volz c4adfc4f25 Minor code cleanups.
Change-Id: Ib3729cf38b107b7f2186ccf410a745e0472e3630
2014-02-13 15:24:43 +01:00
Julius Volz 67ccf7b8e7 Merge "Add -O3 to all C/C++ compiles." 2014-02-13 12:46:36 +01:00
Bjoern Rabenstein 1f90abdc1f Add -O3 to all C/C++ compiles.
So far, we have been compiling C/C++ code without any optimization.

In non-representative but practically relevant tests, -O3 reduced the
total query time for a demanding graph by ~20%.

Change-Id: I5e8123650e53a4933ed4fbe63d0b1ca67217b865
2014-02-13 12:37:43 +01:00
Julius Volz 8cadae6102 Merge "Fix LevelDB closing order." 2014-02-03 23:22:30 +01:00
Julius Volz 94666e20b7 Minor test error reporting cleanup.
Change-Id: Ie11c16b4e60de7c179c6d2a86e063f4432e2000f
2014-02-03 12:27:01 +01:00
Julius Volz fd2158e746 Store copy of metric during fingerprint caching
Problem description:
====================
If a rule evaluation referencing a metric/timeseries M happens at a time
when M doesn't have a memory timeseries yet, looking up the fingerprint
for M (via TieredStorage.GetMetricForFingerprint()) will create a new
Metric object for M which gets both: a) attached to a new empty memory
timeseries (so we don't have to ask disk for the Metric's fingerprint
next time), and b) returned to the rule evaluation layer. However, the
rule evaluation layer replaces the name label (and possibly other
labels) of the metric with the name of the recorded rule.  Since both
the rule evaluator and the memory storage share a reference to the same
Metric object, the original memory timeseries will now also be
incorrectly renamed.

Fix:
====
Instead of storing a reference to a shared metric object, take a copy of
the object when creating an empty memory timeseries for caching
purposes.
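
A minimal sketch of the copy (assuming Metric is a map from label names to
label values; names here are illustrative):

  // Metric maps label names to label values (simplified).
  type Metric map[string]string

  // copyMetric returns a copy for the memory timeseries to cache, so
  // that callers mutating their Metric (e.g. renaming the name label
  // for a recorded rule) no longer alias the cached object.
  func copyMetric(m Metric) Metric {
      c := make(Metric, len(m))
      for name, value := range m {
          c[name] = value
      }
      return c
  }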

Change-Id: I9f2172696c16c10b377e6708553a46ef29390f1e
2014-02-02 17:11:08 +01:00
Julius Volz 7e9ecaac3a Add count_scalar() function.
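count_scalar() returns the number of elements in an instant vector as a
scalar (in contrast to the count() aggregator, which returns a vector). A
hypothetical usage example:

  count_scalar(http_requests{job="api"})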
Change-Id: I63f09dd0479d0a6b016f5f857dd39dcbda56c7f9
2014-01-30 13:07:26 +01:00
Julius Volz 718ad2224b Fix LevelDB closing order.
The storage itself should be closed before any of the objects passed into it
are closed (otherwise closing the storage can randomly freeze). Defers are
executed in reverse order, so closing the storage should be the last of the
defer statements.
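
A minimal sketch of the resulting ordering (names hypothetical):

  // Defers run last-in, first-out: the storage's Close is deferred
  // last in the source, so it executes first, before the LevelDB
  // objects that were passed into the storage.
  indexDB := openLevelDB()
  defer indexDB.Close() // runs second

  storage := NewTieredStorage(indexDB)
  defer storage.Close() // deferred last, runs first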

Change-Id: Id920318b876f5b94767ed48c81221b3456770620
2014-01-28 15:16:06 +01:00
Julius Volz 18d9d00100 Upgrade to Go 1.2.
Change-Id: If8451257487edc4b76f4248f6e6b47c073dea183
2014-01-24 16:13:36 +01:00
Julius Volz b382e8b7bd Remove overly verbose DNS-SD logging line.
Change-Id: Ie4534437ab88b9a6b99f5cb6c2f32c9588c1fff6
2014-01-24 16:09:41 +01:00
Julius Volz 0378c2ca1f Nonexistent labels in BY-clauses shouldn't propagate to result.
This fixes bug 2 of https://github.com/prometheus/prometheus/issues/374
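
For illustration (hypothetical metric): before this fix, a query such as

  sum(http_requests) by (owner)

would presumably propagate an empty 'owner' label into its result even when
no input timeseries carries that label; now the nonexistent label is simply
omitted.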

Change-Id: Ia4a13153616bafce5bf10597966b071434422d09
2014-01-24 16:05:30 +01:00
Bjoern Rabenstein c342ad33a0 Fix OperatorError.
This used to work with Go 1.1, but only because of a compiler bug.
The bug is fixed in Go 1.2, so we have to fix our code now.

Change-Id: I5a9f3a15878afd750e848be33e90b05f3aa055e1
2014-01-21 16:49:51 +01:00
Julius Volz d5ef0c64dc Merge "Add optional sample replication to OpenTSDB." 2014-01-08 17:45:08 +01:00
Julius Volz 61d26e8445 Add optional sample replication to OpenTSDB.
Prometheus needs long-term storage. Since we don't have enough resources
to build our own timeseries storage from scratch on top of Riak,
Cassandra, or a similar distributed datastore at the moment, we're
planning on using OpenTSDB as long-term storage for Prometheus. Its data
model is roughly compatible with that of Prometheus, with some caveats.

As a first step, this adds write-only replication from Prometheus to
OpenTSDB, with the following things worth noting:

1)
I tried to keep the integration lightweight, meaning that anything
related to OpenTSDB is isolated to its own package and only main knows
about it (essentially it tees all samples to both the existing storage
and TSDB). It's not touching the existing TieredStorage at all to avoid
more complexity in that area. This might change in the future,
especially if we decide to implement a read path for OpenTSDB through
Prometheus as well.
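
A minimal sketch of such a tee in main (types and names hypothetical):

  // teeSamples forwards every incoming sample to both the existing
  // tiered storage and the OpenTSDB writer.
  func teeSamples(in <-chan *Sample, storage, tsdb chan<- *Sample) {
      for s := range in {
          storage <- s
          tsdb <- s
      }
  }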

2)
Backpressure while sending to OpenTSDB is handled by simply dropping
samples on the floor when the in-memory queue of samples destined for
OpenTSDB fills up. Prometheus also only attempts to send samples once,
rather than implementing a complex retry algorithm. Thus, replication to
OpenTSDB is best-effort for now.  If needed, this may be extended in the
future.
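
A minimal sketch of the queueing behavior in 2) (names hypothetical):

  // Enqueue without blocking: if the OpenTSDB queue is full, drop
  // the sample on the floor instead of applying backpressure.
  select {
  case tsdbQueue <- sample:
  default:
      droppedSamples.Inc() // telemetry counter, name hypothetical
  }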

3)
Samples are sent in batches of limited size to OpenTSDB. The optimal
batch size, timeout parameters, etc. may need to be adjusted in the
future.

4)
OpenTSDB has different rules for legal characters in tag (label) values.
While Prometheus allows any characters in label values, OpenTSDB limits
them to a to z, A to Z, 0 to 9, -, _, . and /. Currently any illegal
characters in Prometheus label values are simply replaced by an
underscore. Especially when integrating OpenTSDB with the read path in
Prometheus, we'll need to reconsider this: either we'll need to
introduce the same limitations for Prometheus labels or escape/encode
illegal characters in OpenTSDB in such a way that they are fully
decodable again when reading through Prometheus, so that corresponding
timeseries in both systems match in their labelsets.
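
A minimal sketch of the replacement in 4):

  import "regexp"

  // Replace any character that OpenTSDB does not allow in tag
  // values with an underscore.
  var illegalCharsRE = regexp.MustCompile(`[^a-zA-Z0-9_./-]`)

  func escapeTagValue(v string) string {
      return illegalCharsRE.ReplaceAllString(v, "_")
  }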

Change-Id: I8394c9c55dbac3946a0fa497f566d5e6e2d600b5
2014-01-02 18:21:38 +01:00
Julius Volz 7b013e6491 Merge "Replace some uses of obsolete /metrics.json with /metrics (haven't touched test files yet)." 2013-12-18 16:56:30 +01:00
Julius Volz f44f398ea7 Merge "Added DNS-SD lookup counter for successful/unsuccessful lookups" 2013-12-16 14:52:50 +01:00
Stuart Nelson 48a6326d25 Added DNS-SD lookup counter for successful/unsuccessful lookups
Change-Id: I0a71e994a989cecace280b5134a31ebc2ace7591
2013-12-16 08:48:56 -05:00
Julius Volz 97d84239df Merge "Don't keep extra labels in aggregations by default." 2013-12-16 12:54:55 +01:00
Julius Volz 6dc36d0c3e Don't keep extra labels in aggregations by default.
MIN/MAX/SUM/AVG/COUNT aggregations will now by default drop all labels that are
not specifically part of a BY-clause, even if a label value is the same within
all timeseries of an aggregation group. The old behavior of keeping extra
labels may still be switched on by adding KEEPING_EXTRA to the end of an
aggregation statement:

  sum(http_requests) by (job, method) keeping_extra

I'm open to better syntax/naming suggestions.

Change-Id: I21d3fe7af9e98552ce3dffa3ce7c0a4ba4c0b4a4
2013-12-16 12:53:10 +01:00
Stuart Nelson 0c58e388f6 rename curation metrics to prometheus_curation
Change-Id: I6a0bf277e88ea8eb737670b7e865ae20f2cbfb91
2013-12-13 17:45:01 -05:00
Julius Volz 20bfaf80ab Merge "Display filename when encountering bad rule file." 2013-12-13 15:01:02 +01:00
Stuart Nelson 28f59edf16 Added telemetry for counting stored samples
Change-Id: I0f36f7c2738d070ca2f107fcb315f98e46803af3
2013-12-12 10:06:41 -05:00
Julius Volz 3bf3a555b2 Merge "add evalDuration histogram and ruleCount counter for rules" 2013-12-11 22:52:19 +01:00
Stuart Nelson b75adfebad add evalDuration histogram and ruleCount counter for rules
Change-Id: I3508fe72526348d96b8158828388c3ac8d7c3fa9
2013-12-11 15:42:53 -05:00
Julius Volz 77a79d1fc0 Display filename when encountering bad rule file.
Change-Id: I4729371be92c5659a6938145c5fde66771d7be22
2013-12-11 15:44:11 +01:00
Julius Volz fb44580110 Cleanup/fix program termination sequence.
Change-Id: I2bc58a2583fb079c9ef383cfc7a5e0fbe613f1cd
2013-12-11 15:40:32 +01:00
Tobias Schmidt 6947ee9bc9 Try to create metrics root directory if missing
This change tries to be nice and creates the metrics directory before
erroring out.
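
A minimal sketch of the attempt (the flag name is hypothetical):

  import (
      "log"
      "os"
  )

  // Create the metrics directory, including any missing parents,
  // and only error out if creation fails.
  if err := os.MkdirAll(*metricsStoragePath, 0755); err != nil {
      log.Fatalf("Could not create metrics directory: %s", err)
  }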

Change-Id: I72691cdc32469708cd671c6ef1fb7db55fe60430
2013-12-03 18:16:13 +07:00
Tobias Schmidt 4300ce3dc8 Merge "Ensure that job names are unique in parsed configs." 2013-12-03 12:13:03 +01:00
Julius Volz 78ebc1a61f Ensure that job names are unique in parsed configs.
Change-Id: I6bd89e6401bd924315981db797af21bdf0b81252
2013-12-03 12:10:22 +01:00
Julius Volz 436f3df0e8 Merge "Add note that pbcopy is only available in OSX" 2013-12-03 12:08:55 +01:00
Tobias Schmidt ee7f81b665 Add note that pbcopy is only available in OSX
Change-Id: I4eda3a5a9117b5021fbc6e3625afa01100c39fa6
2013-12-03 18:06:04 +07:00
Julius Volz 740d448983 Use custom timestamp type for sample timestamps and related code.
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:

- there could be time.Time values which were outside the range/precision of the
  time type that we persist to disk, thereby causing incorrectly ordered keys.
  One bug caused by this was:

  https://github.com/prometheus/prometheus/issues/367

  It would be good to use a timestamp type that's more closely aligned with
  what the underlying storage supports.

- sizeof(time.Time) is 192 bits, while Prometheus should be ok with a single 64-bit
  Unix timestamp (possibly even a 32-bit one). Since we store samples in large
  numbers, this seriously affects memory usage. Furthermore, copying/working
  with the data will be faster if it's smaller.

*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1 timeseries and
100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
tests, this advantage shrank somewhat as the number of samples in the
timeseries grew (to 5-7% for millions of samples). I can't fully explain this,
but perhaps garbage collection effects were involved.

*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.

For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
  passed into Target.Scrape()).

When to still use time.Time:
- for measuring durations/times not related to sample timestamps, like duration
  telemetry exporting, timers that indicate how frequently to execute some
  action, etc.
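
A minimal sketch of a clientmodel.Timestamp-like type (the actual type may
differ in precision and helpers):

  import "time"

  // Timestamp is a sample timestamp: milliseconds since the Unix
  // epoch in a single int64, rather than a full time.Time.
  type Timestamp int64

  // TimestampFromTime converts a time.Time to a Timestamp.
  func TimestampFromTime(t time.Time) Timestamp {
      return Timestamp(t.Unix()*1000 + int64(t.Nanosecond())/int64(time.Millisecond))
  }

  // Time converts a Timestamp back into a time.Time.
  func (t Timestamp) Time() time.Time {
      return time.Unix(int64(t)/1000, (int64(t)%1000)*int64(time.Millisecond))
  }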

*NOTE ON OPERATOR OPTIMIZATION TESTS*
We no longer use the operator optimization code, but it still lives in the
codebase as dead code. It still has tests, but I couldn't get all of them to
pass with the new timestamp format. I commented out the failing cases for now;
we should probably remove the dead code soon, but I didn't want to do that in
the same change as this one.

Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
2013-12-03 09:11:28 +01:00
Julius Volz 6b7de31a3c Upgrade to LevelDB 1.14.0 to fix LevelDB bugs.
This tentatively fixes https://github.com/prometheus/prometheus/issues/368 via
an upstream fix to snapshotted LevelDB iterator handling that landed in
LevelDB 1.14.0:

https://code.google.com/p/leveldb/issues/detail?id=200

Change-Id: Ib0cc67b7d3dc33913a1c16736eff32ef702c63bf
2013-12-03 09:07:15 +01:00
Julius Volz db015de65b Comment and "go fmt" fixups in compaction tests.
Change-Id: Iaa0eda6a22a5caa0590bae87ff579f9ace21e80a
2013-10-30 17:06:17 +01:00
Johannes 'fish' Ziemke 8c08a5031f Add search domain support to SRV lookups
This adds search domain support by resolving a name with each search
domain configured in /etc/resolv.conf appended in turn, until a query
succeeds (NOERROR) and has at least one answer.
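
A minimal sketch of the loop, assuming the miekg/dns package is used for the
lookups (helper and query type are illustrative, not the actual code):

  import "github.com/miekg/dns"

  func query(fqdn string, conf *dns.ClientConfig) (*dns.Msg, error) {
      m := new(dns.Msg)
      m.SetQuestion(fqdn, dns.TypeSRV)
      c := new(dns.Client)
      resp, _, err := c.Exchange(m, conf.Servers[0]+":"+conf.Port)
      return resp, err
  }

  func lookupWithSearchDomains(name string) (*dns.Msg, error) {
      conf, err := dns.ClientConfigFromFile("/etc/resolv.conf")
      if err != nil {
          return nil, err
      }
      // Try the name with each configured search domain appended,
      // accepting the first NOERROR response with at least one answer.
      for _, domain := range conf.Search {
          msg, err := query(dns.Fqdn(name+"."+domain), conf)
          if err == nil && msg.Rcode == dns.RcodeSuccess && len(msg.Answer) > 0 {
              return msg, nil
          }
      }
      return query(dns.Fqdn(name), conf) // fall back to the bare name
  }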

Change-Id: Ibdc5138c5d8cc049e11fab90c3d5243d5a06852c
2013-10-29 17:19:49 +01:00
Julius Volz 39417f93ee Merge "Remove usage of gorest." 2013-10-28 10:29:33 +01:00
Julius Volz fceef4137c Fix /metrics endpoint in sample config.
Change-Id: I2daca6a31f536b87aa8e49a2190787ad9d803595
2013-10-28 08:03:58 +01:00
Julius Volz 51408bdfe8 Merge changes I3ffeb091,Idffefea4
* changes:
  Add chunk sanity checking to dumper tool.
  Add compaction regression tests.
2013-10-24 13:58:14 +02:00
Julius Volz 2162e57784 Merge "Fix watermarker default time / LevelDB key ordering bug." 2013-10-24 13:57:48 +02:00
Julius Volz 5e18255920 Merge "Fix chunk corruption compaction bug." 2013-10-24 13:57:31 +02:00