The Prometheus monitoring system and time series database.
beorn7 fc6737b7fb storage: improve index lookups
tl;dr: This is not a fundamental solution to the indexing problem
(as tindex is), but it at least avoids running into the intersection
problem as much as possible.

In more detail:

Imagine the following query:

    nicely:aggregating:rule{job="foo",env="prod"}

While it uses a nicely aggregating recording rule (which might have a
very low cardinality), Prometheus still intersects the low number of
fingerprints for `{__name__="nicely:aggregating:rule"}` with the many
thousands of fingerprints matching `{job="foo"}` and with the millions
of fingerprints matching `{env="prod"}`. This totally innocuous query
is dead slow if the Prometheus server has a lot of time series with
the `{env="prod"}` label. Ironically, if you make the query more
complicated, it becomes blazingly fast:

    nicely:aggregating:rule{job=~"foo",env=~"prod"}

Why so? Because Prometheus only intersects with non-Equal matchers if
there are no Equal matchers. That's good in this case because it
retrieves the few fingerprints for
`{__name__="nicely:aggregating:rule"}` and then goes straight to
retrieving the metric for those FPs and checking individually whether
they match the other matchers.
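
A minimal sketch of that old rule, with hypothetical, simplified types
(the real code in the storage package is structured and named
differently):

    package sketch

    type Fingerprint uint64
    type FPSet map[Fingerprint]struct{}

    type MatchType int

    const (
        Equal MatchType = iota
        NotEqual
        RegexMatch
        RegexNoMatch
    )

    type Matcher struct {
        Type  MatchType
        Name  string // label name, e.g. "env"
        Value string // label value (or regex source), e.g. "prod"
    }

    // Index stands in for the inverted index: one lookup per matcher.
    type Index interface {
        FingerprintsFor(m Matcher) FPSet
    }

    func intersect(a, b FPSet) FPSet {
        out := FPSet{}
        for fp := range a {
            if _, ok := b[fp]; ok {
                out[fp] = struct{}{}
            }
        }
        return out
    }

    // fingerprintsOld: intersect the FP sets of ALL Equal matchers, however
    // large (millions of FPs for env="prod"), and use non-Equal matchers
    // for index lookups only if no Equal matcher exists at all (not shown).
    func fingerprintsOld(idx Index, matchers []Matcher) FPSet {
        var result FPSet
        for _, m := range matchers {
            if m.Type != Equal {
                continue
            }
            if result == nil {
                result = idx.FingerprintsFor(m)
            } else {
                result = intersect(result, idx.FingerprintsFor(m))
            }
        }
        return result
    }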

This change generalizes the idea of when to stop intersecting FPs and
switch to "retrieve metrics and check them individually against
remaining matchers" mode:

- First, sort all matchers by "expected cardinality". Matchers matching
  the empty string are always worst (and never used for intersections).
  Equal matchers are in general considered best, but some crude
  heuristics declare certain ones better than others (instance labels
  or anything that looks like a recording rule).

- Then go through the matchers until the number of remaining FPs in the
  intersection drops below a threshold. This threshold is higher once we
  are already in the non-Equal matcher area, as intersection is even
  more expensive there.

- Once the threshold has been reached (or we have run out of matchers
  that do not match the empty string), switch to "retrieve metrics and
  check them individually against remaining matchers" mode, as sketched
  below.
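
Sketching that generalized procedure with the same hypothetical types as
above (the heuristics and thresholds are illustrative stand-ins, not the
real values; `sort` and `strings` need to be imported):

    // expectedCardinality encodes the crude heuristics: lower means
    // "expected to match fewer fingerprints".
    func expectedCardinality(m Matcher) int {
        switch {
        case matchesEmptyString(m):
            return 4 // worst; never used for intersections
        case m.Type != Equal:
            return 3
        case m.Name == "__name__" && strings.Contains(m.Value, ":"):
            return 0 // looks like a recording rule
        case m.Name == "instance":
            return 1 // a single target's series
        default:
            return 2
        }
    }

    // matchesEmptyString reports whether a matcher accepts "".
    func matchesEmptyString(m Matcher) bool {
        switch m.Type {
        case Equal:
            return m.Value == ""
        case NotEqual:
            return m.Value != ""
        default:
            return false // elided: evaluate the regex against ""
        }
    }

    // fingerprintsNew intersects in order of expected cardinality and stops
    // as soon as the candidate set is small enough; the threshold is higher
    // in the non-Equal area, where intersecting is even more expensive.
    // Matchers in rest are afterwards checked against each candidate's
    // metric individually.
    func fingerprintsNew(idx Index, matchers []Matcher) (result FPSet, rest []Matcher) {
        sort.Slice(matchers, func(i, j int) bool {
            return expectedCardinality(matchers[i]) < expectedCardinality(matchers[j])
        })
        const equalThreshold, nonEqualThreshold = 1000, 10000 // illustrative
        for i, m := range matchers {
            threshold := equalThreshold
            if m.Type != Equal {
                threshold = nonEqualThreshold
            }
            if matchesEmptyString(m) || (result != nil && len(result) <= threshold) {
                rest = matchers[i:]
                break
            }
            if result == nil {
                result = idx.FingerprintsFor(m)
            } else {
                result = intersect(result, idx.FingerprintsFor(m))
            }
        }
        return result, rest
    }

For the example query above, this would look up the few FPs for the
recording-rule name and then check `{job="foo",env="prod"}` against each
candidate's metric instead of intersecting their huge FP sets.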

A beefy server at SoundCloud was spending 67% of its CPU time in index
lookups (fingerprintsForLabelPairs), serving mostly a dashboard that
is exclusively built with recording rules. With this change, it spends
only 35% in fingerprintsForLabelPairs. The CPU usage dropped from 26
cores to 18 cores. The median latency for query_range dropped from 14s
to 50ms(!). As expected, higher-percentile latencies didn't improve as
much, because the new approach only _occasionally_ runs into the worst
case that the old one hit _systematically_. The 99th percentile latency
is now about as high as the old median (14s), where previously it was
almost twice that (26s).
2016-07-20 17:35:53 +02:00
.github .github: Add issue template 2016-06-06 11:48:14 +02:00
cmd web: return status code and error message for config resource 2016-07-15 10:15:24 +02:00
config config: validate Kubernetes role correctly. 2016-07-18 22:24:41 +09:00
console_libraries Add blackbox console. 2015-11-01 20:06:52 +00:00
consoles The metrics are no longer ms, we can remove the scaling. 2016-06-29 01:09:24 +01:00
documentation Kubernetes SD: Update example config with TLS options 2016-06-27 14:38:51 +01:00
notifier web: return status code and error message for config resource 2016-07-15 10:15:24 +02:00
promql storage: improve index lookups 2016-07-20 17:35:53 +02:00
retrieval web: return status code and error message for config resource 2016-07-15 10:15:24 +02:00
rules web: return status code and error message for config resource 2016-07-15 10:15:24 +02:00
scripts New release process using docker, circleci and a centralized 2016-04-18 22:41:04 +02:00
storage storage: improve index lookups 2016-07-20 17:35:53 +02:00
template Switch chunk encoding to type 2 where it was hardcoded type 1 before 2016-03-20 23:32:20 +01:00
util Add ServerName into TLS Config 2016-05-26 14:24:49 -07:00
vendor vendor: update prometheus org dependencies 2016-07-04 11:09:06 +02:00
web Merge pull request #1820 from prometheus/console-api 2016-07-18 21:59:21 +01:00
.dockerignore New release process using docker, circleci and a centralized 2016-04-18 22:41:04 +02:00
.gitignore gitignore: clean up 2016-07-04 11:34:33 +02:00
.promu.yml promu: don't build openbsd/arm and mips 2016-06-15 15:41:22 +02:00
.travis.yml *: bump default Go version to 1.6.2 2016-05-02 12:15:41 +02:00
AUTHORS.md Update Fabian's email address 2016-03-24 17:02:57 +01:00
CHANGELOG.md *: cut 1.0.0 2016-07-18 22:38:51 +09:00
circle.yml circle: add tag v-prefix 2016-07-14 11:46:48 +09:00
CONTRIBUTING.md Update CONTRIBUTING.md. 2015-01-22 15:07:20 +01:00
Dockerfile Fix consoles and console_libraries path in Dockerfile. 2016-05-30 01:02:38 +03:00
LICENSE Clean up license issues. 2015-01-21 20:07:45 +01:00
Makefile fix build with multi-part $GOPATH 2016-05-24 16:50:27 -06:00
NOTICE Add support for Zookeeper Serversets for SD. 2015-06-16 11:02:08 +01:00
README.md readme: update debian and container releases 2016-07-01 10:16:49 +02:00
VERSION *: cut 1.0.0 2016-07-18 22:38:51 +09:00

Prometheus

Visit prometheus.io for the full documentation, examples and guides.

Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

Prometheus' main distinguishing features as compared to other monitoring systems are:

  • a multi-dimensional data model (timeseries defined by metric name and set of key/value dimensions)
  • a flexible query language to leverage this dimensionality
  • no dependency on distributed storage; single server nodes are autonomous
  • timeseries collection happens via a pull model over HTTP (a minimal target is sketched after this list)
  • pushing timeseries is supported via an intermediary gateway
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboarding support
  • support for hierarchical and horizontal federation
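
Because collection is pull-based, an instrumented target just serves its
current metric values over HTTP and Prometheus scrapes them on its own
schedule. A minimal Go target using the client_golang library (the
metric name, handler, and port are illustrative):

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // httpRequests is an example metric; name and label are illustrative.
    var httpRequests = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "myapp_http_requests_total",
            Help: "HTTP requests handled, by path.",
        },
        []string{"path"},
    )

    func main() {
        prometheus.MustRegister(httpRequests)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            httpRequests.WithLabelValues(r.URL.Path).Inc()
            w.Write([]byte("hello\n"))
        })

        // Prometheus pulls ("scrapes") this endpoint; the target never
        // pushes to the server.
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":8080", nil)
    }

Pointing a scrape configuration at this target's address is then all the
server needs to start collecting.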

Architecture overview

Install

There are various ways of installing Prometheus.

Precompiled binaries

Precompiled binaries for released versions are available in the releases section of the GitHub repository. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.

Debian packages are available.

Container images

Container images are available on Quay.io.

Building from source

To build Prometheus from the source code yourself, you need a working Go environment with version 1.5 or greater installed.

You can directly use the go tool to download and install the prometheus and promtool binaries into your GOPATH. We use Go 1.5's experimental vendoring feature, so you will also need to set the GO15VENDOREXPERIMENT=1 environment variable in this case:

$ GO15VENDOREXPERIMENT=1 go get github.com/prometheus/prometheus/cmd/...
$ prometheus -config.file=your_config.yml

You can also clone the repository yourself and build using make:

$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/prometheus.git
$ cd prometheus
$ make build
$ ./prometheus -config.file=your_config.yml

The Makefile provides several targets:

  • build: build the prometheus and promtool binaries
  • test: run the tests
  • format: format the source code
  • vet: check the source code for common errors
  • assets: rebuild the static assets
  • docker: build a docker container for the current HEAD

More information

  • The source code is periodically indexed: Prometheus Core.
  • You will find a Travis CI configuration in .travis.yml.
  • All of the core developers are accessible via the Prometheus Developers mailing list and the #prometheus channel on irc.freenode.net.

Contributing

Refer to CONTRIBUTING.md.

License

Apache License 2.0, see LICENSE.