Merge branch 'master' into dev-2.0

Fabian Reinartz · 2017-04-27 10:19:55 +02:00 · commit 73b8ff0ddc
55 changed files with 1460 additions and 358 deletions


@@ -3,11 +3,9 @@ sudo: false
 language: go
 go:
-- 1.8
+- 1.8.x
 go_import_path: github.com/prometheus/prometheus
 script:
 - make check_license style test


@@ -1,3 +1,72 @@
## 1.6.1 / 2017-04-19

* [BUGFIX] Don't panic if storage has no FPs even after initial wait

## 1.6.0 / 2017-04-14

* [CHANGE] Replaced the remote write implementations for various backends by a
  generic write interface with example adapter implementation for various
  backends. Note that both the previous and the current remote write
  implementations are **experimental**.
* [FEATURE] New flag `-storage.local.target-heap-size` to tell Prometheus about
  the desired heap size. This deprecates the flags
  `-storage.local.memory-chunks` and `-storage.local.max-chunks-to-persist`,
  which are kept for backward compatibility.
* [FEATURE] Add `check-metrics` to `promtool` to lint metric names.
* [FEATURE] Add Joyent Triton discovery.
* [FEATURE] `X-Prometheus-Scrape-Timeout-Seconds` header in HTTP scrape
  requests.
* [FEATURE] Remote read interface, including example for InfluxDB. **Experimental.**
* [FEATURE] Enable Consul SD to connect via TLS.
* [FEATURE] Marathon SD supports multiple ports.
* [FEATURE] Marathon SD supports bearer token for authentication.
* [FEATURE] Custom timeout for queries.
* [FEATURE] Expose `buildQueryUrl` in `graph.js`.
* [FEATURE] Add `rickshawGraph` property to the graph object in console
  templates.
* [FEATURE] New metrics exported by Prometheus itself:
  * Summary `prometheus_engine_query_duration_seconds`
  * Counter `prometheus_evaluator_iterations_missed_total`
  * Counter `prometheus_evaluator_iterations_total`
  * Gauge `prometheus_local_storage_open_head_chunks`
  * Gauge `prometheus_local_storage_target_heap_size`
* [ENHANCEMENT] Reduce shut-down time by interrupting an ongoing checkpoint
  before starting the final checkpoint.
* [ENHANCEMENT] Auto-tweak times between checkpoints to limit time spent in
  checkpointing to 50%.
* [ENHANCEMENT] Improved crash recovery deals better with certain index
  corruptions.
* [ENHANCEMENT] Graphing deals better with constant time series.
* [ENHANCEMENT] Retry remote writes on recoverable errors.
* [ENHANCEMENT] Evict unused chunk descriptors during crash recovery to limit
  memory usage.
* [ENHANCEMENT] Smoother disk usage during series maintenance.
* [ENHANCEMENT] Targets on targets page sorted by instance within a job.
* [ENHANCEMENT] Sort labels in federation.
* [ENHANCEMENT] Set `GOGC=40` by default, which results in much better memory
  utilization at the price of slightly higher CPU usage. If `GOGC` is set by
  the user, it is still honored as usual.
* [ENHANCEMENT] Close head chunks after being idle for the duration of the
  configured staleness delta. This helps to persist and evict head chunks of
  stale series more quickly.
* [ENHANCEMENT] Stricter checking of relabel config.
* [ENHANCEMENT] Cache busters for static web content.
* [ENHANCEMENT] Send Prometheus-specific user-agent header during scrapes.
* [ENHANCEMENT] Improved performance of series retention cut-off.
* [ENHANCEMENT] Mitigate impact of non-atomic sample ingestion on
  `histogram_quantile` by enforcing buckets to be monotonic.
* [ENHANCEMENT] Released binaries built with Go 1.8.1.
* [BUGFIX] Send `instance=""` with federation if `instance` not set.
* [BUGFIX] Update to new `client_golang` to get rid of unwanted quantile
  metrics in summaries.
* [BUGFIX] Introduce several additional guards against data corruption.
* [BUGFIX] Mark storage dirty and increment
  `prometheus_local_storage_persist_errors_total` on all relevant errors.
* [BUGFIX] Propagate storage errors as 500 in the HTTP API.
* [BUGFIX] Fix int64 overflow in timestamps in the HTTP API.
* [BUGFIX] Fix deadlock in Zookeeper SD.
* [BUGFIX] Fix fuzzy search problems in the web-UI auto-completion.

## 1.5.2 / 2017-02-10

* [BUGFIX] Fix series corruption in a special case of series maintenance where


@@ -1,7 +1,7 @@
 Maintainers of this repository with their focus areas:

-* Björn Rabenstein <beorn@soundcloud.com>: Local storage; general code-level issues.
+* Björn Rabenstein <beorn@soundcloud.com> @beorn7: Local storage; general code-level issues.
-* Brian Brazil <brian.brazil@robustperception.io>: Console templates; semantics of PromQL, service discovery, and relabeling.
+* Brian Brazil <brian.brazil@robustperception.io> @brian-brazil: Console templates; semantics of PromQL, service discovery, and relabeling.
-* Fabian Reinartz <fabian.reinartz@coreos.com>: PromQL parsing and evaluation; implementation of retrieval, alert notification, and service discovery.
+* Fabian Reinartz <fabian.reinartz@coreos.com> @fabxc: PromQL parsing and evaluation; implementation of retrieval, alert notification, and service discovery.
-* Julius Volz <julius.volz@gmail.com>: Remote storage integrations; web UI.
+* Julius Volz <julius.volz@gmail.com> @juliusv: Remote storage integrations; web UI.


@@ -37,9 +37,13 @@ check_license:
 	@./scripts/check_license.sh

 # TODO(fabxc): example tests temporarily removed.
-test:
+test-short:
 	@echo ">> running short tests"
-	@$(GO) test -short $(shell $(GO) list ./... | grep -v /vendor/ | grep -v examples)
+	@$(GO) test $(shell $(GO) list ./... | grep -v /vendor/ | grep -v examples)
+
+test:
+	@echo ">> running all tests"
+	@$(GO) test $(pkgs)

 format:
 	@echo ">> formatting code"


@@ -81,6 +81,7 @@ The Makefile provides several targets:
 * *build*: build the `prometheus` and `promtool` binaries
 * *test*: run the tests
+* *test-short*: run the short tests
 * *format*: format the source code
 * *vet*: check the source code for common errors
 * *assets*: rebuild the static assets


@@ -49,6 +49,7 @@ deployment:
     owner: prometheus
     commands:
       - promu crossbuild tarballs
+      - promu checksum .tarballs
       - promu release .tarballs
       - mkdir $CIRCLE_ARTIFACTS/releases/ && cp -a .tarballs/* $CIRCLE_ARTIFACTS/releases/
       - docker login -e $DOCKER_EMAIL -u $DOCKER_LOGIN -p $DOCKER_PASSWORD


@@ -24,6 +24,7 @@ import (
 	"github.com/prometheus/prometheus/config"
 	"github.com/prometheus/prometheus/promql"
 	"github.com/prometheus/prometheus/util/cli"
+	"github.com/prometheus/prometheus/util/promlint"
 )

 // CheckConfigCmd validates configuration files.
@@ -182,6 +183,42 @@ func checkRules(t cli.Term, filename string) (int, error) {
 	return len(rules), nil
 }

+var checkMetricsUsage = strings.TrimSpace(`
+usage: promtool check-metrics
+
+Pass Prometheus metrics over stdin to lint them for consistency and correctness.
+
+examples:
+
+$ cat metrics.prom | promtool check-metrics
+
+$ curl -s http://localhost:9090/metrics | promtool check-metrics
+`)
+
+// CheckMetricsCmd performs a linting pass on input metrics.
+func CheckMetricsCmd(t cli.Term, args ...string) int {
+	if len(args) != 0 {
+		t.Infof(checkMetricsUsage)
+		return 2
+	}
+
+	l := promlint.New(os.Stdin)
+	problems, err := l.Lint()
+	if err != nil {
+		t.Errorf("error while linting: %v", err)
+		return 1
+	}
+
+	for _, p := range problems {
+		t.Errorf("%s: %s", p.Metric, p.Text)
+	}
+
+	if len(problems) > 0 {
+		return 3
+	}
+
+	return 0
+}
+
 // VersionCmd prints the binaries version information.
 func VersionCmd(t cli.Term, _ ...string) int {
 	fmt.Fprintln(os.Stdout, version.Print("promtool"))
@@ -201,6 +238,11 @@ func main() {
 		Run: CheckRulesCmd,
 	})

+	app.Register("check-metrics", &cli.Command{
+		Desc: "validate metrics for correctness",
+		Run:  CheckMetricsCmd,
+	})
+
 	app.Register("version", &cli.Command{
 		Desc: "print the version of this binary",
 		Run:  VersionCmd,
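The same linting pass that backs `check-metrics` can also be driven programmatically. A minimal standalone sketch (not part of this commit) against the `util/promlint` package added later in this diff, with the metric text inlined for illustration:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/prometheus/prometheus/util/promlint"
)

func main() {
	// Two findable issues: no HELP text, and a non-base unit in the name.
	metrics := "# TYPE x_milliamperes untyped\nx_milliamperes 10\n"

	problems, err := promlint.New(strings.NewReader(metrics)).Lint()
	if err != nil {
		log.Fatalf("error while linting: %v", err)
	}
	for _, p := range problems {
		fmt.Printf("%s: %s\n", p.Metric, p.Text)
	}
	// Output:
	// x_milliamperes: no help text
	// x_milliamperes: use base unit "amperes" instead of "milliamperes"
}
```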


@@ -1,20 +1,19 @@
 {{/* vim: set ft=html: */}}
 {{/* Load Prometheus console library JS/CSS. Should go in <head> */}}
 {{ define "prom_console_head" }}
-<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/vendor/rickshaw/rickshaw.min.css?v={{ buildVersion }}">
+<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/vendor/rickshaw/rickshaw.min.css">
-<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/vendor/bootstrap-3.3.1/css/bootstrap.min.css?v={{ buildVersion }}">
+<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/vendor/bootstrap-3.3.1/css/bootstrap.min.css">
-<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/css/prom_console.css?v={{ buildVersion }}">
+<link type="text/css" rel="stylesheet" href="{{ pathPrefix }}/static/css/prom_console.css">
-<script src="{{ pathPrefix }}/static/vendor/rickshaw/vendor/d3.v3.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/vendor/rickshaw/vendor/d3.v3.js"></script>
-<script src="{{ pathPrefix }}/static/vendor/rickshaw/vendor/d3.layout.min.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/vendor/rickshaw/vendor/d3.layout.min.js"></script>
-<script src="{{ pathPrefix }}/static/vendor/rickshaw/rickshaw.min.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/vendor/rickshaw/rickshaw.min.js"></script>
-<script src="{{ pathPrefix }}/static/vendor/js/jquery.min.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/vendor/js/jquery.min.js"></script>
-<script src="{{ pathPrefix }}/static/vendor/bootstrap-3.3.1/js/bootstrap.min.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/vendor/bootstrap-3.3.1/js/bootstrap.min.js"></script>
 <script>
 var PATH_PREFIX = "{{ pathPrefix }}";
-var BUILD_VERSION = "{{ buildVersion }}";
 </script>
-<script src="{{ pathPrefix }}/static/js/prom_console.js?v={{ buildVersion }}"></script>
+<script src="{{ pathPrefix }}/static/js/prom_console.js"></script>
 {{ end }}

 {{/* Top of all pages. */}}


@@ -217,11 +217,6 @@ func (ts *TargetSet) UpdateProviders(p map[string]TargetProvider) {
 }

 func (ts *TargetSet) updateProviders(ctx context.Context, providers map[string]TargetProvider) {
-	// Lock for the entire time. This may mean up to 5 seconds until the full initial set
-	// is retrieved and applied.
-	// We could release earlier with some tweaks, but this is easier to reason about.
-	ts.mtx.Lock()
-	defer ts.mtx.Unlock()

 	// Stop all previous target providers of the target set.
 	if ts.cancelProviders != nil {
@@ -233,7 +228,9 @@ func (ts *TargetSet) updateProviders(ctx context.Context, providers map[string]T
 	// (Re-)create a fresh tgroups map to not keep stale targets around. We
 	// will retrieve all targets below anyway, so cleaning up everything is
 	// safe and doesn't inflict any additional cost.
+	ts.mtx.Lock()
 	ts.tgroups = map[string]*config.TargetGroup{}
+	ts.mtx.Unlock()

 	for name, prov := range providers {
 		wg.Add(1)
@@ -292,9 +289,6 @@ func (ts *TargetSet) updateProviders(ctx context.Context, providers map[string]T
 // update handles a target group update from a target provider identified by the name.
 func (ts *TargetSet) update(name string, tgroup *config.TargetGroup) {
-	ts.mtx.Lock()
-	defer ts.mtx.Unlock()
-
 	ts.setTargetGroup(name, tgroup)

 	select {
@@ -304,6 +298,9 @@ func (ts *TargetSet) update(name string, tgroup *config.TargetGroup) {
 }

 func (ts *TargetSet) setTargetGroup(name string, tg *config.TargetGroup) {
+	ts.mtx.Lock()
+	defer ts.mtx.Unlock()
+
 	if tg == nil {
 		return
 	}


@@ -32,6 +32,7 @@ const (
 	tritonLabel             = model.MetaLabelPrefix + "triton_"
 	tritonLabelMachineId    = tritonLabel + "machine_id"
 	tritonLabelMachineAlias = tritonLabel + "machine_alias"
+	tritonLabelMachineBrand = tritonLabel + "machine_brand"
 	tritonLabelMachineImage = tritonLabel + "machine_image"
 	tritonLabelServerId     = tritonLabel + "server_id"
 	namespace               = "prometheus"
@@ -59,6 +60,7 @@ type DiscoveryResponse struct {
 	Containers []struct {
 		ServerUUID  string `json:"server_uuid"`
 		VMAlias     string `json:"vm_alias"`
+		VMBrand     string `json:"vm_brand"`
 		VMImageUUID string `json:"vm_image_uuid"`
 		VMUUID      string `json:"vm_uuid"`
 	} `json:"containers"`
@@ -157,6 +159,7 @@ func (d *Discovery) refresh() (tg *config.TargetGroup, err error) {
 		labels := model.LabelSet{
 			tritonLabelMachineId:    model.LabelValue(container.VMUUID),
 			tritonLabelMachineAlias: model.LabelValue(container.VMAlias),
+			tritonLabelMachineBrand: model.LabelValue(container.VMBrand),
 			tritonLabelMachineImage: model.LabelValue(container.VMImageUUID),
 			tritonLabelServerId:     model.LabelValue(container.ServerUUID),
 		}


@@ -111,12 +111,14 @@ func TestTritonSDRefreshMultipleTargets(t *testing.T) {
 		{
 			"server_uuid":"44454c4c-5000-104d-8037-b7c04f5a5131",
 			"vm_alias":"server01",
+			"vm_brand":"lx",
 			"vm_image_uuid":"7b27a514-89d7-11e6-bee6-3f96f367bee7",
 			"vm_uuid":"ad466fbf-46a2-4027-9b64-8d3cdb7e9072"
 		},
 		{
 			"server_uuid":"a5894692-bd32-4ca1-908a-e2dda3c3a5e6",
 			"vm_alias":"server02",
+			"vm_brand":"kvm",
 			"vm_image_uuid":"a5894692-bd32-4ca1-908a-e2dda3c3a5e6",
 			"vm_uuid":"7b27a514-89d7-11e6-bee6-3f96f367bee7"
 		}]


@@ -1,4 +1,4 @@
-## Generic Remote Storage Example
+## Remote Write Adapter Example

 This is a simple example of how to write a server to
 receive samples from the remote storage output.
@@ -7,7 +7,7 @@ To use it:
 ```
 go build
-./example_receiver
+./example_write_adapter
 ```

 ...and then add the following to your `prometheus.yml`:
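A minimal sketch of such a receiving server (illustrative, not part of this commit): it assumes the snappy-compressed protobuf `remote.WriteRequest` body that Prometheus's remote write protocol of this era sends, and the port and path are placeholders:

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"

	"github.com/golang/protobuf/proto"
	"github.com/golang/snappy"

	"github.com/prometheus/prometheus/storage/remote"
)

func main() {
	http.HandleFunc("/receive", func(w http.ResponseWriter, r *http.Request) {
		// The request body is a snappy-compressed protobuf WriteRequest.
		compressed, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		data, err := snappy.Decode(nil, compressed)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var req remote.WriteRequest
		if err := proto.Unmarshal(data, &req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// A real adapter would translate and forward the samples here;
		// this sketch just prints each received time series.
		for _, ts := range req.Timeseries {
			log.Println(ts)
		}
	})
	log.Fatal(http.ListenAndServe(":1234", nil))
}
```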


@@ -0,0 +1,55 @@
# Remote storage adapter

This is a write adapter that receives samples via Prometheus's remote write
protocol and stores them in Graphite, InfluxDB, or OpenTSDB. It is meant as a
replacement for the built-in specific remote storage implementations that have
been removed from Prometheus.

For InfluxDB, this binary is also a read adapter that supports reading back
data through Prometheus via Prometheus's remote read protocol.

## Building

```
go build
```

## Running

Graphite example:

```
./remote_storage_adapter -graphite-address=localhost:8080
```

OpenTSDB example:

```
./remote_storage_adapter -opentsdb-url=http://localhost:8081/
```

InfluxDB example:

```
./remote_storage_adapter -influxdb-url=http://localhost:8086/ -influxdb.database=prometheus -influxdb.retention-policy=autogen
```

To show all flags:

```
./remote_storage_adapter -h
```

## Configuring Prometheus

To configure Prometheus to send samples to this binary, add the following to your `prometheus.yml`:

```yaml
# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
  - url: "http://localhost:9201/write"

# Remote read configuration (for InfluxDB only at the moment).
remote_read:
  - url: "http://localhost:9201/read"
```


@@ -124,10 +124,12 @@ func (c *Client) Read(req *remote.ReadRequest) (*remote.ReadResponse, error) {
 	}

 	resp := remote.ReadResponse{
-		Timeseries: make([]*remote.TimeSeries, 0, len(labelsToSeries)),
+		Results: []*remote.QueryResult{
+			{Timeseries: make([]*remote.TimeSeries, 0, len(labelsToSeries))},
+		},
 	}
 	for _, ts := range labelsToSeries {
-		resp.Timeseries = append(resp.Timeseries, ts)
+		resp.Results[0].Timeseries = append(resp.Results[0].Timeseries, ts)
 	}
 	return &resp, nil
 }


@@ -33,9 +33,9 @@ import (
 	influx "github.com/influxdata/influxdb/client/v2"

-	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_bridge/graphite"
-	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_bridge/influxdb"
-	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_bridge/opentsdb"
+	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_adapter/graphite"
+	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_adapter/influxdb"
+	"github.com/prometheus/prometheus/documentation/examples/remote_storage/remote_storage_adapter/opentsdb"
 	"github.com/prometheus/prometheus/storage/remote"
 )


@@ -1,55 +0,0 @@
# Remote storage bridge
This is a bridge that receives samples via Prometheus's remote write
protocol and stores them in Graphite, InfluxDB, or OpenTSDB. It is meant
as a replacement for the built-in specific remote storage implementations
that have been removed from Prometheus.
For InfluxDB, this bridge also supports reading back data through
Prometheus via Prometheus's remote read protocol.
## Building
```
go build
```
## Running
Graphite example:
```
./remote_storage_bridge -graphite-address=localhost:8080
```
OpenTSDB example:
```
./remote_storage_bridge -opentsdb-url=http://localhost:8081/
```
InfluxDB example:
```
./remote_storage_bridge -influxdb-url=http://localhost:8086/ -influxdb.database=prometheus -influxdb.retention-policy=autogen
```
To show all flags:
```
./remote_storage_bridge -h
```
## Configuring Prometheus
To configure Prometheus to send samples to this bridge, add the following to your `prometheus.yml`:
```yaml
# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
- url: "http://localhost:9201/write"
# Remote read configuration (for InfluxDB only at the moment).
remote_read:
- url: "http://localhost:9201/read"
```


@@ -369,13 +369,13 @@ func (n *Notifier) setMore() {
 	}
 }

-// Alertmanagers returns a list Alertmanager URLs.
-func (n *Notifier) Alertmanagers() []string {
+// Alertmanagers returns a slice of Alertmanager URLs.
+func (n *Notifier) Alertmanagers() []*url.URL {
 	n.mtx.RLock()
 	amSets := n.alertmanagers
 	n.mtx.RUnlock()

-	var res []string
+	var res []*url.URL

 	for _, ams := range amSets {
 		ams.mtx.RLock()
@@ -417,7 +417,7 @@ func (n *Notifier) sendAll(alerts ...*Alert) bool {
 		defer cancel()

 		go func(am alertmanager) {
-			u := am.url()
+			u := am.url().String()

 			if err := n.sendOne(ctx, ams.client, u, b); err != nil {
 				log.With("alertmanager", u).With("count", len(alerts)).Errorf("Error sending alerts: %s", err)
@@ -465,20 +465,19 @@ func (n *Notifier) Stop() {
 // alertmanager holds Alertmanager endpoint information.
 type alertmanager interface {
-	url() string
+	url() *url.URL
 }

 type alertmanagerLabels struct{ labels.Labels }

 const pathLabel = "__alerts_path__"

-func (a alertmanagerLabels) url() string {
-	u := &url.URL{
+func (a alertmanagerLabels) url() *url.URL {
+	return &url.URL{
 		Scheme: a.Get(model.SchemeLabel),
 		Host:   a.Get(model.AddressLabel),
 		Path:   a.Get(pathLabel),
 	}
-	return u.String()
 }

 // alertmanagerSet contains a set of Alertmanagers discovered via a group of service
@@ -529,7 +528,7 @@ func (s *alertmanagerSet) Sync(tgs []*config.TargetGroup) {
 	seen := map[string]struct{}{}

 	for _, am := range all {
-		us := am.url()
+		us := am.url().String()
 		if _, ok := seen[us]; ok {
 			continue
 		}
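A standalone sketch of the pattern this change adopts (host and path below are illustrative): keep the structured `*url.URL` around so scheme, host, and path stay inspectable, and render with `String()` only where a plain string is needed, such as logging or deduplication keys:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Structured form: fields stay individually accessible.
	u := &url.URL{
		Scheme: "http",
		Host:   "alertmanager.example.org:9093",
		Path:   "/api/v1/alerts",
	}

	// Render to a string only at the point of use.
	fmt.Println(u.String()) // http://alertmanager.example.org:9093/api/v1/alerts
}
```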


@@ -19,6 +19,7 @@ import (
 	"io/ioutil"
 	"net/http"
 	"net/http/httptest"
+	"net/url"
 	"testing"
 	"time"

@@ -388,6 +389,10 @@ type alertmanagerMock struct {
 	urlf func() string
 }

-func (a alertmanagerMock) url() string {
-	return a.urlf()
+func (a alertmanagerMock) url() *url.URL {
+	u, err := url.Parse(a.urlf())
+	if err != nil {
+		panic(err)
+	}
+	return u
 }


@@ -113,6 +113,9 @@ type (
 	ErrQueryTimeout string
 	// ErrQueryCanceled is returned if a query was canceled during processing.
 	ErrQueryCanceled string
+	// ErrStorage is returned if an error was encountered in the storage layer
+	// during query handling.
+	ErrStorage error
 )

 func (e ErrQueryTimeout) Error() string { return fmt.Sprintf("query timed out in %s", string(e)) }


@@ -83,6 +83,8 @@ func bucketQuantile(q float64, buckets buckets) float64 {
 		return math.NaN()
 	}

+	ensureMonotonic(buckets)
+
 	rank := q * buckets[len(buckets)-1].count
 	b := sort.Search(len(buckets)-1, func(i int) bool { return buckets[i].count >= rank })

@@ -105,7 +107,52 @@ func bucketQuantile(q float64, buckets buckets) float64 {
 	return bucketStart + (bucketEnd-bucketStart)*float64(rank/count)
 }

+// The assumption that bucket counts increase monotonically with increasing
+// upperBound may be violated during:
+//
+//   * Recording rule evaluation of histogram_quantile, especially when rate()
+//     has been applied to the underlying bucket timeseries.
+//   * Evaluation of histogram_quantile computed over federated bucket
+//     timeseries, especially when rate() has been applied.
+//
+// This is because scraped data is not made available to rule evaluation or
+// federation atomically, so some buckets are computed with data from the
+// most recent scrapes, but the other buckets are missing data from the most
+// recent scrape.
+//
+// Monotonicity is usually guaranteed because if a bucket with upper bound
+// u1 has count c1, then any bucket with a higher upper bound u > u1 must
+// have counted all c1 observations and perhaps more, so that c >= c1.
+//
+// Randomly interspersed partial sampling breaks that guarantee, and rate()
+// exacerbates it. Specifically, suppose bucket le=1000 has a count of 10 from
+// 4 samples but the bucket with le=2000 has a count of 7 from 3 samples. The
+// monotonicity is broken. It is exacerbated by rate() because under normal
+// operation, cumulative counting of buckets will cause the bucket counts to
+// diverge such that small differences from missing samples are not a problem.
+// rate() removes this divergence.
+//
+// bucketQuantile depends on that monotonicity to do a binary search for the
+// bucket with the φ-quantile count, so breaking the monotonicity
+// guarantee causes bucketQuantile() to return undefined (nonsense) results.
+//
+// As a somewhat hacky solution until ingestion is atomic per scrape, we
+// calculate the "envelope" of the histogram buckets, essentially removing
+// any decreases in the count between successive buckets.
+func ensureMonotonic(buckets buckets) {
+	max := buckets[0].count
+	for i := 1; i < len(buckets); i++ {
+		switch {
+		case buckets[i].count > max:
+			max = buckets[i].count
+		case buckets[i].count < max:
+			buckets[i].count = max
+		}
+	}
+}
+
-// qauntile calculates the given quantile of a Vector of samples.
+// quantile calculates the given quantile of a vector of samples.
 //
 // The Vector will be sorted.
 // If 'values' has zero elements, NaN is returned.
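To make the envelope pass concrete, a standalone sketch (simplified types, not the package's internal `buckets`) applying the same loop to the counts from the example in the comment above:

```go
package main

import "fmt"

// bucket mirrors the minimal shape ensureMonotonic needs: an upper bound
// and a cumulative count. Names here are illustrative.
type bucket struct {
	upperBound float64
	count      float64
}

func main() {
	// le=1000 saw 10 observations, but le=2000 only 7 because it missed
	// the most recent scrape; le=3000 caught up again.
	buckets := []bucket{{1000, 10}, {2000, 7}, {3000, 12}}

	// The envelope pass: carry the running maximum forward so counts
	// never decrease with increasing upper bounds.
	max := buckets[0].count
	for i := 1; i < len(buckets); i++ {
		switch {
		case buckets[i].count > max:
			max = buckets[i].count
		case buckets[i].count < max:
			buckets[i].count = max
		}
	}

	fmt.Println(buckets) // [{1000 10} {2000 10} {3000 12}]
}
```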


@@ -139,3 +139,21 @@ eval instant at 50m histogram_quantile(0.5, rate(request_duration_seconds_bucket
 	{instance="ins2", job="job1"} 0.13333333333333333
 	{instance="ins1", job="job2"} 0.1
 	{instance="ins2", job="job2"} 0.11666666666666667

# A histogram with nonmonotonic bucket counts. This may happen when recording
# rule evaluation or federation races scrape ingestion, causing some bucket
# counts to be derived from fewer samples. The wrong answer we want to avoid
# is for histogram_quantile(0.99, nonmonotonic_bucket) to return ~1000 instead
# of 1.
load 5m
	nonmonotonic_bucket{le="0.1"}   0+1x10
	nonmonotonic_bucket{le="1"}     0+9x10
	nonmonotonic_bucket{le="10"}    0+8x10
	nonmonotonic_bucket{le="100"}   0+8x10
	nonmonotonic_bucket{le="1000"}  0+9x10
	nonmonotonic_bucket{le="+Inf"}  0+9x10

# Nonmonotonic buckets
eval instant at 50m histogram_quantile(0.99, nonmonotonic_bucket)
	{} 0.989875
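Working through the expected value: at 50m each series has taken ten steps, so the raw cumulative counts are 10, 90, 80, 80, 90, and 90. The envelope lifts the two 80s to 90, leaving the 0.99-quantile rank at 0.99 · 90 = 89.1, which falls in the le="1" bucket; linear interpolation within that bucket gives 0.1 + (1 − 0.1) · (89.1 − 10) / (90 − 10) = 0.989875, rather than a nonsense value near 1000.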


@@ -184,7 +184,7 @@ func (sp *scrapePool) reload(cfg *config.ScrapeConfig) {
 	for fp, oldLoop := range sp.loops {
 		var (
 			t = sp.targets[fp]
-			s = &targetScraper{Target: t, client: sp.client}
+			s = &targetScraper{Target: t, client: sp.client, timeout: timeout}
 			newLoop = sp.newLoop(sp.ctx, s,
 				func() storage.Appender {
 					return sp.sampleAppender(t)
@@ -253,7 +253,7 @@ func (sp *scrapePool) sync(targets []*Target) {
 		uniqueTargets[hash] = struct{}{}

 		if _, ok := sp.targets[hash]; !ok {
-			s := &targetScraper{Target: t, client: sp.client}
+			s := &targetScraper{Target: t, client: sp.client, timeout: timeout}
 			l := sp.newLoop(sp.ctx, s,
 				func() storage.Appender {
 					return sp.sampleAppender(t)
@@ -356,6 +356,7 @@ type targetScraper struct {
 	client *http.Client
 	req    *http.Request
+	timeout time.Duration

 	gzipr *gzip.Reader
 	buf   *bufio.Reader
@@ -372,13 +373,13 @@ func (s *targetScraper) scrape(ctx context.Context, w io.Writer) error {
 			return err
 		}
-		// Disable accept header to always negotiate for text format.
-		// req.Header.Add("Accept", acceptHeader)
+		req.Header.Add("Accept", acceptHeader)
 		req.Header.Add("Accept-Encoding", "gzip")
 		req.Header.Set("User-Agent", userAgentHeader)
+		req.Header.Set("X-Prometheus-Scrape-Timeout-Seconds", fmt.Sprintf("%f", s.timeout.Seconds()))

 		s.req = req
 	}

 	resp, err := ctxhttp.Do(ctx, s.client, s.req)
 	if err != nil {
 		return err

@@ -435,8 +435,20 @@ func TestScrapeLoopRun(t *testing.T) {
 }

 func TestTargetScraperScrapeOK(t *testing.T) {
+	const (
+		configTimeout   = 1500 * time.Millisecond
+		expectedTimeout = "1.500000"
+	)
+
 	server := httptest.NewServer(
 		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			timeout := r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds")
+			if timeout != expectedTimeout {
+				t.Errorf("Scrape timeout did not match expected timeout")
+				t.Errorf("Expected: %v", expectedTimeout)
+				t.Fatalf("Got: %v", timeout)
+			}
+
 			w.Header().Set("Content-Type", `text/plain; version=0.0.4`)
 			w.Write([]byte("metric_a 1\nmetric_b 2\n"))
 		}),
@@ -456,6 +468,7 @@ func TestTargetScraperScrapeOK(t *testing.T) {
 			),
 		},
 		client:  http.DefaultClient,
+		timeout: configTimeout,
 	}
 	var buf bytes.Buffer

util/promlint/promlint.go (new file, 268 lines)

@@ -0,0 +1,268 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package promlint provides a linter for Prometheus metrics.
package promlint

import (
	"fmt"
	"io"
	"sort"
	"strings"

	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// A Linter is a Prometheus metrics linter. It identifies issues with metric
// names, types, and metadata, and reports them to the caller.
type Linter struct {
	r io.Reader
}

// A Problem is an issue detected by a Linter.
type Problem struct {
	// The name of the metric indicated by this Problem.
	Metric string

	// A description of the issue for this Problem.
	Text string
}

// problems is a slice of Problems with a helper method to easily append
// additional Problems to the slice.
type problems []Problem

// Add appends a new Problem to the slice for the specified metric, with
// the specified issue text.
func (p *problems) Add(mf dto.MetricFamily, text string) {
	*p = append(*p, Problem{
		Metric: mf.GetName(),
		Text:   text,
	})
}

// New creates a new Linter that reads an input stream of Prometheus metrics.
// Only the text exposition format is supported.
func New(r io.Reader) *Linter {
	return &Linter{
		r: r,
	}
}

// Lint performs a linting pass, returning a slice of Problems indicating any
// issues found in the metrics stream. The slice is sorted by metric name
// and issue description.
func (l *Linter) Lint() ([]Problem, error) {
	// TODO(mdlayher): support for protobuf exposition format?
	d := expfmt.NewDecoder(l.r, expfmt.FmtText)

	var problems []Problem

	var mf dto.MetricFamily
	for {
		if err := d.Decode(&mf); err != nil {
			if err == io.EOF {
				break
			}

			return nil, err
		}

		problems = append(problems, lint(mf)...)
	}

	// Ensure deterministic output.
	sort.SliceStable(problems, func(i, j int) bool {
		if problems[i].Metric < problems[j].Metric {
			return true
		}

		return problems[i].Text < problems[j].Text
	})

	return problems, nil
}

// lint is the entry point for linting a single metric.
func lint(mf dto.MetricFamily) []Problem {
	fns := []func(mf dto.MetricFamily) []Problem{
		lintHelp,
		lintMetricUnits,
		lintCounter,
		lintHistogramSummaryReserved,
	}

	var problems []Problem
	for _, fn := range fns {
		problems = append(problems, fn(mf)...)
	}

	// TODO(mdlayher): lint rules for specific metrics types.

	return problems
}

// lintHelp detects issues related to the help text for a metric.
func lintHelp(mf dto.MetricFamily) []Problem {
	var problems problems

	// Expect all metrics to have help text available.
	if mf.Help == nil {
		problems.Add(mf, "no help text")
	}

	return problems
}

// lintMetricUnits detects issues with metric unit names.
func lintMetricUnits(mf dto.MetricFamily) []Problem {
	var problems problems

	unit, base, ok := metricUnits(*mf.Name)
	if !ok {
		// No known units detected.
		return nil
	}

	// Unit is already a base unit.
	if unit == base {
		return nil
	}

	problems.Add(mf, fmt.Sprintf("use base unit %q instead of %q", base, unit))

	return problems
}

// lintCounter detects issues specific to counters, as well as patterns that should
// only be used with counters.
func lintCounter(mf dto.MetricFamily) []Problem {
	var problems problems

	isCounter := mf.GetType() == dto.MetricType_COUNTER
	isUntyped := mf.GetType() == dto.MetricType_UNTYPED
	hasTotalSuffix := strings.HasSuffix(mf.GetName(), "_total")

	switch {
	case isCounter && !hasTotalSuffix:
		problems.Add(mf, `counter metrics should have "_total" suffix`)
	case !isUntyped && !isCounter && hasTotalSuffix:
		problems.Add(mf, `non-counter metrics should not have "_total" suffix`)
	}

	return problems
}

// lintHistogramSummaryReserved detects when other types of metrics use names or labels
// reserved for use by histograms and/or summaries.
func lintHistogramSummaryReserved(mf dto.MetricFamily) []Problem {
	// These rules do not apply to untyped metrics.
	t := mf.GetType()
	if t == dto.MetricType_UNTYPED {
		return nil
	}

	var problems problems

	isHistogram := t == dto.MetricType_HISTOGRAM
	isSummary := t == dto.MetricType_SUMMARY

	n := mf.GetName()

	if !isHistogram && strings.HasSuffix(n, "_bucket") {
		problems.Add(mf, `non-histogram metrics should not have "_bucket" suffix`)
	}
	if !isHistogram && !isSummary && strings.HasSuffix(n, "_count") {
		problems.Add(mf, `non-histogram and non-summary metrics should not have "_count" suffix`)
	}
	if !isHistogram && !isSummary && strings.HasSuffix(n, "_sum") {
		problems.Add(mf, `non-histogram and non-summary metrics should not have "_sum" suffix`)
	}

	for _, m := range mf.GetMetric() {
		for _, l := range m.GetLabel() {
			ln := l.GetName()

			if !isHistogram && ln == "le" {
				problems.Add(mf, `non-histogram metrics should not have "le" label`)
			}
			if !isSummary && ln == "quantile" {
				problems.Add(mf, `non-summary metrics should not have "quantile" label`)
			}
		}
	}

	return problems
}

// metricUnits attempts to detect known unit types used as part of a metric name,
// e.g. "foo_bytes_total" or "bar_baz_milligrams".
func metricUnits(m string) (unit string, base string, ok bool) {
	ss := strings.Split(m, "_")

	for _, u := range baseUnits {
		// Also check for "no prefix".
		for _, p := range append(unitPrefixes, "") {
			for _, s := range ss {
				// Attempt to explicitly match a known unit with a known prefix,
				// as some words may look like "units" when matching suffix.
				//
				// As an example, "thermometers" should not match "meters", but
				// "kilometers" should.
				if s == p+u {
					return p + u, u, true
				}
			}
		}
	}

	return "", "", false
}

// Units and their possible prefixes recognized by this library. More can be
// added over time as needed.
var (
	baseUnits = []string{
		"amperes",
		"bytes",
		"candela",
		"grams",
		"kelvin", // Both plural and non-plural form allowed.
		"kelvins",
		"meters", // Both American and international spelling permitted.
		"metres",
		"moles",
		"seconds",
	}

	unitPrefixes = []string{
		"pico",
		"nano",
		"micro",
		"milli",
		"centi",
		"deci",
		"deca",
		"hecto",
		"kilo",
		"kibi",
		"mega",
		"mibi",
		"giga",
		"gibi",
		"tera",
		"tebi",
		"peta",
		"pebi",
	}
)


@@ -0,0 +1,497 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package promlint_test

import (
	"reflect"
	"strings"
	"testing"

	"github.com/prometheus/prometheus/util/promlint"
)

func TestLintNoHelpText(t *testing.T) {
	const msg = "no help text"

	tests := []struct {
		name     string
		in       string
		problems []promlint.Problem
	}{
		{
			name: "no help",
			in: `
# TYPE go_goroutines gauge
go_goroutines 24
`,
			problems: []promlint.Problem{{
				Metric: "go_goroutines",
				Text:   msg,
			}},
		},
		{
			name: "empty help",
			in: `
# HELP go_goroutines
# TYPE go_goroutines gauge
go_goroutines 24
`,
			problems: []promlint.Problem{{
				Metric: "go_goroutines",
				Text:   msg,
			}},
		},
		{
			name: "no help and empty help",
			in: `
# HELP go_goroutines
# TYPE go_goroutines gauge
go_goroutines 24
# TYPE go_threads gauge
go_threads 10
`,
			problems: []promlint.Problem{
				{
					Metric: "go_goroutines",
					Text:   msg,
				},
				{
					Metric: "go_threads",
					Text:   msg,
				},
			},
		},
		{
			name: "OK",
			in: `
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 24
`,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			l := promlint.New(strings.NewReader(tt.in))
			problems, err := l.Lint()
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}

			if want, got := tt.problems, problems; !reflect.DeepEqual(want, got) {
				t.Fatalf("unexpected problems:\n- want: %v\n- got: %v",
					want, got)
			}
		})
	}
}

func TestLintMetricUnits(t *testing.T) {
	tests := []struct {
		name     string
		in       string
		problems []promlint.Problem
	}{
		{
			name: "amperes",
			in: `
# HELP x_milliamperes Test metric.
# TYPE x_milliamperes untyped
x_milliamperes 10
`,
			problems: []promlint.Problem{{
				Metric: "x_milliamperes",
				Text:   `use base unit "amperes" instead of "milliamperes"`,
			}},
		},
		{
			name: "bytes",
			in: `
# HELP x_gigabytes Test metric.
# TYPE x_gigabytes untyped
x_gigabytes 10
`,
			problems: []promlint.Problem{{
				Metric: "x_gigabytes",
				Text:   `use base unit "bytes" instead of "gigabytes"`,
			}},
		},
		{
			name: "candela",
			in: `
# HELP x_kilocandela Test metric.
# TYPE x_kilocandela untyped
x_kilocandela 10
`,
			problems: []promlint.Problem{{
				Metric: "x_kilocandela",
				Text:   `use base unit "candela" instead of "kilocandela"`,
			}},
		},
		{
			name: "grams",
			in: `
# HELP x_kilograms Test metric.
# TYPE x_kilograms untyped
x_kilograms 10
`,
			problems: []promlint.Problem{{
				Metric: "x_kilograms",
				Text:   `use base unit "grams" instead of "kilograms"`,
			}},
		},
		{
			name: "kelvin",
			in: `
# HELP x_nanokelvin Test metric.
# TYPE x_nanokelvin untyped
x_nanokelvin 10
`,
			problems: []promlint.Problem{{
				Metric: "x_nanokelvin",
				Text:   `use base unit "kelvin" instead of "nanokelvin"`,
			}},
		},
		{
			name: "kelvins",
			in: `
# HELP x_nanokelvins Test metric.
# TYPE x_nanokelvins untyped
x_nanokelvins 10
`,
			problems: []promlint.Problem{{
				Metric: "x_nanokelvins",
				Text:   `use base unit "kelvins" instead of "nanokelvins"`,
			}},
		},
		{
			name: "meters",
			in: `
# HELP x_kilometers Test metric.
# TYPE x_kilometers untyped
x_kilometers 10
`,
			problems: []promlint.Problem{{
				Metric: "x_kilometers",
				Text:   `use base unit "meters" instead of "kilometers"`,
			}},
		},
		{
			name: "metres",
			in: `
# HELP x_kilometres Test metric.
# TYPE x_kilometres untyped
x_kilometres 10
`,
			problems: []promlint.Problem{{
				Metric: "x_kilometres",
				Text:   `use base unit "metres" instead of "kilometres"`,
			}},
		},
		{
			name: "moles",
			in: `
# HELP x_picomoles Test metric.
# TYPE x_picomoles untyped
x_picomoles 10
`,
			problems: []promlint.Problem{{
				Metric: "x_picomoles",
				Text:   `use base unit "moles" instead of "picomoles"`,
			}},
		},
		{
			name: "seconds",
			in: `
# HELP x_microseconds Test metric.
# TYPE x_microseconds untyped
x_microseconds 10
`,
			problems: []promlint.Problem{{
				Metric: "x_microseconds",
				Text:   `use base unit "seconds" instead of "microseconds"`,
			}},
		},
		{
			name: "OK",
			in: `
# HELP thermometers_kelvin Test metric with name that looks like "meters".
# TYPE thermometers_kelvin untyped
thermometers_kelvin 0
`,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			l := promlint.New(strings.NewReader(tt.in))
			problems, err := l.Lint()
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}

			if want, got := tt.problems, problems; !reflect.DeepEqual(want, got) {
				t.Fatalf("unexpected problems:\n- want: %v\n- got: %v",
					want, got)
			}
		})
	}
}

func TestLintCounter(t *testing.T) {
	tests := []struct {
		name     string
		in       string
		problems []promlint.Problem
	}{
		{
			name: "counter without _total suffix",
			in: `
# HELP x_bytes Test metric.
# TYPE x_bytes counter
x_bytes 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes",
				Text:   `counter metrics should have "_total" suffix`,
			}},
		},
		{
			name: "gauge with _total suffix",
			in: `
# HELP x_bytes_total Test metric.
# TYPE x_bytes_total gauge
x_bytes_total 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes_total",
				Text:   `non-counter metrics should not have "_total" suffix`,
			}},
		},
		{
			name: "counter with _total suffix",
			in: `
# HELP x_bytes_total Test metric.
# TYPE x_bytes_total counter
x_bytes_total 10
`,
		},
		{
			name: "gauge without _total suffix",
			in: `
# HELP x_bytes Test metric.
# TYPE x_bytes gauge
x_bytes 10
`,
		},
		{
			name: "untyped with _total suffix",
			in: `
# HELP x_bytes_total Test metric.
# TYPE x_bytes_total untyped
x_bytes_total 10
`,
		},
		{
			name: "untyped without _total suffix",
			in: `
# HELP x_bytes Test metric.
# TYPE x_bytes untyped
x_bytes 10
`,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			l := promlint.New(strings.NewReader(tt.in))
			problems, err := l.Lint()
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}

			if want, got := tt.problems, problems; !reflect.DeepEqual(want, got) {
				t.Fatalf("unexpected problems:\n- want: %v\n- got: %v",
					want, got)
			}
		})
	}
}

func TestLintHistogramSummaryReserved(t *testing.T) {
	tests := []struct {
		name     string
		in       string
		problems []promlint.Problem
	}{
		{
			name: "gauge with _bucket suffix",
			in: `
# HELP x_bytes_bucket Test metric.
# TYPE x_bytes_bucket gauge
x_bytes_bucket 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes_bucket",
				Text:   `non-histogram metrics should not have "_bucket" suffix`,
			}},
		},
		{
			name: "gauge with _count suffix",
			in: `
# HELP x_bytes_count Test metric.
# TYPE x_bytes_count gauge
x_bytes_count 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes_count",
				Text:   `non-histogram and non-summary metrics should not have "_count" suffix`,
			}},
		},
		{
			name: "gauge with _sum suffix",
			in: `
# HELP x_bytes_sum Test metric.
# TYPE x_bytes_sum gauge
x_bytes_sum 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes_sum",
				Text:   `non-histogram and non-summary metrics should not have "_sum" suffix`,
			}},
		},
		{
			name: "gauge with le label",
			in: `
# HELP x_bytes Test metric.
# TYPE x_bytes gauge
x_bytes{le="1"} 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes",
				Text:   `non-histogram metrics should not have "le" label`,
			}},
		},
		{
			name: "gauge with quantile label",
			in: `
# HELP x_bytes Test metric.
# TYPE x_bytes gauge
x_bytes{quantile="1"} 10
`,
			problems: []promlint.Problem{{
				Metric: "x_bytes",
				Text:   `non-summary metrics should not have "quantile" label`,
			}},
		},
		{
			name: "histogram with quantile label",
			in: `
# HELP tsdb_compaction_duration Duration of compaction runs.
# TYPE tsdb_compaction_duration histogram
tsdb_compaction_duration_bucket{le="0.005",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.01",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.025",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.05",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.1",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.25",quantile="0.01"} 0
tsdb_compaction_duration_bucket{le="0.5",quantile="0.01"} 57
tsdb_compaction_duration_bucket{le="1",quantile="0.01"} 68
tsdb_compaction_duration_bucket{le="2.5",quantile="0.01"} 69
tsdb_compaction_duration_bucket{le="5",quantile="0.01"} 69
tsdb_compaction_duration_bucket{le="10",quantile="0.01"} 69
tsdb_compaction_duration_bucket{le="+Inf",quantile="0.01"} 69
tsdb_compaction_duration_sum 28.740810936000006
tsdb_compaction_duration_count 69
`,
			problems: []promlint.Problem{{
				Metric: "tsdb_compaction_duration",
				Text:   `non-summary metrics should not have "quantile" label`,
			}},
		},
		{
			name: "summary with le label",
			in: `
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0",le="0.01"} 4.2365e-05
go_gc_duration_seconds{quantile="0.25",le="0.01"} 8.1492e-05
go_gc_duration_seconds{quantile="0.5",le="0.01"} 0.000100656
go_gc_duration_seconds{quantile="0.75",le="0.01"} 0.000113913
go_gc_duration_seconds{quantile="1",le="0.01"} 0.021754305
go_gc_duration_seconds_sum 1.769429004
go_gc_duration_seconds_count 5962
`,
			problems: []promlint.Problem{{
				Metric: "go_gc_duration_seconds",
				Text:   `non-histogram metrics should not have "le" label`,
			}},
		},
		{
			name: "histogram OK",
			in: `
# HELP tsdb_compaction_duration Duration of compaction runs.
# TYPE tsdb_compaction_duration histogram
tsdb_compaction_duration_bucket{le="0.005"} 0
tsdb_compaction_duration_bucket{le="0.01"} 0
tsdb_compaction_duration_bucket{le="0.025"} 0
tsdb_compaction_duration_bucket{le="0.05"} 0
tsdb_compaction_duration_bucket{le="0.1"} 0
tsdb_compaction_duration_bucket{le="0.25"} 0
tsdb_compaction_duration_bucket{le="0.5"} 57
tsdb_compaction_duration_bucket{le="1"} 68
tsdb_compaction_duration_bucket{le="2.5"} 69
tsdb_compaction_duration_bucket{le="5"} 69
tsdb_compaction_duration_bucket{le="10"} 69
tsdb_compaction_duration_bucket{le="+Inf"} 69
tsdb_compaction_duration_sum 28.740810936000006
tsdb_compaction_duration_count 69
`,
		},
		{
			name: "summary OK",
			in: `
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.2365e-05
go_gc_duration_seconds{quantile="0.25"} 8.1492e-05
go_gc_duration_seconds{quantile="0.5"} 0.000100656
go_gc_duration_seconds{quantile="0.75"} 0.000113913
go_gc_duration_seconds{quantile="1"} 0.021754305
go_gc_duration_seconds_sum 1.769429004
go_gc_duration_seconds_count 5962
`,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			l := promlint.New(strings.NewReader(tt.in))
			problems, err := l.Lint()
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}

			if want, got := tt.problems, problems; !reflect.DeepEqual(want, got) {
				t.Fatalf("unexpected problems:\n- want: %v\n- got: %v",
					want, got)
			}
		})
	}
}


@@ -30,16 +30,8 @@ type Counter interface {
 	Metric
 	Collector

-	// Set is used to set the Counter to an arbitrary value. It is only used
-	// if you have to transfer a value from an external counter into this
-	// Prometheus metric. Do not use it for regular handling of a
-	// Prometheus counter (as it can be used to break the contract of
-	// monotonically increasing values).
-	//
-	// Deprecated: Use NewConstMetric to create a counter for an external
-	// value. A Counter should never be set.
-	Set(float64)
-	// Inc increments the counter by 1.
+	// Inc increments the counter by 1. Use Add to increment it by arbitrary
+	// non-negative values.
 	Inc()
 	// Add adds the given value to the counter. It panics if the value is <
 	// 0.


@@ -16,20 +16,15 @@ package prometheus
 import (
 	"errors"
 	"fmt"
-	"regexp"
 	"sort"
 	"strings"

 	"github.com/golang/protobuf/proto"
+	"github.com/prometheus/common/model"

 	dto "github.com/prometheus/client_model/go"
 )

-var (
-	metricNameRE = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_:]*$`)
-	labelNameRE  = regexp.MustCompile("^[a-zA-Z_][a-zA-Z0-9_]*$")
-)
-
 // reservedLabelPrefix is a prefix which is not legal in user-supplied
 // label names.
 const reservedLabelPrefix = "__"
@@ -78,7 +73,7 @@ type Desc struct {
 	// Help string. Each Desc with the same fqName must have the same
 	// dimHash.
 	dimHash uint64

-	// err is an error that occured during construction. It is reported on
+	// err is an error that occurred during construction. It is reported on
 	// registration time.
 	err error
 }
@@ -103,7 +98,7 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
 		d.err = errors.New("empty help string")
 		return d
 	}
-	if !metricNameRE.MatchString(fqName) {
+	if !model.IsValidMetricName(model.LabelValue(fqName)) {
 		d.err = fmt.Errorf("%q is not a valid metric name", fqName)
 		return d
 	}
@@ -200,6 +195,6 @@ func (d *Desc) String() string {
 }

 func checkLabelName(l string) bool {
-	return labelNameRE.MatchString(l) &&
+	return model.LabelName(l).IsValid() &&
 		!strings.HasPrefix(l, reservedLabelPrefix)
 }


@ -17,7 +17,7 @@
// Pushgateway (package push). // Pushgateway (package push).
// //
// All exported functions and methods are safe to be used concurrently unless // All exported functions and methods are safe to be used concurrently unless
//specified otherwise. // specified otherwise.
// //
// A Basic Example // A Basic Example
// //
@ -59,7 +59,7 @@
// // The Handler function provides a default handler to expose metrics // // The Handler function provides a default handler to expose metrics
// // via an HTTP server. "/metrics" is the usual endpoint for that. // // via an HTTP server. "/metrics" is the usual endpoint for that.
// http.Handle("/metrics", promhttp.Handler()) // http.Handle("/metrics", promhttp.Handler())
// http.ListenAndServe(":8080", nil) // log.Fatal(http.ListenAndServe(":8080", nil))
// } // }
// //
// //
@ -69,7 +69,7 @@
// Metrics // Metrics
// //
// The number of exported identifiers in this package might appear a bit // The number of exported identifiers in this package might appear a bit
// overwhelming. Hovever, in addition to the basic plumbing shown in the example // overwhelming. However, in addition to the basic plumbing shown in the example
// above, you only need to understand the different metric types and their // above, you only need to understand the different metric types and their
// vector versions for basic usage. // vector versions for basic usage.
// //
@ -95,8 +95,8 @@
// SummaryVec, HistogramVec, and UntypedVec are not. // SummaryVec, HistogramVec, and UntypedVec are not.
// //
// To create instances of Metrics and their vector versions, you need a suitable // To create instances of Metrics and their vector versions, you need a suitable
// …Opts struct, i.e. GaugeOpts, CounterOpts, SummaryOpts, // …Opts struct, i.e. GaugeOpts, CounterOpts, SummaryOpts, HistogramOpts, or
// HistogramOpts, or UntypedOpts. // UntypedOpts.
// //
// Custom Collectors and constant Metrics // Custom Collectors and constant Metrics
// //
@ -114,8 +114,8 @@
// Metric instances “on the fly” using NewConstMetric, NewConstHistogram, and // Metric instances “on the fly” using NewConstMetric, NewConstHistogram, and
// NewConstSummary (and their respective Must… versions). That will happen in // NewConstSummary (and their respective Must… versions). That will happen in
// the Collect method. The Describe method has to return separate Desc // the Collect method. The Describe method has to return separate Desc
// instances, representative of the “throw-away” metrics to be created // instances, representative of the “throw-away” metrics to be created later.
// later. NewDesc comes in handy to create those Desc instances. // NewDesc comes in handy to create those Desc instances.
// //
// The Collector example illustrates the use case. You can also look at the // The Collector example illustrates the use case. You can also look at the
// source code of the processCollector (mirroring process metrics), the // source code of the processCollector (mirroring process metrics), the
@ -129,32 +129,32 @@
// Advanced Uses of the Registry // Advanced Uses of the Registry
// //
// While MustRegister is the by far most common way of registering a Collector, // While MustRegister is the by far most common way of registering a Collector,
// sometimes you might want to handle the errors the registration might // sometimes you might want to handle the errors the registration might cause.
// cause. As suggested by the name, MustRegister panics if an error occurs. With // As suggested by the name, MustRegister panics if an error occurs. With the
// the Register function, the error is returned and can be handled. // Register function, the error is returned and can be handled.
// //
// An error is returned if the registered Collector is incompatible or // An error is returned if the registered Collector is incompatible or
// inconsistent with already registered metrics. The registry aims for // inconsistent with already registered metrics. The registry aims for
// consistency of the collected metrics according to the Prometheus data // consistency of the collected metrics according to the Prometheus data model.
// model. Inconsistencies are ideally detected at registration time, not at // Inconsistencies are ideally detected at registration time, not at collect
// collect time. The former will usually be detected at start-up time of a // time. The former will usually be detected at start-up time of a program,
// program, while the latter will only happen at scrape time, possibly not even // while the latter will only happen at scrape time, possibly not even on the
// on the first scrape if the inconsistency only becomes relevant later. That is // first scrape if the inconsistency only becomes relevant later. That is the
// the main reason why a Collector and a Metric have to describe themselves to // main reason why a Collector and a Metric have to describe themselves to the
// the registry. // registry.
// //
// So far, everything we did operated on the so-called default registry, as it // So far, everything we did operated on the so-called default registry, as it
// can be found in the global DefaultRegistry variable. With NewRegistry, you // can be found in the global DefaultRegistry variable. With NewRegistry, you
// can create a custom registry, or you can even implement the Registerer or // can create a custom registry, or you can even implement the Registerer or
// Gatherer interfaces yourself. The methods Register and Unregister work in // Gatherer interfaces yourself. The methods Register and Unregister work in the
// the same way on a custom registry as the global functions Register and // same way on a custom registry as the global functions Register and Unregister
// Unregister on the default registry. // on the default registry.
// //
// There are a number of uses for custom registries: You can use registries // There are a number of uses for custom registries: You can use registries with
// with special properties, see NewPedanticRegistry. You can avoid global state, // special properties, see NewPedanticRegistry. You can avoid global state, as
// as it is imposed by the DefaultRegistry. You can use multiple registries at // it is imposed by the DefaultRegistry. You can use multiple registries at the
// the same time to expose different metrics in different ways. You can use // same time to expose different metrics in different ways. You can use separate
// separate registries for testing purposes. // registries for testing purposes.
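To make the distinction concrete, here is a minimal sketch (metric and variable names invented for illustration) of registering on a custom registry and handling the error that MustRegister would instead turn into a panic:

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	reg := prometheus.NewRegistry()
	temp := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "cpu_temperature_celsius", // illustrative name
		Help: "Current CPU temperature.",
	})
	// Register returns an error where MustRegister would panic.
	if err := reg.Register(temp); err != nil {
		log.Fatalln("registration failed:", err)
	}
}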
// //
// Also note that the DefaultRegistry comes registered with a Collector for Go // Also note that the DefaultRegistry comes registered with a Collector for Go
// runtime metrics (via NewGoCollector) and a Collector for process metrics (via // runtime metrics (via NewGoCollector) and a Collector for process metrics (via
@ -166,16 +166,20 @@
// The Registry implements the Gatherer interface. The caller of the Gather // The Registry implements the Gatherer interface. The caller of the Gather
// method can then expose the gathered metrics in some way. Usually, the metrics // method can then expose the gathered metrics in some way. Usually, the metrics
// are served via HTTP on the /metrics endpoint. That's happening in the example // are served via HTTP on the /metrics endpoint. That's happening in the example
// above. The tools to expose metrics via HTTP are in the promhttp // above. The tools to expose metrics via HTTP are in the promhttp sub-package.
// sub-package. (The top-level functions in the prometheus package are // (The top-level functions in the prometheus package are deprecated.)
// deprecated.)
// //
// Pushing to the Pushgateway // Pushing to the Pushgateway
// //
// Functions for pushing to the Pushgateway can be found in the push sub-package. // Functions for pushing to the Pushgateway can be found in the push sub-package.
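A hedged sketch of such a push, assuming the FromGatherer function of the push sub-package as it exists in this era; the job name and Pushgateway URL are illustrative:

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func pushMetrics() {
	// Push everything gathered by the default Gatherer, with no
	// extra grouping labels.
	if err := push.FromGatherer(
		"my_batch_job", nil,
		"http://pushgateway:9091",
		prometheus.DefaultGatherer,
	); err != nil {
		log.Println("push failed:", err)
	}
}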
// //
// Graphite Bridge
//
// Functions and examples to push metrics from a Gatherer to Graphite can be
// found in the graphite sub-package.
//
// Other Means of Exposition // Other Means of Exposition
// //
// More ways of exposing metrics can easily be added. Sending metrics to // More ways of exposing metrics can easily be added by following the approaches
// Graphite would be an example that will soon be implemented. // of the existing implementations.
package prometheus package prometheus
@ -27,16 +27,21 @@ type Gauge interface {
// Set sets the Gauge to an arbitrary value. // Set sets the Gauge to an arbitrary value.
Set(float64) Set(float64)
// Inc increments the Gauge by 1. // Inc increments the Gauge by 1. Use Add to increment it by arbitrary
// values.
Inc() Inc()
// Dec decrements the Gauge by 1. // Dec decrements the Gauge by 1. Use Sub to decrement it by arbitrary
// values.
Dec() Dec()
// Add adds the given value to the Gauge. (The value can be // Add adds the given value to the Gauge. (The value can be negative,
// negative, resulting in a decrease of the Gauge.) // resulting in a decrease of the Gauge.)
Add(float64) Add(float64)
// Sub subtracts the given value from the Gauge. (The value can be // Sub subtracts the given value from the Gauge. (The value can be
// negative, resulting in an increase of the Gauge.) // negative, resulting in an increase of the Gauge.)
Sub(float64) Sub(float64)
// SetToCurrentTime sets the Gauge to the current Unix time in seconds.
SetToCurrentTime()
} }
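A short sketch of the extended interface in use, assuming a client_golang version that includes this change (metric names invented for illustration):

func exampleGauges() {
	startTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "worker_start_time_seconds",
		Help: "Unix time at which the worker started.",
	})
	startTime.SetToCurrentTime() // new in this version

	inFlight := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "worker_in_flight_requests",
		Help: "Requests currently being served.",
	})
	inFlight.Inc() // same as inFlight.Add(1)
	inFlight.Dec() // same as inFlight.Sub(1)

	prometheus.MustRegister(startTime, inFlight)
}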
// GaugeOpts is an alias for Opts. See there for doc comments. // GaugeOpts is an alias for Opts. See there for doc comments.
@ -8,7 +8,8 @@ import (
) )
type goCollector struct { type goCollector struct {
goroutines Gauge goroutinesDesc *Desc
threadsDesc *Desc
gcDesc *Desc gcDesc *Desc
// metrics to describe and collect // metrics to describe and collect
@ -19,11 +20,14 @@ type goCollector struct {
// go process. // go process.
func NewGoCollector() Collector { func NewGoCollector() Collector {
return &goCollector{ return &goCollector{
goroutines: NewGauge(GaugeOpts{ goroutinesDesc: NewDesc(
Namespace: "go", "go_goroutines",
Name: "goroutines", "Number of goroutines that currently exist.",
Help: "Number of goroutines that currently exist.", nil, nil),
}), threadsDesc: NewDesc(
"go_threads",
"Number of OS threads created",
nil, nil),
gcDesc: NewDesc( gcDesc: NewDesc(
"go_gc_duration_seconds", "go_gc_duration_seconds",
"A summary of the GC invocation durations.", "A summary of the GC invocation durations.",
@ -48,7 +52,7 @@ func NewGoCollector() Collector {
}, { }, {
desc: NewDesc( desc: NewDesc(
memstatNamespace("sys_bytes"), memstatNamespace("sys_bytes"),
"Number of bytes obtained by system. Sum of all system allocations.", "Number of bytes obtained from system.",
nil, nil, nil, nil,
), ),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.Sys) }, eval: func(ms *runtime.MemStats) float64 { return float64(ms.Sys) },
@ -111,12 +115,12 @@ func NewGoCollector() Collector {
valType: GaugeValue, valType: GaugeValue,
}, { }, {
desc: NewDesc( desc: NewDesc(
memstatNamespace("heap_released_bytes_total"), memstatNamespace("heap_released_bytes"),
"Total number of heap bytes released to OS.", "Number of heap bytes released to OS.",
nil, nil, nil, nil,
), ),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapReleased) }, eval: func(ms *runtime.MemStats) float64 { return float64(ms.HeapReleased) },
valType: CounterValue, valType: GaugeValue,
}, { }, {
desc: NewDesc( desc: NewDesc(
memstatNamespace("heap_objects"), memstatNamespace("heap_objects"),
@ -213,6 +217,14 @@ func NewGoCollector() Collector {
), ),
eval: func(ms *runtime.MemStats) float64 { return float64(ms.LastGC) / 1e9 }, eval: func(ms *runtime.MemStats) float64 { return float64(ms.LastGC) / 1e9 },
valType: GaugeValue, valType: GaugeValue,
}, {
desc: NewDesc(
memstatNamespace("gc_cpu_fraction"),
"The fraction of this program's available CPU time used by the GC since the program started.",
nil, nil,
),
eval: func(ms *runtime.MemStats) float64 { return ms.GCCPUFraction },
valType: GaugeValue,
}, },
}, },
} }
@ -224,9 +236,9 @@ func memstatNamespace(s string) string {
// Describe returns all descriptions of the collector. // Describe returns all descriptions of the collector.
func (c *goCollector) Describe(ch chan<- *Desc) { func (c *goCollector) Describe(ch chan<- *Desc) {
ch <- c.goroutines.Desc() ch <- c.goroutinesDesc
ch <- c.threadsDesc
ch <- c.gcDesc ch <- c.gcDesc
for _, i := range c.metrics { for _, i := range c.metrics {
ch <- i.desc ch <- i.desc
} }
@ -234,8 +246,9 @@ func (c *goCollector) Describe(ch chan<- *Desc) {
// Collect returns the current state of all metrics of the collector. // Collect returns the current state of all metrics of the collector.
func (c *goCollector) Collect(ch chan<- Metric) { func (c *goCollector) Collect(ch chan<- Metric) {
c.goroutines.Set(float64(runtime.NumGoroutine())) ch <- MustNewConstMetric(c.goroutinesDesc, GaugeValue, float64(runtime.NumGoroutine()))
ch <- c.goroutines n, _ := runtime.ThreadCreateProfile(nil)
ch <- MustNewConstMetric(c.threadsDesc, GaugeValue, float64(n))
var stats debug.GCStats var stats debug.GCStats
stats.PauseQuantiles = make([]time.Duration, 5) stats.PauseQuantiles = make([]time.Duration, 5)
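The same Desc-plus-MustNewConstMetric pattern applies to any collector whose values can be read on demand; a sketch with an invented type and a hypothetical currentQueueDepth helper:

type queueCollector struct {
	depth *prometheus.Desc
}

func newQueueCollector() *queueCollector {
	return &queueCollector{
		depth: prometheus.NewDesc(
			"queue_depth",
			"Number of items currently queued.",
			nil, nil,
		),
	}
}

func (c *queueCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.depth
}

func (c *queueCollector) Collect(ch chan<- prometheus.Metric) {
	// Emit a throw-away metric on every scrape instead of keeping
	// mutable Gauge state around, as the go collector now does.
	ch <- prometheus.MustNewConstMetric(
		c.depth, prometheus.GaugeValue, currentQueueDepth())
}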
@ -62,7 +62,7 @@ func giveBuf(buf *bytes.Buffer) {
// //
// Deprecated: Please note the issues described in the doc comment of // Deprecated: Please note the issues described in the doc comment of
// InstrumentHandler. You might want to consider using promhttp.Handler instead // InstrumentHandler. You might want to consider using promhttp.Handler instead
// (which is non instrumented). // (which is not instrumented).
func Handler() http.Handler { func Handler() http.Handler {
return InstrumentHandler("prometheus", UninstrumentedHandler()) return InstrumentHandler("prometheus", UninstrumentedHandler())
} }
@ -172,6 +172,9 @@ func nowSeries(t ...time.Time) nower {
// httputil.ReverseProxy is a prominent example for a handler // httputil.ReverseProxy is a prominent example for a handler
// performing such writes. // performing such writes.
// //
// - It has additional issues with HTTP/2, cf.
// https://github.com/prometheus/client_golang/issues/272.
//
// Upcoming versions of this package will provide ways of instrumenting HTTP // Upcoming versions of this package will provide ways of instrumenting HTTP
// handlers that are more flexible and have fewer issues. Please prefer direct // handlers that are more flexible and have fewer issues. Please prefer direct
// instrumentation in the meantime. // instrumentation in the meantime.
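In the meantime, direct exposition without InstrumentHandler looks roughly like this sketch (port and path are illustrative):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// promhttp.Handler serves the default Gatherer and avoids the
	// InstrumentHandler issues listed above.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}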
@ -190,6 +193,7 @@ func InstrumentHandlerFunc(handlerName string, handlerFunc func(http.ResponseWri
SummaryOpts{ SummaryOpts{
Subsystem: "http", Subsystem: "http",
ConstLabels: Labels{"handler": handlerName}, ConstLabels: Labels{"handler": handlerName},
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
}, },
handlerFunc, handlerFunc,
) )
@ -245,34 +249,52 @@ func InstrumentHandlerFuncWithOpts(opts SummaryOpts, handlerFunc func(http.Respo
}, },
instLabels, instLabels,
) )
if err := Register(reqCnt); err != nil {
if are, ok := err.(AlreadyRegisteredError); ok {
reqCnt = are.ExistingCollector.(*CounterVec)
} else {
panic(err)
}
}
opts.Name = "request_duration_microseconds" opts.Name = "request_duration_microseconds"
opts.Help = "The HTTP request latencies in microseconds." opts.Help = "The HTTP request latencies in microseconds."
reqDur := NewSummary(opts) reqDur := NewSummary(opts)
if err := Register(reqDur); err != nil {
if are, ok := err.(AlreadyRegisteredError); ok {
reqDur = are.ExistingCollector.(Summary)
} else {
panic(err)
}
}
opts.Name = "request_size_bytes" opts.Name = "request_size_bytes"
opts.Help = "The HTTP request sizes in bytes." opts.Help = "The HTTP request sizes in bytes."
reqSz := NewSummary(opts) reqSz := NewSummary(opts)
if err := Register(reqSz); err != nil {
if are, ok := err.(AlreadyRegisteredError); ok {
reqSz = are.ExistingCollector.(Summary)
} else {
panic(err)
}
}
opts.Name = "response_size_bytes" opts.Name = "response_size_bytes"
opts.Help = "The HTTP response sizes in bytes." opts.Help = "The HTTP response sizes in bytes."
resSz := NewSummary(opts) resSz := NewSummary(opts)
if err := Register(resSz); err != nil {
regReqCnt := MustRegisterOrGet(reqCnt).(*CounterVec) if are, ok := err.(AlreadyRegisteredError); ok {
regReqDur := MustRegisterOrGet(reqDur).(Summary) resSz = are.ExistingCollector.(Summary)
regReqSz := MustRegisterOrGet(reqSz).(Summary) } else {
regResSz := MustRegisterOrGet(resSz).(Summary) panic(err)
}
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
now := time.Now() now := time.Now()
delegate := &responseWriterDelegator{ResponseWriter: w} delegate := &responseWriterDelegator{ResponseWriter: w}
out := make(chan int) out := computeApproximateRequestSize(r)
urlLen := 0
if r.URL != nil {
urlLen = len(r.URL.String())
}
go computeApproximateRequestSize(r, out, urlLen)
_, cn := w.(http.CloseNotifier) _, cn := w.(http.CloseNotifier)
_, fl := w.(http.Flusher) _, fl := w.(http.Flusher)
@ -290,14 +312,24 @@ func InstrumentHandlerFuncWithOpts(opts SummaryOpts, handlerFunc func(http.Respo
method := sanitizeMethod(r.Method) method := sanitizeMethod(r.Method)
code := sanitizeCode(delegate.status) code := sanitizeCode(delegate.status)
regReqCnt.WithLabelValues(method, code).Inc() reqCnt.WithLabelValues(method, code).Inc()
regReqDur.Observe(elapsed) reqDur.Observe(elapsed)
regResSz.Observe(float64(delegate.written)) resSz.Observe(float64(delegate.written))
regReqSz.Observe(float64(<-out)) reqSz.Observe(float64(<-out))
}) })
} }
func computeApproximateRequestSize(r *http.Request, out chan int, s int) { func computeApproximateRequestSize(r *http.Request) <-chan int {
// Get the URL length in the current goroutine to avoid a race
// condition: a HandlerFunc running in parallel may modify the URL.
s := 0
if r.URL != nil {
s += len(r.URL.String())
}
out := make(chan int, 1)
go func() {
s += len(r.Method) s += len(r.Method)
s += len(r.Proto) s += len(r.Proto)
for name, values := range r.Header { for name, values := range r.Header {
@ -314,6 +346,10 @@ func computeApproximateRequestSize(r *http.Request, out chan int, s int) {
s += int(r.ContentLength) s += int(r.ContentLength)
} }
out <- s out <- s
close(out)
}()
return out
} }
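The pattern used here is worth spelling out: read the racy field synchronously, do the remaining work in a goroutine, and return a buffered channel so the sender never blocks even if nobody receives. A distilled sketch with invented names:

func requestSizeAsync(r *http.Request) <-chan int {
	n := 0
	if r.URL != nil {
		n = len(r.URL.String()) // read before a parallel handler can mutate it
	}
	out := make(chan int, 1) // buffered: the goroutine never blocks
	go func() {
		out <- n + len(r.Method) + len(r.Proto)
		close(out)
	}()
	return out
}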
type responseWriterDelegator struct { type responseWriterDelegator struct {
@ -19,10 +19,10 @@ type processCollector struct {
pid int pid int
collectFn func(chan<- Metric) collectFn func(chan<- Metric)
pidFn func() (int, error) pidFn func() (int, error)
cpuTotal Counter cpuTotal *Desc
openFDs, maxFDs Gauge openFDs, maxFDs *Desc
vsize, rss Gauge vsize, rss *Desc
startTime Gauge startTime *Desc
} }
// NewProcessCollector returns a collector which exports the current state of // NewProcessCollector returns a collector which exports the current state of
@ -44,40 +44,45 @@ func NewProcessCollectorPIDFn(
pidFn func() (int, error), pidFn func() (int, error),
namespace string, namespace string,
) Collector { ) Collector {
ns := ""
if len(namespace) > 0 {
ns = namespace + "_"
}
c := processCollector{ c := processCollector{
pidFn: pidFn, pidFn: pidFn,
collectFn: func(chan<- Metric) {}, collectFn: func(chan<- Metric) {},
cpuTotal: NewCounter(CounterOpts{ cpuTotal: NewDesc(
Namespace: namespace, ns+"process_cpu_seconds_total",
Name: "process_cpu_seconds_total", "Total user and system CPU time spent in seconds.",
Help: "Total user and system CPU time spent in seconds.", nil, nil,
}), ),
openFDs: NewGauge(GaugeOpts{ openFDs: NewDesc(
Namespace: namespace, ns+"process_open_fds",
Name: "process_open_fds", "Number of open file descriptors.",
Help: "Number of open file descriptors.", nil, nil,
}), ),
maxFDs: NewGauge(GaugeOpts{ maxFDs: NewDesc(
Namespace: namespace, ns+"process_max_fds",
Name: "process_max_fds", "Maximum number of open file descriptors.",
Help: "Maximum number of open file descriptors.", nil, nil,
}), ),
vsize: NewGauge(GaugeOpts{ vsize: NewDesc(
Namespace: namespace, ns+"process_virtual_memory_bytes",
Name: "process_virtual_memory_bytes", "Virtual memory size in bytes.",
Help: "Virtual memory size in bytes.", nil, nil,
}), ),
rss: NewGauge(GaugeOpts{ rss: NewDesc(
Namespace: namespace, ns+"process_resident_memory_bytes",
Name: "process_resident_memory_bytes", "Resident memory size in bytes.",
Help: "Resident memory size in bytes.", nil, nil,
}), ),
startTime: NewGauge(GaugeOpts{ startTime: NewDesc(
Namespace: namespace, ns+"process_start_time_seconds",
Name: "process_start_time_seconds", "Start time of the process since unix epoch in seconds.",
Help: "Start time of the process since unix epoch in seconds.", nil, nil,
}), ),
} }
// Set up process metric collection if supported by the runtime. // Set up process metric collection if supported by the runtime.
@ -90,12 +95,12 @@ func NewProcessCollectorPIDFn(
// Describe returns all descriptions of the collector. // Describe returns all descriptions of the collector.
func (c *processCollector) Describe(ch chan<- *Desc) { func (c *processCollector) Describe(ch chan<- *Desc) {
ch <- c.cpuTotal.Desc() ch <- c.cpuTotal
ch <- c.openFDs.Desc() ch <- c.openFDs
ch <- c.maxFDs.Desc() ch <- c.maxFDs
ch <- c.vsize.Desc() ch <- c.vsize
ch <- c.rss.Desc() ch <- c.rss
ch <- c.startTime.Desc() ch <- c.startTime
} }
// Collect returns the current state of all metrics of the collector. // Collect returns the current state of all metrics of the collector.
@ -117,26 +122,19 @@ func (c *processCollector) processCollect(ch chan<- Metric) {
} }
if stat, err := p.NewStat(); err == nil { if stat, err := p.NewStat(); err == nil {
c.cpuTotal.Set(stat.CPUTime()) ch <- MustNewConstMetric(c.cpuTotal, CounterValue, stat.CPUTime())
ch <- c.cpuTotal ch <- MustNewConstMetric(c.vsize, GaugeValue, float64(stat.VirtualMemory()))
c.vsize.Set(float64(stat.VirtualMemory())) ch <- MustNewConstMetric(c.rss, GaugeValue, float64(stat.ResidentMemory()))
ch <- c.vsize
c.rss.Set(float64(stat.ResidentMemory()))
ch <- c.rss
if startTime, err := stat.StartTime(); err == nil { if startTime, err := stat.StartTime(); err == nil {
c.startTime.Set(startTime) ch <- MustNewConstMetric(c.startTime, GaugeValue, startTime)
ch <- c.startTime
} }
} }
if fds, err := p.FileDescriptorsLen(); err == nil { if fds, err := p.FileDescriptorsLen(); err == nil {
c.openFDs.Set(float64(fds)) ch <- MustNewConstMetric(c.openFDs, GaugeValue, float64(fds))
ch <- c.openFDs
} }
if limits, err := p.NewLimits(); err == nil { if limits, err := p.NewLimits(); err == nil {
c.maxFDs.Set(float64(limits.OpenFiles)) ch <- MustNewConstMetric(c.maxFDs, GaugeValue, float64(limits.OpenFiles))
ch <- c.maxFDs
} }
} }
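Registering the collector is unchanged by this refactoring. A sketch, using a non-empty namespace (the name is illustrative) so the descriptors do not collide with the process collector the default registry already carries:

import (
	"os"

	"github.com/prometheus/client_golang/prometheus"
)

func init() {
	// Yields myexporter_process_cpu_seconds_total and friends.
	prometheus.MustRegister(
		prometheus.NewProcessCollector(os.Getpid(), "myexporter"))
}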
@ -152,38 +152,6 @@ func MustRegister(cs ...Collector) {
DefaultRegisterer.MustRegister(cs...) DefaultRegisterer.MustRegister(cs...)
} }
// RegisterOrGet registers the provided Collector with the DefaultRegisterer and
// returns the Collector, unless an equal Collector was registered before, in
// which case that Collector is returned.
//
// Deprecated: RegisterOrGet is merely a convenience function for the
// implementation as described in the documentation for
// AlreadyRegisteredError. As the use case is relatively rare, this function
// will be removed in a future version of this package to clean up the
// namespace.
func RegisterOrGet(c Collector) (Collector, error) {
if err := Register(c); err != nil {
if are, ok := err.(AlreadyRegisteredError); ok {
return are.ExistingCollector, nil
}
return nil, err
}
return c, nil
}
// MustRegisterOrGet behaves like RegisterOrGet but panics instead of returning
// an error.
//
// Deprecated: This is deprecated for the same reason RegisterOrGet is. See
// there for details.
func MustRegisterOrGet(c Collector) Collector {
c, err := RegisterOrGet(c)
if err != nil {
panic(err)
}
return c
}
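The replacement idiom, spelled out in the AlreadyRegisteredError documentation, looks roughly like this (metric name invented for illustration):

requests := prometheus.NewCounter(prometheus.CounterOpts{
	Name: "http_requests_total",
	Help: "Total number of HTTP requests.",
})
if err := prometheus.Register(requests); err != nil {
	if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
		// Reuse the Counter registered by an earlier call.
		requests = are.ExistingCollector.(prometheus.Counter)
	} else {
		panic(err)
	}
}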
// Unregister removes the registration of the provided Collector from the // Unregister removes the registration of the provided Collector from the
// DefaultRegisterer. // DefaultRegisterer.
// //
@ -201,25 +169,6 @@ func (gf GathererFunc) Gather() ([]*dto.MetricFamily, error) {
return gf() return gf()
} }
// SetMetricFamilyInjectionHook replaces the DefaultGatherer with one that
// gathers from the previous DefaultGatherers but then merges the MetricFamily
// protobufs returned from the provided hook function with the MetricFamily
// protobufs returned from the original DefaultGatherer.
//
// Deprecated: This function manipulates the DefaultGatherer variable. Consider
// the implications, i.e. don't do this concurrently with any uses of the
// DefaultGatherer. In the rare cases where you need to inject MetricFamily
// protobufs directly, it is recommended to use a custom Registry and combine it
// with a custom Gatherer using the Gatherers type (see
// there). SetMetricFamilyInjectionHook only exists for compatibility reasons
// with previous versions of this package.
func SetMetricFamilyInjectionHook(hook func() []*dto.MetricFamily) {
DefaultGatherer = Gatherers{
DefaultGatherer,
GathererFunc(func() ([]*dto.MetricFamily, error) { return hook(), nil }),
}
}
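A hedged sketch of the recommended replacement, combining the default Gatherer with a custom one; the extra function is a hypothetical source of MetricFamily protobufs:

import (
	"net/http"

	dto "github.com/prometheus/client_model/go"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func expose(extra func() []*dto.MetricFamily) {
	g := prometheus.Gatherers{
		prometheus.DefaultGatherer,
		prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) {
			return extra(), nil
		}),
	}
	http.Handle("/metrics", promhttp.HandlerFor(g, promhttp.HandlerOpts{}))
}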
// AlreadyRegisteredError is returned by the Register method if the Collector to // AlreadyRegisteredError is returned by the Register method if the Collector to
// be registered has already been registered before, or a different Collector // be registered has already been registered before, or a different Collector
// that collects the same metrics has been registered before. Registration fails // that collects the same metrics has been registered before. Registration fails
@ -294,7 +243,7 @@ func (r *Registry) Register(c Collector) error {
}() }()
r.mtx.Lock() r.mtx.Lock()
defer r.mtx.Unlock() defer r.mtx.Unlock()
// Coduct various tests... // Conduct various tests...
for desc := range descChan { for desc := range descChan {
// Is the descriptor valid at all? // Is the descriptor valid at all?
@ -447,7 +396,7 @@ func (r *Registry) Gather() ([]*dto.MetricFamily, error) {
// Drain metricChan in case of premature return. // Drain metricChan in case of premature return.
defer func() { defer func() {
for _ = range metricChan { for range metricChan {
} }
}() }()
@ -683,7 +632,7 @@ func (s metricSorter) Less(i, j int) bool {
return s[i].GetTimestampMs() < s[j].GetTimestampMs() return s[i].GetTimestampMs() < s[j].GetTimestampMs()
} }
// normalizeMetricFamilies returns a MetricFamily slice whith empty // normalizeMetricFamilies returns a MetricFamily slice with empty
// MetricFamilies pruned and the remaining MetricFamilies sorted by name within // MetricFamilies pruned and the remaining MetricFamilies sorted by name within
// the slice, with the contained Metrics sorted within each MetricFamily. // the slice, with the contained Metrics sorted within each MetricFamily.
func normalizeMetricFamilies(metricFamiliesByName map[string]*dto.MetricFamily) []*dto.MetricFamily { func normalizeMetricFamilies(metricFamiliesByName map[string]*dto.MetricFamily) []*dto.MetricFamily {
@ -54,6 +54,9 @@ type Summary interface {
} }
// DefObjectives are the default Summary quantile values. // DefObjectives are the default Summary quantile values.
//
// Deprecated: DefObjectives will not be used as the default objectives in
// v0.10 of the library. The default Summary will have no quantiles then.
var ( var (
DefObjectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001} DefObjectives = map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}
@ -113,9 +116,15 @@ type SummaryOpts struct {
ConstLabels Labels ConstLabels Labels
// Objectives defines the quantile rank estimates with their respective // Objectives defines the quantile rank estimates with their respective
// absolute error. If Objectives[q] = e, then the value reported // absolute error. If Objectives[q] = e, then the value reported for q
// for q will be the φ-quantile value for some φ between q-e and q+e. // will be the φ-quantile value for some φ between q-e and q+e. The
// The default value is DefObjectives. // default value is DefObjectives. It is used if Objectives is left at
// its zero value (i.e. nil). To create a Summary without Objectives,
// set it to an empty map (i.e. map[float64]float64{}).
//
// Deprecated: Note that the current value of DefObjectives is
// deprecated. It will be replaced by an empty map in v0.10 of the
// library. Please explicitly set Objectives to the desired value.
Objectives map[float64]float64 Objectives map[float64]float64
// MaxAge defines the duration for which an observation stays relevant // MaxAge defines the duration for which an observation stays relevant
@ -183,7 +192,7 @@ func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary {
} }
} }
if len(opts.Objectives) == 0 { if opts.Objectives == nil {
opts.Objectives = DefObjectives opts.Objectives = DefObjectives
} }
@ -0,0 +1,74 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import "time"
// Observer is the interface that wraps the Observe method, which is used by
// Histogram and Summary to add observations.
type Observer interface {
Observe(float64)
}
// The ObserverFunc type is an adapter to allow the use of ordinary
// functions as Observers. If f is a function with the appropriate
// signature, ObserverFunc(f) is an Observer that calls f.
//
// This adapter is usually used in connection with the Timer type, and there are
// two general use cases:
//
// The most common one is to use a Gauge as the Observer for a Timer.
// See the "Gauge" Timer example.
//
// The more advanced use case is to create a function that dynamically decides
// which Observer to use for observing the duration. See the "Complex" Timer
// example.
type ObserverFunc func(float64)
// Observe calls f(value). It implements Observer.
func (f ObserverFunc) Observe(value float64) {
f(value)
}
// Timer is a helper type to time functions. Use NewTimer to create new
// instances.
type Timer struct {
begin time.Time
observer Observer
}
// NewTimer creates a new Timer. The provided Observer is used to observe a
// duration in seconds. Timer is usually used to time a function call in the
// following way:
// func TimeMe() {
// timer := NewTimer(myHistogram)
// defer timer.ObserveDuration()
// // Do actual work.
// }
func NewTimer(o Observer) *Timer {
return &Timer{
begin: time.Now(),
observer: o,
}
}
// ObserveDuration records the duration passed since the Timer was created with
// NewTimer. It calls the Observe method of the Observer provided during
// construction with the duration in seconds as an argument. ObserveDuration is
// usually called with a defer statement.
func (t *Timer) ObserveDuration() {
if t.observer != nil {
t.observer.Observe(time.Since(t.begin).Seconds())
}
}
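Typical use of the new helper, including the ObserverFunc adapter for a Gauge (function and variable names invented for illustration):

func timedWork(hist prometheus.Histogram, lastDuration prometheus.Gauge) {
	// A Histogram satisfies Observer directly.
	timer := prometheus.NewTimer(hist)
	defer timer.ObserveDuration()

	// A Gauge does not, but ObserverFunc adapts its Set method.
	t := prometheus.NewTimer(prometheus.ObserverFunc(lastDuration.Set))
	defer t.ObserveDuration()

	// ... actual work ...
}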
@ -20,6 +20,11 @@ package prometheus
// no type information is implied. // no type information is implied.
// //
// To create Untyped instances, use NewUntyped. // To create Untyped instances, use NewUntyped.
//
// Deprecated: The Untyped type is deprecated because it doesn't make sense in
// direct instrumentation. If you need to mirror an external metric of unknown
// type (usually while writing exporters), use MustNewConstMetric to create an
// untyped metric instance on the fly.
type Untyped interface { type Untyped interface {
Metric Metric
Collector Collector
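A sketch of that replacement inside a collector; the type and the readExternalValue helper are hypothetical:

type mirrorCollector struct {
	desc *prometheus.Desc
}

func (c *mirrorCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.desc
}

func (c *mirrorCollector) Collect(ch chan<- prometheus.Metric) {
	// Mirror a value of unknown type from an external system as an
	// untyped metric created on the fly.
	ch <- prometheus.MustNewConstMetric(
		c.desc, prometheus.UntypedValue, readExternalValue())
}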
@ -19,6 +19,7 @@ import (
"math" "math"
"sort" "sort"
"sync/atomic" "sync/atomic"
"time"
dto "github.com/prometheus/client_model/go" dto "github.com/prometheus/client_model/go"
@ -43,7 +44,7 @@ var errInconsistentCardinality = errors.New("inconsistent label cardinality")
// ValueType. This is a low-level building block used by the library to back the // ValueType. This is a low-level building block used by the library to back the
// implementations of Counter, Gauge, and Untyped. // implementations of Counter, Gauge, and Untyped.
type value struct { type value struct {
// valBits containst the bits of the represented float64 value. It has // valBits contains the bits of the represented float64 value. It has
// to go first in the struct to guarantee alignment for atomic // to go first in the struct to guarantee alignment for atomic
// operations. http://golang.org/pkg/sync/atomic/#pkg-note-BUG // operations. http://golang.org/pkg/sync/atomic/#pkg-note-BUG
valBits uint64 valBits uint64
@ -80,6 +81,10 @@ func (v *value) Set(val float64) {
atomic.StoreUint64(&v.valBits, math.Float64bits(val)) atomic.StoreUint64(&v.valBits, math.Float64bits(val))
} }
func (v *value) SetToCurrentTime() {
v.Set(float64(time.Now().UnixNano()) / 1e9)
}
func (v *value) Inc() { func (v *value) Inc() {
v.Add(1) v.Add(1)
} }
vendor/vendor.json
@ -603,10 +603,10 @@
"revisionTime": "2016-06-15T09:26:46Z" "revisionTime": "2016-06-15T09:26:46Z"
}, },
{ {
"checksumSHA1": "KkB+77Ziom7N6RzSbyUwYGrmDeU=", "checksumSHA1": "d2irkxoHgazkTuLIvJGiYwagl8o=",
"path": "github.com/prometheus/client_golang/prometheus", "path": "github.com/prometheus/client_golang/prometheus",
"revision": "c5b7fccd204277076155f10851dad72b76a49317", "revision": "08fd2e12372a66e68e30523c7642e0cbc3e4fbde",
"revisionTime": "2016-08-17T15:48:24Z" "revisionTime": "2017-04-01T10:34:46Z"
}, },
{ {
"checksumSHA1": "DvwvOlPNAgRntBzt3b3OSRMS2N4=", "checksumSHA1": "DvwvOlPNAgRntBzt3b3OSRMS2N4=",
@ -615,10 +615,10 @@
"revisionTime": "2015-02-12T10:17:44Z" "revisionTime": "2015-02-12T10:17:44Z"
}, },
{ {
"checksumSHA1": "jG8qYuDUuaZeflt4JxBBdyQBsXw=", "checksumSHA1": "Wtpzndm/+bdwwNU5PCTfb4oUhc8=",
"path": "github.com/prometheus/common/expfmt", "path": "github.com/prometheus/common/expfmt",
"revision": "dd2f054febf4a6c00f2343686efb775948a8bff4", "revision": "49fee292b27bfff7f354ee0f64e1bc4850462edf",
"revisionTime": "2017-01-08T23:12:12Z" "revisionTime": "2017-02-20T10:38:46Z"
}, },
{ {
"checksumSHA1": "GWlM3d2vPYyNATtTFgftS10/A9w=", "checksumSHA1": "GWlM3d2vPYyNATtTFgftS10/A9w=",
@ -633,10 +633,10 @@
"revisionTime": "2017-01-08T23:12:12Z" "revisionTime": "2017-01-08T23:12:12Z"
}, },
{ {
"checksumSHA1": "vopCLXHzYm+3l5fPKOf4/fQwrCM=", "checksumSHA1": "0LL9u9tfv1KPBjNEiMDP6q7lpog=",
"path": "github.com/prometheus/common/model", "path": "github.com/prometheus/common/model",
"revision": "3007b6072c17c8d985734e6e19b1dea9174e13d3", "revision": "49fee292b27bfff7f354ee0f64e1bc4850462edf",
"revisionTime": "2017-02-19T00:35:58+01:00" "revisionTime": "2017-02-20T10:38:46Z"
}, },
{ {
"checksumSHA1": "ZbbESWBHHcPUJ/A5yrzKhTHuPc8=", "checksumSHA1": "ZbbESWBHHcPUJ/A5yrzKhTHuPc8=",
@ -19,6 +19,7 @@ import (
"fmt" "fmt"
"math" "math"
"net/http" "net/http"
"net/url"
"strconv" "strconv"
"time" "time"
@ -50,6 +51,7 @@ const (
errorCanceled = "canceled" errorCanceled = "canceled"
errorExec = "execution" errorExec = "execution"
errorBadData = "bad_data" errorBadData = "bad_data"
errorInternal = "internal"
) )
var corsHeaders = map[string]string{ var corsHeaders = map[string]string{
@ -73,7 +75,7 @@ type targetRetriever interface {
} }
type alertmanagerRetriever interface { type alertmanagerRetriever interface {
Alertmanagers() []string Alertmanagers() []*url.URL
} }
type response struct { type response struct {
@ -194,6 +196,8 @@ func (api *API) query(r *http.Request) (interface{}, *apiError) {
return nil, &apiError{errorCanceled, res.Err} return nil, &apiError{errorCanceled, res.Err}
case promql.ErrQueryTimeout: case promql.ErrQueryTimeout:
return nil, &apiError{errorTimeout, res.Err} return nil, &apiError{errorTimeout, res.Err}
case promql.ErrStorage:
return nil, &apiError{errorInternal, res.Err}
} }
return nil, &apiError{errorExec, res.Err} return nil, &apiError{errorExec, res.Err}
} }
@ -362,7 +366,7 @@ func (api *API) dropSeries(r *http.Request) (interface{}, *apiError) {
} }
// TODO(fabxc): temporarily disabled // TODO(fabxc): temporarily disabled
panic("disabled") return nil, &apiError{errorExec, fmt.Errorf("temporarily disabled")}
// numDeleted := 0 // numDeleted := 0
// for _, s := range r.Form["match[]"] { // for _, s := range r.Form["match[]"] {
@ -442,8 +446,8 @@ func (api *API) alertmanagers(r *http.Request) (interface{}, *apiError) {
urls := api.alertmanagerRetriever.Alertmanagers() urls := api.alertmanagerRetriever.Alertmanagers()
ams := &AlertmanagerDiscovery{ActiveAlertmanagers: make([]*AlertmanagerTarget, len(urls))} ams := &AlertmanagerDiscovery{ActiveAlertmanagers: make([]*AlertmanagerTarget, len(urls))}
for i := range urls { for i, url := range urls {
ams.ActiveAlertmanagers[i] = &AlertmanagerTarget{URL: urls[i]} ams.ActiveAlertmanagers[i] = &AlertmanagerTarget{URL: url.String()}
} }
return ams, nil return ams, nil
@ -474,6 +478,8 @@ func respondError(w http.ResponseWriter, apiErr *apiError, data interface{}) {
code = 422 code = 422
case errorCanceled, errorTimeout: case errorCanceled, errorTimeout:
code = http.StatusServiceUnavailable code = http.StatusServiceUnavailable
case errorInternal:
code = http.StatusInternalServerError
default: default:
code = http.StatusInternalServerError code = http.StatusInternalServerError
} }
@ -41,9 +41,9 @@ func (f targetRetrieverFunc) Targets() []*retrieval.Target {
return f() return f()
} }
type alertmanagerRetrieverFunc func() []string type alertmanagerRetrieverFunc func() []*url.URL
func (f alertmanagerRetrieverFunc) Alertmanagers() []string { func (f alertmanagerRetrieverFunc) Alertmanagers() []*url.URL {
return f() return f()
} }
@ -79,8 +79,12 @@ func TestEndpoints(t *testing.T) {
} }
}) })
ar := alertmanagerRetrieverFunc(func() []string { ar := alertmanagerRetrieverFunc(func() []*url.URL {
return []string{"http://alertmanager.example.com:8080/api/v1/alerts"} return []*url.URL{{
Scheme: "http",
Host: "alertmanager.example.com:8080",
Path: "/api/v1/alerts",
}}
}) })
api := &API{ api := &API{
@ -430,15 +434,27 @@ func TestEndpoints(t *testing.T) {
// }{2}, // }{2},
// }, { // }, {
// endpoint: api.targets, // endpoint: api.targets,
// response: []*Target{ // response: &TargetDiscovery{
// &Target{ // ActiveTargets: []*Target{
// DiscoveredLabels: nil, // {
// Labels: nil, // DiscoveredLabels: model.LabelSet{},
// ScrapeUrl: "http://example.com:8080/metrics", // Labels: model.LabelSet{},
// ScrapeURL: "http://example.com:8080/metrics",
// Health: "unknown", // Health: "unknown",
// }, // },
// }, // },
// }, // },
// },
{
endpoint: api.alertmanagers,
response: &AlertmanagerDiscovery{
ActiveAlertmanagers: []*AlertmanagerTarget{
{
URL: "http://alertmanager.example.com:8080/api/v1/alerts",
},
},
},
},
} }
for _, test := range tests { for _, test := range tests {
@ -44,23 +44,27 @@ body {
.legend { .legend {
display: inline-block; display: inline-block;
vertical-align: top; vertical-align: top;
margin: 0 0 0 40px; margin: 0 0 0 60px;
} }
.graph_area { .graph_area {
position: relative; position: relative;
font-family: Arial, Helvetica, sans-serif; font-family: Arial, Helvetica, sans-serif;
margin: 5px 0 5px 0; margin: 5px 0 5px 20px;
} }
.y_axis { .y_axis {
overflow: hidden; overflow: visible;
position: absolute; position: absolute;
top: 1px; top: 1px;
bottom: 0; bottom: 0;
width: 40px; width: 40px;
} }
.y_axis svg {
overflow: visible;
}
.graph .detail .item.active { .graph .detail .item.active {
line-height: 1.4em; line-height: 1.4em;
padding: 0.5em; padding: 0.5em;
@ -126,7 +130,7 @@ input[name="end_input"], input[name="range_input"] {
} }
.prometheus_input_group.range_input { .prometheus_input_group.range_input {
margin-left: 39px; margin-left: 59px;
} }
.prometheus_input_group .btn { .prometheus_input_group .btn {
@ -415,7 +415,7 @@ Prometheus.Graph.prototype.submitQuery = function() {
return; return;
} }
var duration = new Date().getTime() - startTime; var duration = new Date().getTime() - startTime;
var totalTimeSeries = xhr.responseJSON.data.result.length; var totalTimeSeries = (xhr.responseJSON.data !== undefined) ? xhr.responseJSON.data.result.length : 0;
self.evalStats.html("Load time: " + duration + "ms <br /> Resolution: " + resolution + "s <br />" + "Total time series: " + totalTimeSeries); self.evalStats.html("Load time: " + duration + "ms <br /> Resolution: " + resolution + "s <br />" + "Total time series: " + totalTimeSeries);
self.spinner.hide(); self.spinner.hide();
} }
@ -556,6 +556,27 @@ Prometheus.Graph.prototype.updateGraph = function() {
min: "auto", min: "auto",
}); });
// Find and set graph's max/min
var min = Infinity;
var max = -Infinity;
self.data.forEach(function(timeSeries) {
timeSeries.data.forEach(function(dataPoint) {
if (dataPoint.y < min && dataPoint.y != null) {
min = dataPoint.y;
}
if (dataPoint.y > max && dataPoint.y != null) {
max = dataPoint.y;
}
});
});
if (min === max) {
self.rickshawGraph.max = max + 1;
self.rickshawGraph.min = min - 1;
} else {
self.rickshawGraph.max = max + (0.1*(Math.abs(max - min)));
self.rickshawGraph.min = min - (0.1*(Math.abs(max - min)));
}
var xAxis = new Rickshaw.Graph.Axis.Time({ graph: self.rickshawGraph }); var xAxis = new Rickshaw.Graph.Axis.Time({ graph: self.rickshawGraph });
var yAxis = new Rickshaw.Graph.Axis.Y({ var yAxis = new Rickshaw.Graph.Axis.Y({
@ -578,7 +578,7 @@ PromConsole.Graph.prototype.dispatch = function() {
} }
var loadingImg = document.createElement("img"); var loadingImg = document.createElement("img");
loadingImg.src = PATH_PREFIX + '/static/img/ajax-loader.gif?v=' + BUILD_VERSION; loadingImg.src = PATH_PREFIX + '/static/img/ajax-loader.gif';
loadingImg.alt = 'Loading...'; loadingImg.alt = 'Loading...';
loadingImg.className = 'prom_graph_loading'; loadingImg.className = 'prom_graph_loading';
this.graphTd.appendChild(loadingImg); this.graphTd.appendChild(loadingImg);
@ -54,7 +54,8 @@
</tr> </tr>
{{range .Alertmanagers}} {{range .Alertmanagers}}
<tr> <tr>
<td>{{.}}</td> {{/* Alertmanager URLs always have Scheme, Host and Path set */}}
<td>{{.Scheme}}://<a href="{{.Scheme}}://{{.Host}}">{{.Host}}</a>{{.Path}}</td>
</tr> </tr>
{{end}} {{end}}
</tbody> </tbody>
@ -348,7 +348,7 @@ func (h *Handler) status(w http.ResponseWriter, r *http.Request) {
Birth time.Time Birth time.Time
CWD string CWD string
Version *PrometheusVersion Version *PrometheusVersion
Alertmanagers []string Alertmanagers []*url.URL
}{ }{
Birth: h.birth, Birth: h.birth,
CWD: h.cwd, CWD: h.cwd,