Optimise PromQL (#3966)

* Move range logic to 'eval'

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make aggregate range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* PromQL is statically typed, so don't eval to find the type.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Extend rangewrapper to multiple exprs

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Start making function evaluation ranged

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make instant queries a special case of range queries

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Eliminate evalString

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Evaluate range vector functions one series at a time

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make unary operators range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make binops range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Pass time to range-aware functions.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make simple _over_time functions range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reduce allocs when working with matrix selectors

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Add basic benchmark for range evaluation

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reuse objects for function arguments

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Do dropMetricName and output vector allocation only once.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Add range-aware support for range vector functions with params

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Optimise holt_winters, cutting CPU and allocs by ~25%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make rate&friends range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make more functions range aware. Document calling convention.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make date functions range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make simple math functions range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Convert more functions to be range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make more functions range aware

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Special-case timestamp() with a vector selector arg for range awareness

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove transition code for functions

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove the rest of the engine transition code

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove more obsolete code

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove the last uses of the eval* functions

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove engine finalizers to prevent corruption

The finalizers set by matrixSelector were being called
just before the value they were returning to the pool
was provided to the caller. Thus a concurrent query
could corrupt the data that had just been returned to the user.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
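
A rough sketch of the safer pattern the engine moves towards (explicit reuse via Query.Close instead of finalizers); the pool and names below are illustrative, not the engine's actual code:

    import "sync"

    // Point is a single (timestamp, value) sample.
    type Point struct {
        T int64
        V float64
    }

    // pointPool hands out reusable point slices.
    var pointPool = sync.Pool{New: func() interface{} { return make([]Point, 0, 64) }}

    func getPointSlice() []Point { return pointPool.Get().([]Point)[:0] }

    // putPointSlice is called explicitly once the caller is finished with the
    // result (e.g. from Query.Close). A finalizer could fire while a concurrent
    // reader still holds the slice, so an explicit hand-back avoids corruption.
    func putPointSlice(p []Point) { pointPool.Put(p[:0]) }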

* Add new benchmark suite for range functions

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Migrate existing benchmarks to new system

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Expand promql benchmarks

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Simplify test by removing unused range code

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* When testing instant queries, check range queries too.

To protect against subsequent steps in a range query being
affected by previous steps, add a test that evaluates
an instant query that we know works again as a range query,
with the timestamp we care about not being the first step.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reuse ring for matrix iters. Put query results back in pool.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reuse buffer when iterating over matrix selectors

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Unary minus should remove metric name

Cut down benchmarks for faster runs.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reduce repetition in benchmark test cases

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Work series by series when doing normal vectorSelectors

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Optimise benchmark setup, cutting time by 60%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Have rangeWrapper use an evalNodeHelper to cache across steps

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Use evalNodeHelper with functions

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Cache dropMetricName within a node evaluation.

This saves both the calculations and allocs done by dropMetricName
across steps.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
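
A minimal sketch of the idea, using hypothetical names (the real EvalNodeHelper differs): memoise the label set without __name__ per input series, so the work happens once per node evaluation rather than once per step.

    import "github.com/prometheus/prometheus/pkg/labels"

    // dropMetricNameCache caches dropMetricName results keyed by the hash of
    // the original label set, so repeated steps reuse the same labels.Labels.
    type dropMetricNameCache map[uint64]labels.Labels

    func (c dropMetricNameCache) dropMetricName(l labels.Labels) labels.Labels {
        h := l.Hash()
        if out, ok := c[h]; ok {
            return out
        }
        out := labels.NewBuilder(l).Del(labels.MetricName).Labels()
        c[h] = out
        return out
    }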

* Reuse input vectors in rangewrapper

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Reuse the point slices in the matrixes input/output by rangeWrapper

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make benchmark setup faster using AddFast

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Simplify benchmark code.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Add caching in VectorBinop

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Use XOR to get a one-level resultMetric hash key

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
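
The gist, with illustrative names: instead of a nested map keyed by both operands' label hashes, the two hashes are XORed into a single 64-bit key for a flat map.

    // resultMetricKey folds the signatures of the two operands into one map
    // key, e.g. for a cache of the form map[uint64]labels.Labels.
    func resultMetricKey(lhs, rhs uint64) uint64 {
        return lhs ^ rhs
    }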

* Add more benchmarks

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Call Query.Close in apiv1

This allows point slices allocated for the response data
to be reused by later queries, saving allocations.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
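
In essence (see the api.go hunk further down), each apiFunc now returns an optional finalizer, and the HTTP wrapper invokes it after the response has been written; the finalizer is typically qry.Close, which returns the query's point slices to the engine's pool:

    data, err, finalizer := f(r)
    if err != nil {
        respondError(w, err, data)
    } else if data != nil {
        respond(w, data)
    } else {
        w.WriteHeader(http.StatusNoContent)
    }
    if finalizer != nil {
        finalizer()
    }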

* Optimise histogram_quantile

It's now 5-10% faster with 97% less garbage generated for 1k steps

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make the input collection in rangeVector linear rather than quadratic

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Optimise label_replace, for 1k steps 15x fewer allocs and 3x faster

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Optimise label_join, 1.8x faster and 11x less memory for 1k steps

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Expand benchmarks, cleanup comments, simplify numSteps logic.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Address Fabian's comments

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Comments from Alin.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Address jrv's comments

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Remove dead code

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Address Simon's comments.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Rename populateIterators, pre-init some sizes

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Handle case where function has non-matrix args first

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Split rangeWrapper out to rangeEval function, improve comments

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Cleanup and make things more consistent

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Make EvalNodeHelper public

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

* Fabian's comments.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>

Brian Brazil authored 2018-06-04 14:47:45 +01:00; committed by Fabian Reinartz
parent 9dc763cc03
commit dd6781add2
12 changed files with 1226 additions and 990 deletions


@@ -132,9 +132,8 @@ type MatrixSelector struct {
 	Offset        time.Duration
 	LabelMatchers []*labels.Matcher
 
-	// The series iterators are populated at query preparation time.
+	// The series are populated at query preparation time.
 	series    []storage.Series
-	iterators []*storage.BufferedSeriesIterator
 }
 
 // NumberLiteral represents a number.
@@ -166,9 +165,8 @@ type VectorSelector struct {
 	Offset        time.Duration
 	LabelMatchers []*labels.Matcher
 
-	// The series iterators are populated at query preparation time.
+	// The series are populated at query preparation time.
 	series    []storage.Series
-	iterators []*storage.BufferedSeriesIterator
 }
 
 func (e *AggregateExpr) Type() ValueType { return ValueTypeVector }


@@ -13,36 +13,183 @@
 package promql
 
-import "testing"
-
-// A Benchmark holds context for running a unit test as a benchmark.
-type Benchmark struct {
-	b         *testing.B
-	t         *Test
-	iterCount int
-}
-
-// NewBenchmark returns an initialized empty Benchmark.
-func NewBenchmark(b *testing.B, input string) *Benchmark {
-	t, err := NewTest(b, input)
-	if err != nil {
-		b.Fatalf("Unable to run benchmark: %s", err)
-	}
-	return &Benchmark{
-		b: b,
-		t: t,
-	}
-}
-
-// Run runs the benchmark.
-func (b *Benchmark) Run() {
-	defer b.t.Close()
-	b.b.ReportAllocs()
-	b.b.ResetTimer()
-	for i := 0; i < b.b.N; i++ {
-		if err := b.t.RunAsBenchmark(b); err != nil {
-			b.b.Error(err)
-		}
-		b.iterCount++
-	}
-}
+import (
+	"context"
+	"fmt"
+	"strconv"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/prometheus/prometheus/pkg/labels"
+	"github.com/prometheus/prometheus/util/testutil"
+)
+
+func BenchmarkRangeQuery(b *testing.B) {
+	storage := testutil.NewStorage(b)
+	defer storage.Close()
+	engine := NewEngine(nil, nil, 10, 100*time.Second)
+
+	metrics := []labels.Labels{}
+	metrics = append(metrics, labels.FromStrings("__name__", "a_one"))
+	metrics = append(metrics, labels.FromStrings("__name__", "b_one"))
+	for j := 0; j < 10; j++ {
+		metrics = append(metrics, labels.FromStrings("__name__", "h_one", "le", strconv.Itoa(j)))
+	}
+	metrics = append(metrics, labels.FromStrings("__name__", "h_one", "le", "+Inf"))
+
+	for i := 0; i < 10; i++ {
+		metrics = append(metrics, labels.FromStrings("__name__", "a_ten", "l", strconv.Itoa(i)))
+		metrics = append(metrics, labels.FromStrings("__name__", "b_ten", "l", strconv.Itoa(i)))
+		for j := 0; j < 10; j++ {
+			metrics = append(metrics, labels.FromStrings("__name__", "h_ten", "l", strconv.Itoa(i), "le", strconv.Itoa(j)))
+		}
+		metrics = append(metrics, labels.FromStrings("__name__", "h_ten", "l", strconv.Itoa(i), "le", "+Inf"))
+	}
+
+	for i := 0; i < 100; i++ {
+		metrics = append(metrics, labels.FromStrings("__name__", "a_hundred", "l", strconv.Itoa(i)))
+		metrics = append(metrics, labels.FromStrings("__name__", "b_hundred", "l", strconv.Itoa(i)))
+		for j := 0; j < 10; j++ {
+			metrics = append(metrics, labels.FromStrings("__name__", "h_hundred", "l", strconv.Itoa(i), "le", strconv.Itoa(j)))
+		}
+		metrics = append(metrics, labels.FromStrings("__name__", "h_hundred", "l", strconv.Itoa(i), "le", "+Inf"))
+	}
+	refs := make([]uint64, len(metrics))
+
+	// A day of data plus 10k steps.
+	numIntervals := 8640 + 10000
+
+	for s := 0; s < numIntervals; s += 1 {
+		a, err := storage.Appender()
+		if err != nil {
+			b.Fatal(err)
+		}
+		ts := int64(s * 10000) // 10s interval.
+		for i, metric := range metrics {
+			err := a.AddFast(metric, refs[i], ts, float64(s))
+			if err != nil {
+				refs[i], _ = a.Add(metric, ts, float64(s))
+			}
+		}
+		if err := a.Commit(); err != nil {
+			b.Fatal(err)
+		}
+	}
+
+	type benchCase struct {
+		expr  string
+		steps int
+	}
+	cases := []benchCase{
+		// Simple rate.
+		{
+			expr: "rate(a_X[1m])",
+		},
+		{
+			expr:  "rate(a_X[1m])",
+			steps: 10000,
+		},
+		// Holt-Winters and long ranges.
+		{
+			expr: "holt_winters(a_X[1d], 0.3, 0.3)",
+		},
+		{
+			expr: "changes(a_X[1d])",
+		},
+		{
+			expr: "rate(a_X[1d])",
+		},
+		// Unary operators.
+		{
+			expr: "-a_X",
+		},
+		// Binary operators.
+		{
+			expr: "a_X - b_X",
+		},
+		{
+			expr:  "a_X - b_X",
+			steps: 10000,
+		},
+		{
+			expr: "a_X and b_X{l=~'.*[0-4]$'}",
+		},
+		{
+			expr: "a_X or b_X{l=~'.*[0-4]$'}",
+		},
+		{
+			expr: "a_X unless b_X{l=~'.*[0-4]$'}",
+		},
+		// Simple functions.
+		{
+			expr: "abs(a_X)",
+		},
+		{
+			expr: "label_replace(a_X, 'l2', '$1', 'l', '(.*)')",
+		},
+		{
+			expr: "label_join(a_X, 'l2', '-', 'l', 'l')",
+		},
+		// Combinations.
+		{
+			expr: "rate(a_X[1m]) + rate(b_X[1m])",
+		},
+		{
+			expr: "sum without (l)(rate(a_X[1m]))",
+		},
+		{
+			expr: "sum without (l)(rate(a_X[1m])) / sum without (l)(rate(b_X[1m]))",
+		},
+		{
+			expr: "histogram_quantile(0.9, rate(h_X[5m]))",
+		},
+	}
+
+	// X in an expr will be replaced by different metric sizes.
+	tmp := []benchCase{}
+	for _, c := range cases {
+		if !strings.Contains(c.expr, "X") {
+			tmp = append(tmp, c)
+		} else {
+			tmp = append(tmp, benchCase{expr: strings.Replace(c.expr, "X", "one", -1), steps: c.steps})
+			tmp = append(tmp, benchCase{expr: strings.Replace(c.expr, "X", "ten", -1), steps: c.steps})
+			tmp = append(tmp, benchCase{expr: strings.Replace(c.expr, "X", "hundred", -1), steps: c.steps})
+		}
+	}
+	cases = tmp
+
+	// No step will be replaced by cases with the standard step.
+	tmp = []benchCase{}
+	for _, c := range cases {
+		if c.steps != 0 {
+			tmp = append(tmp, c)
+		} else {
+			tmp = append(tmp, benchCase{expr: c.expr, steps: 1})
+			tmp = append(tmp, benchCase{expr: c.expr, steps: 10})
+			tmp = append(tmp, benchCase{expr: c.expr, steps: 100})
+			tmp = append(tmp, benchCase{expr: c.expr, steps: 1000})
+		}
+	}
+	cases = tmp
+
+	for _, c := range cases {
+		name := fmt.Sprintf("expr=%s,steps=%d", c.expr, c.steps)
+		b.Run(name, func(b *testing.B) {
+			b.ReportAllocs()
+			for i := 0; i < b.N; i++ {
+				qry, err := engine.NewRangeQuery(
+					storage, c.expr,
+					time.Unix(int64((numIntervals-c.steps)*10), 0),
+					time.Unix(int64(numIntervals*10), 0), time.Second*10)
+				if err != nil {
+					b.Fatal(err)
+				}
+				res := qry.Exec(context.Background())
+				if res.Err != nil {
+					b.Fatal(res.Err)
+				}
+				qry.Close()
+			}
+		})
+	}
+}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -23,63 +23,6 @@ import (
 	"github.com/prometheus/prometheus/util/testutil"
 )
 
-func BenchmarkHoltWinters4Week5Min(b *testing.B) {
-	input := `
-clear
-load 5m
-	http_requests{path="/foo"}	0+10x8064
-
-eval instant at 4w holt_winters(http_requests[4w], 0.3, 0.3)
-	{path="/foo"} 80640
-`
-	bench := NewBenchmark(b, input)
-	bench.Run()
-}
-
-func BenchmarkHoltWinters1Week5Min(b *testing.B) {
-	input := `
-clear
-load 5m
-	http_requests{path="/foo"}	0+10x2016
-
-eval instant at 1w holt_winters(http_requests[1w], 0.3, 0.3)
-	{path="/foo"} 20160
-`
-	bench := NewBenchmark(b, input)
-	bench.Run()
-}
-
-func BenchmarkHoltWinters1Day1Min(b *testing.B) {
-	input := `
-clear
-load 1m
-	http_requests{path="/foo"}	0+10x1440
-
-eval instant at 1d holt_winters(http_requests[1d], 0.3, 0.3)
-	{path="/foo"} 14400
-`
-	bench := NewBenchmark(b, input)
-	bench.Run()
-}
-
-func BenchmarkChanges1Day1Min(b *testing.B) {
-	input := `
-clear
-load 1m
-	http_requests{path="/foo"}	0+10x1440
-
-eval instant at 1d changes(http_requests[1d])
-	{path="/foo"} 1440
-`
-	bench := NewBenchmark(b, input)
-	bench.Run()
-}
-
 func TestDeriv(t *testing.T) {
 	// https://github.com/prometheus/prometheus/issues/2674#issuecomment-315439393
 	// This requires more precision than the usual test system offers,


@@ -160,7 +160,7 @@ func (t *Test) parseEval(lines []string, i int) (int, *evalCmd, error) {
 	}
 	ts := testStartTime.Add(time.Duration(offset))
 
-	cmd := newEvalCmd(expr, ts, ts, 0)
+	cmd := newEvalCmd(expr, ts)
 	switch mod {
 	case "ordered":
 		cmd.ordered = true
@@ -302,10 +302,8 @@ func (cmd *loadCmd) append(a storage.Appender) error {
 // and expects a specific result.
 type evalCmd struct {
 	expr  string
-	start, end time.Time
-	interval   time.Duration
+	start time.Time
 
-	instant       bool
 	fail, ordered bool
 
 	metrics  map[uint64]labels.Labels
@@ -321,13 +319,10 @@ func (e entry) String() string {
 	return fmt.Sprintf("%d: %s", e.pos, e.vals)
 }
 
-func newEvalCmd(expr string, start, end time.Time, interval time.Duration) *evalCmd {
+func newEvalCmd(expr string, start time.Time) *evalCmd {
 	return &evalCmd{
 		expr:  expr,
 		start: start,
-		end:      end,
-		interval: interval,
-		instant:  start == end && interval == 0,
 
 		metrics:  map[uint64]labels.Labels{},
 		expected: map[uint64]entry{},
@@ -354,37 +349,9 @@ func (ev *evalCmd) expect(pos int, m labels.Labels, vals ...sequenceValue) {
 func (ev *evalCmd) compareResult(result Value) error {
 	switch val := result.(type) {
 	case Matrix:
-		if ev.instant {
-			return fmt.Errorf("received range result on instant evaluation")
-		}
-		seen := map[uint64]bool{}
-		for pos, v := range val {
-			fp := v.Metric.Hash()
-			if _, ok := ev.metrics[fp]; !ok {
-				return fmt.Errorf("unexpected metric %s in result", v.Metric)
-			}
-			exp := ev.expected[fp]
-			if ev.ordered && exp.pos != pos+1 {
-				return fmt.Errorf("expected metric %s with %v at position %d but was at %d", v.Metric, exp.vals, exp.pos, pos+1)
-			}
-			for i, expVal := range exp.vals {
-				if !almostEqual(expVal.value, v.Points[i].V) {
-					return fmt.Errorf("expected %v for %s but got %v", expVal, v.Metric, v.Points)
-				}
-			}
-			seen[fp] = true
-		}
-
-		for fp, expVals := range ev.expected {
-			if !seen[fp] {
-				return fmt.Errorf("expected metric %s with %v not found", ev.metrics[fp], expVals)
-			}
-		}
+		return fmt.Errorf("received range result on instant evaluation")
 
 	case Vector:
-		if !ev.instant {
-			return fmt.Errorf("received instant result on range evaluation")
-		}
 		seen := map[uint64]bool{}
 		for pos, v := range val {
 			fp := v.Metric.Hash()
@@ -464,8 +431,7 @@ func (t *Test) exec(tc testCommand) error {
 		}
 
 	case *evalCmd:
-		qry, _ := ParseExpr(cmd.expr)
-		q := t.queryEngine.newQuery(t.storage, qry, cmd.start, cmd.end, cmd.interval)
+		q, _ := t.queryEngine.NewInstantQuery(t.storage, cmd.expr, cmd.start)
 		res := q.Exec(t.context)
 		if res.Err != nil {
 			if cmd.fail {
@@ -473,6 +439,7 @@ func (t *Test) exec(tc testCommand) error {
 			}
 			return fmt.Errorf("error evaluating query %q: %s", cmd.expr, res.Err)
 		}
+		defer q.Close()
 		if res.Err == nil && cmd.fail {
 			return fmt.Errorf("expected error evaluating query but got none")
 		}
@@ -482,6 +449,37 @@ func (t *Test) exec(tc testCommand) error {
 			return fmt.Errorf("error in %s %s: %s", cmd, cmd.expr, err)
 		}
 
+		// Check query returns same result in range mode,
+		// by checking against the middle step.
+		q, _ = t.queryEngine.NewRangeQuery(t.storage, cmd.expr, cmd.start.Add(-time.Minute), cmd.start.Add(time.Minute), time.Minute)
+		rangeRes := q.Exec(t.context)
+		if rangeRes.Err != nil {
+			return fmt.Errorf("error evaluating query %q in range mode: %s", cmd.expr, rangeRes.Err)
+		}
+		defer q.Close()
+		if cmd.ordered {
+			// Ordering isn't defined for range queries.
+			return nil
+		}
+		mat := rangeRes.Value.(Matrix)
+		vec := make(Vector, 0, len(mat))
+		for _, series := range mat {
+			for _, point := range series.Points {
+				if point.T == timeMilliseconds(cmd.start) {
+					vec = append(vec, Sample{Metric: series.Metric, Point: point})
+					break
+				}
+			}
+		}
+		if _, ok := res.Value.(Scalar); ok {
+			err = cmd.compareResult(Scalar{V: vec[0].Point.V})
+		} else {
+			err = cmd.compareResult(vec)
+		}
+		if err != nil {
+			return fmt.Errorf("error in %s %s range mode: %s", cmd, cmd.expr, err)
+		}
+
 	default:
 		panic("promql.Test.exec: unknown test command type")
 	}


@@ -1,40 +0,0 @@
-// Copyright 2015 The Prometheus Authors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package promql
-
-// RunAsBenchmark runs the test in benchmark mode.
-// This will not count any loads or non eval functions.
-func (t *Test) RunAsBenchmark(b *Benchmark) error {
-	for _, cmd := range t.cmds {
-		switch cmd.(type) {
-		// Only time the "eval" command.
-		case *evalCmd:
-			err := t.exec(cmd)
-			if err != nil {
-				return err
-			}
-		default:
-			if b.iterCount == 0 {
-				b.b.StopTimer()
-				err := t.exec(cmd)
-				if err != nil {
-					return err
-				}
-				b.b.StartTimer()
-			}
-		}
-	}
-	return nil
-}


@@ -22,6 +22,19 @@ eval instant at 50m 2 - SUM(http_requests) BY (job)
 	{job="api-server"} -998
 	{job="app-server"} -2598
 
+eval instant at 50m -http_requests{job="api-server",instance="0",group="production"}
+	{job="api-server",instance="0",group="production"} -100
+
+eval instant at 50m +http_requests{job="api-server",instance="0",group="production"}
+	http_requests{job="api-server",instance="0",group="production"} 100
+
+eval instant at 50m - - - SUM(http_requests) BY (job)
+	{job="api-server"} -1000
+	{job="app-server"} -2600
+
+eval instant at 50m - - - 1
+	-1
+
 eval instant at 50m 1000 / SUM(http_requests) BY (job)
 	{job="api-server"} 1
 	{job="app-server"} 0.38461538461538464


@@ -30,23 +30,30 @@ type BufferedSeriesIterator struct {
 // of the current element and the duration of delta before.
 func NewBuffer(it SeriesIterator, delta int64) *BufferedSeriesIterator {
 	bit := &BufferedSeriesIterator{
-		it:       it,
 		buf:      newSampleRing(delta, 16),
-		lastTime: math.MinInt64,
-		ok:       true,
 	}
-	it.Next()
+	bit.Reset(it)
 
 	return bit
 }
 
+// Reset re-uses the buffer with a new iterator.
+func (b *BufferedSeriesIterator) Reset(it SeriesIterator) {
+	b.it = it
+	b.lastTime = math.MinInt64
+	b.ok = true
+	b.buf.reset()
+	it.Next()
+}
+
 // PeekBack returns the nth previous element of the iterator. If there is none buffered,
 // ok is false.
 func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool) {
 	return b.buf.nthLast(n)
 }
 
-// Buffer returns an iterator over the buffered data.
+// Buffer returns an iterator over the buffered data. Invalidates previously
+// returned iterators.
 func (b *BufferedSeriesIterator) Buffer() SeriesIterator {
 	return b.buf.iterator()
 }
@@ -118,6 +125,8 @@ type sampleRing struct {
 	i int // position of most recent element in ring buffer
 	f int // position of first element in ring buffer
 	l int // number of elements in buffer
+
+	it sampleRingIterator
 }
 
 func newSampleRing(delta int64, sz int) *sampleRing {
@@ -133,8 +142,11 @@ func (r *sampleRing) reset() {
 	r.f = 0
 }
 
+// Returns the current iterator. Invalidates previously returned iterators.
 func (r *sampleRing) iterator() SeriesIterator {
-	return &sampleRingIterator{r: r, i: -1}
+	r.it.r = r
+	r.it.i = -1
+	return &r.it
 }
 
 type sampleRingIterator struct {


@@ -23,7 +23,6 @@ const (
 	ResultSortTime
 	QueryPreparationTime
 	InnerEvalTime
-	ResultAppendTime
 	ExecQueueTime
 	ExecTotalTime
 )
@@ -39,8 +38,6 @@ func (s QueryTiming) String() string {
 		return "Query preparation time"
 	case InnerEvalTime:
 		return "Inner eval time"
-	case ResultAppendTime:
-		return "Result append time"
 	case ExecQueueTime:
 		return "Exec queue wait time"
 	case ExecTotalTime:
@@ -56,7 +53,6 @@ type queryTimings struct {
 	ResultSortTime       float64 `json:"resultSortTime"`
 	QueryPreparationTime float64 `json:"queryPreparationTime"`
 	InnerEvalTime        float64 `json:"innerEvalTime"`
-	ResultAppendTime     float64 `json:"resultAppendTime"`
 	ExecQueueTime        float64 `json:"execQueueTime"`
 	ExecTotalTime        float64 `json:"execTotalTime"`
 }
@@ -81,8 +77,6 @@ func NewQueryStats(tg *TimerGroup) *QueryStats {
 			qt.QueryPreparationTime = timer.Duration()
 		case InnerEvalTime:
 			qt.InnerEvalTime = timer.Duration()
-		case ResultAppendTime:
-			qt.ResultAppendTime = timer.Duration()
 		case ExecQueueTime:
 			qt.ExecQueueTime = timer.Duration()
 		case ExecTotalTime:


@ -105,7 +105,7 @@ func setCORS(w http.ResponseWriter) {
} }
} }
type apiFunc func(r *http.Request) (interface{}, *apiError) type apiFunc func(r *http.Request) (interface{}, *apiError, func())
// API can register a set of endpoints in a router and handle // API can register a set of endpoints in a router and handle
// them using the provided storage and query engine. // them using the provided storage and query engine.
@ -156,13 +156,17 @@ func (api *API) Register(r *route.Router) {
wrap := func(f apiFunc) http.HandlerFunc { wrap := func(f apiFunc) http.HandlerFunc {
hf := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { hf := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
setCORS(w) setCORS(w)
if data, err := f(r); err != nil { data, err, finalizer := f(r)
if err != nil {
respondError(w, err, data) respondError(w, err, data)
} else if data != nil { } else if data != nil {
respond(w, data) respond(w, data)
} else { } else {
w.WriteHeader(http.StatusNoContent) w.WriteHeader(http.StatusNoContent)
} }
if finalizer != nil {
finalizer()
}
}) })
return api.ready(httputil.CompressionHandler{ return api.ready(httputil.CompressionHandler{
Handler: hf, Handler: hf,
@ -200,17 +204,17 @@ type queryData struct {
Stats *stats.QueryStats `json:"stats,omitempty"` Stats *stats.QueryStats `json:"stats,omitempty"`
} }
func (api *API) options(r *http.Request) (interface{}, *apiError) { func (api *API) options(r *http.Request) (interface{}, *apiError, func()) {
return nil, nil return nil, nil, nil
} }
func (api *API) query(r *http.Request) (interface{}, *apiError) { func (api *API) query(r *http.Request) (interface{}, *apiError, func()) {
var ts time.Time var ts time.Time
if t := r.FormValue("time"); t != "" { if t := r.FormValue("time"); t != "" {
var err error var err error
ts, err = parseTime(t) ts, err = parseTime(t)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
} else { } else {
ts = api.now() ts = api.now()
@ -221,7 +225,7 @@ func (api *API) query(r *http.Request) (interface{}, *apiError) {
var cancel context.CancelFunc var cancel context.CancelFunc
timeout, err := parseDuration(to) timeout, err := parseDuration(to)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
ctx, cancel = context.WithTimeout(ctx, timeout) ctx, cancel = context.WithTimeout(ctx, timeout)
@ -230,20 +234,20 @@ func (api *API) query(r *http.Request) (interface{}, *apiError) {
qry, err := api.QueryEngine.NewInstantQuery(api.Queryable, r.FormValue("query"), ts) qry, err := api.QueryEngine.NewInstantQuery(api.Queryable, r.FormValue("query"), ts)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
res := qry.Exec(ctx) res := qry.Exec(ctx)
if res.Err != nil { if res.Err != nil {
switch res.Err.(type) { switch res.Err.(type) {
case promql.ErrQueryCanceled: case promql.ErrQueryCanceled:
return nil, &apiError{errorCanceled, res.Err} return nil, &apiError{errorCanceled, res.Err}, qry.Close
case promql.ErrQueryTimeout: case promql.ErrQueryTimeout:
return nil, &apiError{errorTimeout, res.Err} return nil, &apiError{errorTimeout, res.Err}, qry.Close
case promql.ErrStorage: case promql.ErrStorage:
return nil, &apiError{errorInternal, res.Err} return nil, &apiError{errorInternal, res.Err}, qry.Close
} }
return nil, &apiError{errorExec, res.Err} return nil, &apiError{errorExec, res.Err}, qry.Close
} }
// Optional stats field in response if parameter "stats" is not empty. // Optional stats field in response if parameter "stats" is not empty.
@ -256,38 +260,38 @@ func (api *API) query(r *http.Request) (interface{}, *apiError) {
ResultType: res.Value.Type(), ResultType: res.Value.Type(),
Result: res.Value, Result: res.Value,
Stats: qs, Stats: qs,
}, nil }, nil, qry.Close
} }
func (api *API) queryRange(r *http.Request) (interface{}, *apiError) { func (api *API) queryRange(r *http.Request) (interface{}, *apiError, func()) {
start, err := parseTime(r.FormValue("start")) start, err := parseTime(r.FormValue("start"))
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
end, err := parseTime(r.FormValue("end")) end, err := parseTime(r.FormValue("end"))
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
if end.Before(start) { if end.Before(start) {
err := errors.New("end timestamp must not be before start time") err := errors.New("end timestamp must not be before start time")
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
step, err := parseDuration(r.FormValue("step")) step, err := parseDuration(r.FormValue("step"))
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
if step <= 0 { if step <= 0 {
err := errors.New("zero or negative query resolution step widths are not accepted. Try a positive integer") err := errors.New("zero or negative query resolution step widths are not accepted. Try a positive integer")
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
// For safety, limit the number of returned points per timeseries. // For safety, limit the number of returned points per timeseries.
// This is sufficient for 60s resolution for a week or 1h resolution for a year. // This is sufficient for 60s resolution for a week or 1h resolution for a year.
if end.Sub(start)/step > 11000 { if end.Sub(start)/step > 11000 {
err := errors.New("exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)") err := errors.New("exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)")
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
ctx := r.Context() ctx := r.Context()
@ -295,7 +299,7 @@ func (api *API) queryRange(r *http.Request) (interface{}, *apiError) {
var cancel context.CancelFunc var cancel context.CancelFunc
timeout, err := parseDuration(to) timeout, err := parseDuration(to)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
ctx, cancel = context.WithTimeout(ctx, timeout) ctx, cancel = context.WithTimeout(ctx, timeout)
@ -304,18 +308,18 @@ func (api *API) queryRange(r *http.Request) (interface{}, *apiError) {
qry, err := api.QueryEngine.NewRangeQuery(api.Queryable, r.FormValue("query"), start, end, step) qry, err := api.QueryEngine.NewRangeQuery(api.Queryable, r.FormValue("query"), start, end, step)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
res := qry.Exec(ctx) res := qry.Exec(ctx)
if res.Err != nil { if res.Err != nil {
switch res.Err.(type) { switch res.Err.(type) {
case promql.ErrQueryCanceled: case promql.ErrQueryCanceled:
return nil, &apiError{errorCanceled, res.Err} return nil, &apiError{errorCanceled, res.Err}, qry.Close
case promql.ErrQueryTimeout: case promql.ErrQueryTimeout:
return nil, &apiError{errorTimeout, res.Err} return nil, &apiError{errorTimeout, res.Err}, qry.Close
} }
return nil, &apiError{errorExec, res.Err} return nil, &apiError{errorExec, res.Err}, qry.Close
} }
// Optional stats field in response if parameter "stats" is not empty. // Optional stats field in response if parameter "stats" is not empty.
@ -328,28 +332,28 @@ func (api *API) queryRange(r *http.Request) (interface{}, *apiError) {
ResultType: res.Value.Type(), ResultType: res.Value.Type(),
Result: res.Value, Result: res.Value,
Stats: qs, Stats: qs,
}, nil }, nil, qry.Close
} }
func (api *API) labelValues(r *http.Request) (interface{}, *apiError) { func (api *API) labelValues(r *http.Request) (interface{}, *apiError, func()) {
ctx := r.Context() ctx := r.Context()
name := route.Param(ctx, "name") name := route.Param(ctx, "name")
if !model.LabelNameRE.MatchString(name) { if !model.LabelNameRE.MatchString(name) {
return nil, &apiError{errorBadData, fmt.Errorf("invalid label name: %q", name)} return nil, &apiError{errorBadData, fmt.Errorf("invalid label name: %q", name)}, nil
} }
q, err := api.Queryable.Querier(ctx, math.MinInt64, math.MaxInt64) q, err := api.Queryable.Querier(ctx, math.MinInt64, math.MaxInt64)
if err != nil { if err != nil {
return nil, &apiError{errorExec, err} return nil, &apiError{errorExec, err}, nil
} }
defer q.Close() defer q.Close()
vals, err := q.LabelValues(name) vals, err := q.LabelValues(name)
if err != nil { if err != nil {
return nil, &apiError{errorExec, err} return nil, &apiError{errorExec, err}, nil
} }
return vals, nil return vals, nil, nil
} }
var ( var (
@ -357,10 +361,10 @@ var (
maxTime = time.Unix(math.MaxInt64/1000-62135596801, 999999999) maxTime = time.Unix(math.MaxInt64/1000-62135596801, 999999999)
) )
func (api *API) series(r *http.Request) (interface{}, *apiError) { func (api *API) series(r *http.Request) (interface{}, *apiError, func()) {
r.ParseForm() r.ParseForm()
if len(r.Form["match[]"]) == 0 { if len(r.Form["match[]"]) == 0 {
return nil, &apiError{errorBadData, fmt.Errorf("no match[] parameter provided")} return nil, &apiError{errorBadData, fmt.Errorf("no match[] parameter provided")}, nil
} }
var start time.Time var start time.Time
@ -368,7 +372,7 @@ func (api *API) series(r *http.Request) (interface{}, *apiError) {
var err error var err error
start, err = parseTime(t) start, err = parseTime(t)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
} else { } else {
start = minTime start = minTime
@ -379,7 +383,7 @@ func (api *API) series(r *http.Request) (interface{}, *apiError) {
var err error var err error
end, err = parseTime(t) end, err = parseTime(t)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
} else { } else {
end = maxTime end = maxTime
@ -389,14 +393,14 @@ func (api *API) series(r *http.Request) (interface{}, *apiError) {
for _, s := range r.Form["match[]"] { for _, s := range r.Form["match[]"] {
matchers, err := promql.ParseMetricSelector(s) matchers, err := promql.ParseMetricSelector(s)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
matcherSets = append(matcherSets, matchers) matcherSets = append(matcherSets, matchers)
} }
q, err := api.Queryable.Querier(r.Context(), timestamp.FromTime(start), timestamp.FromTime(end)) q, err := api.Queryable.Querier(r.Context(), timestamp.FromTime(start), timestamp.FromTime(end))
if err != nil { if err != nil {
return nil, &apiError{errorExec, err} return nil, &apiError{errorExec, err}, nil
} }
defer q.Close() defer q.Close()
@ -404,7 +408,7 @@ func (api *API) series(r *http.Request) (interface{}, *apiError) {
for _, mset := range matcherSets { for _, mset := range matcherSets {
s, err := q.Select(nil, mset...) s, err := q.Select(nil, mset...)
if err != nil { if err != nil {
return nil, &apiError{errorExec, err} return nil, &apiError{errorExec, err}, nil
} }
sets = append(sets, s) sets = append(sets, s)
} }
@ -415,14 +419,14 @@ func (api *API) series(r *http.Request) (interface{}, *apiError) {
metrics = append(metrics, set.At().Labels()) metrics = append(metrics, set.At().Labels())
} }
if set.Err() != nil { if set.Err() != nil {
return nil, &apiError{errorExec, set.Err()} return nil, &apiError{errorExec, set.Err()}, nil
} }
return metrics, nil return metrics, nil, nil
} }
func (api *API) dropSeries(r *http.Request) (interface{}, *apiError) { func (api *API) dropSeries(r *http.Request) (interface{}, *apiError, func()) {
return nil, &apiError{errorInternal, fmt.Errorf("not implemented")} return nil, &apiError{errorInternal, fmt.Errorf("not implemented")}, nil
} }
// Target has the information for one target. // Target has the information for one target.
@ -451,7 +455,7 @@ type TargetDiscovery struct {
DroppedTargets []*DroppedTarget `json:"droppedTargets"` DroppedTargets []*DroppedTarget `json:"droppedTargets"`
} }
func (api *API) targets(r *http.Request) (interface{}, *apiError) { func (api *API) targets(r *http.Request) (interface{}, *apiError, func()) {
tActive := api.targetRetriever.TargetsActive() tActive := api.targetRetriever.TargetsActive()
tDropped := api.targetRetriever.TargetsDropped() tDropped := api.targetRetriever.TargetsDropped()
res := &TargetDiscovery{ActiveTargets: make([]*Target, len(tActive)), DroppedTargets: make([]*DroppedTarget, len(tDropped))} res := &TargetDiscovery{ActiveTargets: make([]*Target, len(tActive)), DroppedTargets: make([]*DroppedTarget, len(tDropped))}
@ -479,7 +483,7 @@ func (api *API) targets(r *http.Request) (interface{}, *apiError) {
DiscoveredLabels: t.DiscoveredLabels().Map(), DiscoveredLabels: t.DiscoveredLabels().Map(),
} }
} }
return res, nil return res, nil, nil
} }
// AlertmanagerDiscovery has all the active Alertmanagers. // AlertmanagerDiscovery has all the active Alertmanagers.
@ -493,7 +497,7 @@ type AlertmanagerTarget struct {
URL string `json:"url"` URL string `json:"url"`
} }
func (api *API) alertmanagers(r *http.Request) (interface{}, *apiError) { func (api *API) alertmanagers(r *http.Request) (interface{}, *apiError, func()) {
urls := api.alertmanagerRetriever.Alertmanagers() urls := api.alertmanagerRetriever.Alertmanagers()
droppedURLS := api.alertmanagerRetriever.DroppedAlertmanagers() droppedURLS := api.alertmanagerRetriever.DroppedAlertmanagers()
ams := &AlertmanagerDiscovery{ActiveAlertmanagers: make([]*AlertmanagerTarget, len(urls)), DroppedAlertmanagers: make([]*AlertmanagerTarget, len(droppedURLS))} ams := &AlertmanagerDiscovery{ActiveAlertmanagers: make([]*AlertmanagerTarget, len(urls)), DroppedAlertmanagers: make([]*AlertmanagerTarget, len(droppedURLS))}
@ -503,22 +507,22 @@ func (api *API) alertmanagers(r *http.Request) (interface{}, *apiError) {
for i, url := range droppedURLS { for i, url := range droppedURLS {
ams.DroppedAlertmanagers[i] = &AlertmanagerTarget{URL: url.String()} ams.DroppedAlertmanagers[i] = &AlertmanagerTarget{URL: url.String()}
} }
return ams, nil return ams, nil, nil
} }
type prometheusConfig struct { type prometheusConfig struct {
YAML string `json:"yaml"` YAML string `json:"yaml"`
} }
func (api *API) serveConfig(r *http.Request) (interface{}, *apiError) { func (api *API) serveConfig(r *http.Request) (interface{}, *apiError, func()) {
cfg := &prometheusConfig{ cfg := &prometheusConfig{
YAML: api.config().String(), YAML: api.config().String(),
} }
return cfg, nil return cfg, nil, nil
} }
func (api *API) serveFlags(r *http.Request) (interface{}, *apiError) { func (api *API) serveFlags(r *http.Request) (interface{}, *apiError, func()) {
return api.flagsMap, nil return api.flagsMap, nil, nil
} }
func (api *API) remoteRead(w http.ResponseWriter, r *http.Request) { func (api *API) remoteRead(w http.ResponseWriter, r *http.Request) {
@ -598,18 +602,18 @@ func (api *API) remoteRead(w http.ResponseWriter, r *http.Request) {
} }
} }
func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError) { func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError, func()) {
if !api.enableAdmin { if !api.enableAdmin {
return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")} return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")}, nil
} }
db := api.db() db := api.db()
if db == nil { if db == nil {
return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")} return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")}, nil
} }
r.ParseForm() r.ParseForm()
if len(r.Form["match[]"]) == 0 { if len(r.Form["match[]"]) == 0 {
return nil, &apiError{errorBadData, fmt.Errorf("no match[] parameter provided")} return nil, &apiError{errorBadData, fmt.Errorf("no match[] parameter provided")}, nil
} }
var start time.Time var start time.Time
@ -617,7 +621,7 @@ func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError) {
var err error var err error
start, err = parseTime(t) start, err = parseTime(t)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
} else { } else {
start = minTime start = minTime
@ -628,7 +632,7 @@ func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError) {
var err error var err error
end, err = parseTime(t) end, err = parseTime(t)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
} else { } else {
end = maxTime end = maxTime
@ -637,7 +641,7 @@ func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError) {
for _, s := range r.Form["match[]"] { for _, s := range r.Form["match[]"] {
matchers, err := promql.ParseMetricSelector(s) matchers, err := promql.ParseMetricSelector(s)
if err != nil { if err != nil {
return nil, &apiError{errorBadData, err} return nil, &apiError{errorBadData, err}, nil
} }
var selector tsdbLabels.Selector var selector tsdbLabels.Selector
@ -646,22 +650,22 @@ func (api *API) deleteSeries(r *http.Request) (interface{}, *apiError) {
} }
if err := db.Delete(timestamp.FromTime(start), timestamp.FromTime(end), selector...); err != nil { if err := db.Delete(timestamp.FromTime(start), timestamp.FromTime(end), selector...); err != nil {
return nil, &apiError{errorInternal, err} return nil, &apiError{errorInternal, err}, nil
} }
} }
return nil, nil return nil, nil, nil
} }
func (api *API) snapshot(r *http.Request) (interface{}, *apiError) { func (api *API) snapshot(r *http.Request) (interface{}, *apiError, func()) {
if !api.enableAdmin { if !api.enableAdmin {
return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")} return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")}, nil
} }
skipHead, _ := strconv.ParseBool(r.FormValue("skip_head")) skipHead, _ := strconv.ParseBool(r.FormValue("skip_head"))
db := api.db() db := api.db()
if db == nil { if db == nil {
return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")} return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")}, nil
} }
var ( var (
@ -672,31 +676,31 @@ func (api *API) snapshot(r *http.Request) (interface{}, *apiError) {
dir = filepath.Join(snapdir, name) dir = filepath.Join(snapdir, name)
) )
if err := os.MkdirAll(dir, 0777); err != nil { if err := os.MkdirAll(dir, 0777); err != nil {
return nil, &apiError{errorInternal, fmt.Errorf("create snapshot directory: %s", err)} return nil, &apiError{errorInternal, fmt.Errorf("create snapshot directory: %s", err)}, nil
} }
if err := db.Snapshot(dir, !skipHead); err != nil { if err := db.Snapshot(dir, !skipHead); err != nil {
return nil, &apiError{errorInternal, fmt.Errorf("create snapshot: %s", err)} return nil, &apiError{errorInternal, fmt.Errorf("create snapshot: %s", err)}, nil
} }
return struct { return struct {
Name string `json:"name"` Name string `json:"name"`
}{name}, nil }{name}, nil, nil
} }
func (api *API) cleanTombstones(r *http.Request) (interface{}, *apiError) { func (api *API) cleanTombstones(r *http.Request) (interface{}, *apiError, func()) {
if !api.enableAdmin { if !api.enableAdmin {
return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")} return nil, &apiError{errorUnavailable, errors.New("Admin APIs disabled")}, nil
} }
db := api.db() db := api.db()
if db == nil { if db == nil {
return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")} return nil, &apiError{errorUnavailable, errors.New("TSDB not ready")}, nil
} }
if err := db.CleanTombstones(); err != nil { if err := db.CleanTombstones(); err != nil {
return nil, &apiError{errorInternal, err} return nil, &apiError{errorInternal, err}, nil
} }
return nil, nil return nil, nil, nil
} }
func convertMatcher(m *labels.Matcher) tsdbLabels.Matcher { func convertMatcher(m *labels.Matcher) tsdbLabels.Matcher {


@@ -530,7 +530,7 @@ func TestEndpoints(t *testing.T) {
 			if err != nil {
 				t.Fatal(err)
 			}
-			resp, apiErr := test.endpoint(req.WithContext(ctx))
+			resp, apiErr, _ := test.endpoint(req.WithContext(ctx))
 			if apiErr != nil {
 				if test.errType == errorNone {
 					t.Fatalf("Unexpected error: %s", apiErr)