// prometheus/web/api/v1/api_test.go
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"math"
"net/http"
"net/http/httptest"
"net/url"
"os"
"reflect"
"runtime"
"sort"
"strings"
"testing"
"time"
"github.com/go-kit/log"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
config_util "github.com/prometheus/common/config"
"github.com/prometheus/common/model"
"github.com/prometheus/common/promlog"
"github.com/prometheus/common/route"
"github.com/stretchr/testify/require"
"github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/pkg/exemplar"
"github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/pkg/textparse"
"github.com/prometheus/prometheus/pkg/timestamp"
"github.com/prometheus/prometheus/prompb"
"github.com/prometheus/prometheus/promql"
"github.com/prometheus/prometheus/promql/parser"
"github.com/prometheus/prometheus/rules"
"github.com/prometheus/prometheus/scrape"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/storage/remote"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/util/teststorage"
)
// testMetaStore satisfies the scrape.MetricMetadataStore interface.
// It is used to inject specific metadata as part of a test case.
type testMetaStore struct {
Metadata []scrape.MetricMetadata
}
func (s *testMetaStore) ListMetadata() []scrape.MetricMetadata {
return s.Metadata
}
func (s *testMetaStore) GetMetadata(metric string) (scrape.MetricMetadata, bool) {
for _, m := range s.Metadata {
if metric == m.Metric {
return m, true
}
}
return scrape.MetricMetadata{}, false
}
func (s *testMetaStore) SizeMetadata() int { return 0 }
func (s *testMetaStore) LengthMetadata() int { return 0 }
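// Compile-time guard that testMetaStore satisfies scrape.MetricMetadataStore.
var _ scrape.MetricMetadataStore = &testMetaStore{}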
// testTargetRetriever represents a list of targets to scrape.
// It is used to represent targets as part of test cases.
type testTargetRetriever struct {
activeTargets map[string][]*scrape.Target
droppedTargets map[string][]*scrape.Target
}
type testTargetParams struct {
Identifier string
Labels []labels.Label
DiscoveredLabels []labels.Label
Params url.Values
Reports []*testReport
Active bool
}
type testReport struct {
Start time.Time
Duration time.Duration
Error error
}
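// newTestTargetRetriever builds a testTargetRetriever from the given target
// parameters, replaying any scrape reports onto each target and filing it
// under the active or dropped bucket.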
func newTestTargetRetriever(targetsInfo []*testTargetParams) *testTargetRetriever {
activeTargets := make(map[string][]*scrape.Target)
droppedTargets := make(map[string][]*scrape.Target)
for _, t := range targetsInfo {
nt := scrape.NewTarget(t.Labels, t.DiscoveredLabels, t.Params)
for _, r := range t.Reports {
nt.Report(r.Start, r.Duration, r.Error)
}
if t.Active {
activeTargets[t.Identifier] = []*scrape.Target{nt}
} else {
droppedTargets[t.Identifier] = []*scrape.Target{nt}
}
}
return &testTargetRetriever{
activeTargets: activeTargets,
droppedTargets: droppedTargets,
}
}
var scrapeStart = time.Now().Add(-11 * time.Second)
func (t testTargetRetriever) TargetsActive() map[string][]*scrape.Target {
return t.activeTargets
}
func (t testTargetRetriever) TargetsDropped() map[string][]*scrape.Target {
return t.droppedTargets
}
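// SetMetadataStoreForTargets injects the given metadata store into every
// active target registered under the identifier.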
func (t *testTargetRetriever) SetMetadataStoreForTargets(identifier string, metadata scrape.MetricMetadataStore) error {
targets, ok := t.activeTargets[identifier]
if !ok {
return errors.New("targets not found")
}
for _, at := range targets {
at.SetMetadataStore(metadata)
}
return nil
}
func (t *testTargetRetriever) ResetMetadataStore() {
for _, at := range t.activeTargets {
for _, tt := range at {
tt.SetMetadataStore(&testMetaStore{})
}
}
}
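// toFactory adapts the retriever to the factory signature the API expects.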
func (t *testTargetRetriever) toFactory() func(context.Context) TargetRetriever {
return func(context.Context) TargetRetriever { return t }
}
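// Compile-time guard that the mock satisfies the API's TargetRetriever interface.
var _ TargetRetriever = &testTargetRetriever{}
// testAlertmanagerRetriever implements AlertmanagerRetriever with one active
// and one dropped Alertmanager, both pointing at fixed URLs.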
type testAlertmanagerRetriever struct{}
func (t testAlertmanagerRetriever) Alertmanagers() []*url.URL {
return []*url.URL{
{
Scheme: "http",
Host: "alertmanager.example.com:8080",
Path: "/api/v1/alerts",
},
}
}
func (t testAlertmanagerRetriever) DroppedAlertmanagers() []*url.URL {
return []*url.URL{
{
Scheme: "http",
Host: "dropped.alertmanager.example.com:8080",
Path: "/api/v1/alerts",
},
}
}
func (t testAlertmanagerRetriever) toFactory() func(context.Context) AlertmanagerRetriever {
return func(context.Context) AlertmanagerRetriever { return t }
}
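// rulesRetrieverMock implements RulesRetriever, returning two fixed alerting
// rules and one recording rule for the rules and alerts endpoint tests.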
type rulesRetrieverMock struct {
testing *testing.T
}
func (m rulesRetrieverMock) AlertingRules() []*rules.AlertingRule {
expr1, err := parser.ParseExpr(`absent(test_metric3) != 1`)
if err != nil {
m.testing.Fatalf("unable to parse alert expression: %s", err)
}
expr2, err := parser.ParseExpr(`up == 1`)
if err != nil {
m.testing.Fatalf("Unable to parse alert expression: %s", err)
}
rule1 := rules.NewAlertingRule(
"test_metric3",
expr1,
time.Second,
labels.Labels{},
labels.Labels{},
labels.Labels{},
"",
true,
log.NewNopLogger(),
)
rule2 := rules.NewAlertingRule(
"test_metric4",
expr2,
time.Second,
labels.Labels{},
labels.Labels{},
labels.Labels{},
"",
true,
log.NewNopLogger(),
)
return []*rules.AlertingRule{rule1, rule2}
}
func (m rulesRetrieverMock) RuleGroups() []*rules.Group {
arules := m.AlertingRules()
storage := teststorage.New(m.testing)
defer storage.Close()
engineOpts := promql.EngineOpts{
Logger: nil,
Reg: nil,
MaxSamples: 10,
Timeout: 100 * time.Second,
}
engine := promql.NewEngine(engineOpts)
opts := &rules.ManagerOptions{
QueryFunc: rules.EngineQueryFunc(engine, storage),
Appendable: storage,
Context: context.Background(),
Logger: log.NewNopLogger(),
}
var r []rules.Rule
for _, alertrule := range arules {
r = append(r, alertrule)
}
recordingExpr, err := parser.ParseExpr(`vector(1)`)
if err != nil {
m.testing.Fatalf("unable to parse alert expression: %s", err)
}
recordingRule := rules.NewRecordingRule("recording-rule-1", recordingExpr, labels.Labels{})
r = append(r, recordingRule)
group := rules.NewGroup(rules.GroupOptions{
Name: "grp",
File: "/path/to/file",
Interval: time.Second,
Rules: r,
ShouldRestore: false,
Opts: opts,
})
return []*rules.Group{group}
}
func (m rulesRetrieverMock) toFactory() func(context.Context) RulesRetriever {
return func(context.Context) RulesRetriever { return m }
}
var samplePrometheusCfg = config.Config{
GlobalConfig: config.GlobalConfig{},
AlertingConfig: config.AlertingConfig{},
RuleFiles: []string{},
ScrapeConfigs: []*config.ScrapeConfig{},
RemoteWriteConfigs: []*config.RemoteWriteConfig{},
RemoteReadConfigs: []*config.RemoteReadConfig{},
}
var sampleFlagMap = map[string]string{
"flag1": "value1",
"flag2": "value2",
}
func TestEndpoints(t *testing.T) {
suite, err := promql.NewTest(t, `
load 1m
test_metric1{foo="bar"} 0+100x100
test_metric1{foo="boo"} 1+0x100
test_metric2{foo="boo"} 1+0x100
test_metric3{foo="bar", dup="1"} 1+0x100
test_metric3{foo="boo", dup="1"} 1+0x100
test_metric4{foo="bar", dup="1"} 1+0x100
test_metric4{foo="boo", dup="1"} 1+0x100
test_metric4{foo="boo"} 1+0x100
`)
require.NoError(t, err)
defer suite.Close()
require.NoError(t, suite.Run())
start := time.Unix(0, 0)
exemplars := []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("__name__", "test_metric3", "foo", "boo", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "abc"),
Value: 10,
Ts: timestamp.FromTime(start.Add(2 * time.Second)),
},
},
},
{
SeriesLabels: labels.FromStrings("__name__", "test_metric4", "foo", "bar", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "lul"),
Value: 10,
Ts: timestamp.FromTime(start.Add(4 * time.Second)),
},
},
},
{
SeriesLabels: labels.FromStrings("__name__", "test_metric3", "foo", "boo", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "abc2"),
Value: 10,
Ts: timestamp.FromTime(start.Add(4053 * time.Millisecond)),
},
},
},
{
SeriesLabels: labels.FromStrings("__name__", "test_metric4", "foo", "bar", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "lul2"),
Value: 10,
Ts: timestamp.FromTime(start.Add(4153 * time.Millisecond)),
},
},
},
}
for _, ed := range exemplars {
_, err := suite.ExemplarStorage().AppendExemplar(0, ed.SeriesLabels, ed.Exemplars[0])
require.NoError(t, err, "failed to add exemplar: %+v", ed.Exemplars[0])
}
now := time.Now()
t.Run("local", func(t *testing.T) {
var algr rulesRetrieverMock
algr.testing = t
algr.AlertingRules()
algr.RuleGroups()
testTargetRetriever := setupTestTargetRetriever(t)
api := &API{
Queryable: suite.Storage(),
QueryEngine: suite.QueryEngine(),
ExemplarQueryable: suite.ExemplarQueryable(),
targetRetriever: testTargetRetriever.toFactory(),
alertmanagerRetriever: testAlertmanagerRetriever{}.toFactory(),
flagsMap: sampleFlagMap,
now: func() time.Time { return now },
config: func() config.Config { return samplePrometheusCfg },
ready: func(f http.HandlerFunc) http.HandlerFunc { return f },
rulesRetriever: algr.toFactory(),
}
testEndpoints(t, api, testTargetRetriever, suite.ExemplarStorage(), true)
})
// Run all the API tests against an API that is wired to forward queries via
// the remote read client to a test server, which in turn answers them from
// the test suite's storage.
t.Run("remote", func(t *testing.T) {
server := setupRemote(suite.Storage())
defer server.Close()
u, err := url.Parse(server.URL)
require.NoError(t, err)
al := promlog.AllowedLevel{}
require.NoError(t, al.Set("debug"))
af := promlog.AllowedFormat{}
require.NoError(t, af.Set("logfmt"))
promlogConfig := promlog.Config{
Level: &al,
Format: &af,
}
dbDir, err := ioutil.TempDir("", "tsdb-api-ready")
require.NoError(t, err)
defer os.RemoveAll(dbDir)
remote := remote.NewStorage(promlog.New(&promlogConfig), prometheus.DefaultRegisterer, func() (int64, error) {
return 0, nil
}, dbDir, 1*time.Second, nil)
err = remote.ApplyConfig(&config.Config{
RemoteReadConfigs: []*config.RemoteReadConfig{
{
URL: &config_util.URL{URL: u},
RemoteTimeout: model.Duration(1 * time.Second),
ReadRecent: true,
},
},
})
require.NoError(t, err)
var algr rulesRetrieverMock
algr.testing = t
algr.AlertingRules()
algr.RuleGroups()
testTargetRetriever := setupTestTargetRetriever(t)
api := &API{
Queryable: remote,
QueryEngine: suite.QueryEngine(),
ExemplarQueryable: suite.ExemplarQueryable(),
targetRetriever: testTargetRetriever.toFactory(),
alertmanagerRetriever: testAlertmanagerRetriever{}.toFactory(),
flagsMap: sampleFlagMap,
now: func() time.Time { return now },
config: func() config.Config { return samplePrometheusCfg },
ready: func(f http.HandlerFunc) http.HandlerFunc { return f },
rulesRetriever: algr.toFactory(),
}
testEndpoints(t, api, testTargetRetriever, suite.ExemplarStorage(), false)
})
}
func TestLabelNames(t *testing.T) {
// TestEndpoints doesn't have enough label names to test the api.labelNames
// endpoint properly. Hence we test it separately.
suite, err := promql.NewTest(t, `
load 1m
test_metric1{foo1="bar", baz="abc"} 0+100x100
test_metric1{foo2="boo"} 1+0x100
test_metric2{foo="boo"} 1+0x100
test_metric2{foo="boo", xyz="qwerty"} 1+0x100
test_metric2{foo="baz", abc="qwerty"} 1+0x100
`)
require.NoError(t, err)
defer suite.Close()
require.NoError(t, suite.Run())
api := &API{
Queryable: suite.Storage(),
}
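// request builds a GET or POST label names request carrying the given series
// matchers as match[] query parameters.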
request := func(method string, matchers ...string) (*http.Request, error) {
u, err := url.Parse("http://example.com")
require.NoError(t, err)
q := u.Query()
for _, matcher := range matchers {
q.Add("match[]", matcher)
}
u.RawQuery = q.Encode()
r, err := http.NewRequest(method, u.String(), nil)
require.NoError(t, err)
if method == http.MethodPost {
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
}
return r, err
}
for _, tc := range []struct {
name string
matchers []string
expected []string
}{
{
name: "no matchers",
expected: []string{"__name__", "abc", "baz", "foo", "foo1", "foo2", "xyz"},
},
{
name: "non empty label matcher",
matchers: []string{`{foo=~".+"}`},
expected: []string{"__name__", "abc", "foo", "xyz"},
},
{
name: "exact label matcher",
matchers: []string{`{foo="boo"}`},
expected: []string{"__name__", "foo", "xyz"},
},
{
name: "two matchers",
matchers: []string{`{foo="boo"}`, `{foo="baz"}`},
expected: []string{"__name__", "abc", "foo", "xyz"},
},
} {
t.Run(tc.name, func(t *testing.T) {
for _, method := range []string{http.MethodGet, http.MethodPost} {
ctx := context.Background()
req, err := request(method, tc.matchers...)
require.NoError(t, err)
res := api.labelNames(req.WithContext(ctx))
assertAPIError(t, res.err, "")
assertAPIResponse(t, res.data, tc.expected)
}
})
}
}
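// setupTestTargetRetriever returns a retriever preloaded with three fixture
// targets: a healthy active target, an active target whose last scrape
// failed, and a dropped target.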
func setupTestTargetRetriever(t *testing.T) *testTargetRetriever {
t.Helper()
targets := []*testTargetParams{
{
Identifier: "test",
Labels: labels.FromMap(map[string]string{
model.SchemeLabel: "http",
model.AddressLabel: "example.com:8080",
model.MetricsPathLabel: "/metrics",
model.JobLabel: "test",
model.ScrapeIntervalLabel: "15s",
model.ScrapeTimeoutLabel: "5s",
}),
DiscoveredLabels: nil,
Params: url.Values{},
Reports: []*testReport{{scrapeStart, 70 * time.Millisecond, nil}},
Active: true,
},
{
Identifier: "blackbox",
Labels: labels.FromMap(map[string]string{
model.SchemeLabel: "http",
model.AddressLabel: "localhost:9115",
model.MetricsPathLabel: "/probe",
model.JobLabel: "blackbox",
model.ScrapeIntervalLabel: "20s",
model.ScrapeTimeoutLabel: "10s",
}),
DiscoveredLabels: nil,
Params: url.Values{"target": []string{"example.com"}},
Reports: []*testReport{{scrapeStart, 100 * time.Millisecond, errors.New("failed")}},
Active: true,
},
{
Identifier: "blackbox",
Labels: nil,
DiscoveredLabels: labels.FromMap(map[string]string{
model.SchemeLabel: "http",
model.AddressLabel: "http://dropped.example.com:9115",
model.MetricsPathLabel: "/probe",
model.JobLabel: "blackbox",
model.ScrapeIntervalLabel: "30s",
model.ScrapeTimeoutLabel: "15s",
}),
Params: url.Values{},
Active: false,
},
}
return newTestTargetRetriever(targets)
}
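// setupRemote starts an HTTP test server that serves remote read requests
// straight from the given storage.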
func setupRemote(s storage.Storage) *httptest.Server {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
req, err := remote.DecodeReadRequest(r)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
resp := prompb.ReadResponse{
Results: make([]*prompb.QueryResult, len(req.Queries)),
}
for i, query := range req.Queries {
matchers, err := remote.FromLabelMatchers(query.Matchers)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
var hints *storage.SelectHints
if query.Hints != nil {
hints = &storage.SelectHints{
Start: query.Hints.StartMs,
End: query.Hints.EndMs,
Step: query.Hints.StepMs,
Func: query.Hints.Func,
}
}
querier, err := s.Querier(r.Context(), query.StartTimestampMs, query.EndTimestampMs)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer querier.Close()
set := querier.Select(false, hints, matchers...)
resp.Results[i], _, err = remote.ToQueryResult(set, 1e6)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
if err := remote.EncodeReadResponse(&resp, w); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
})
return httptest.NewServer(handler)
}
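// testEndpoints runs the shared table of API test cases against the given
// API. testLabelAPI controls whether the label name and value cases are
// included; they are skipped for the remote read setup.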
func testEndpoints(t *testing.T, api *API, tr *testTargetRetriever, es storage.ExemplarStorage, testLabelAPI bool) {
start := time.Unix(0, 0)
type targetMetadata struct {
identifier string
metadata []scrape.MetricMetadata
}
type test struct {
endpoint apiFunc
params map[string]string
query url.Values
response interface{}
responseLen int
errType errorType
sorter func(interface{})
metadata []targetMetadata
exemplars []exemplar.QueryResult
}
var tests = []test{
{
endpoint: api.query,
query: url.Values{
"query": []string{"2"},
"time": []string{"123.4"},
},
response: &queryData{
ResultType: parser.ValueTypeScalar,
Result: promql.Scalar{
V: 2,
T: timestamp.FromTime(start.Add(123*time.Second + 400*time.Millisecond)),
},
},
},
{
endpoint: api.query,
query: url.Values{
"query": []string{"0.333"},
"time": []string{"1970-01-01T00:02:03Z"},
},
response: &queryData{
ResultType: parser.ValueTypeScalar,
Result: promql.Scalar{
V: 0.333,
T: timestamp.FromTime(start.Add(123 * time.Second)),
},
},
},
{
endpoint: api.query,
query: url.Values{
"query": []string{"0.333"},
"time": []string{"1970-01-01T01:02:03+01:00"},
},
response: &queryData{
ResultType: parser.ValueTypeScalar,
Result: promql.Scalar{
V: 0.333,
T: timestamp.FromTime(start.Add(123 * time.Second)),
},
},
},
{
endpoint: api.query,
query: url.Values{
"query": []string{"0.333"},
},
response: &queryData{
ResultType: parser.ValueTypeScalar,
Result: promql.Scalar{
V: 0.333,
T: timestamp.FromTime(api.now()),
},
},
},
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"0"},
"end": []string{"2"},
"step": []string{"1"},
},
response: &queryData{
ResultType: parser.ValueTypeMatrix,
Result: promql.Matrix{
promql.Series{
Points: []promql.Point{
{V: 0, T: timestamp.FromTime(start)},
{V: 1, T: timestamp.FromTime(start.Add(1 * time.Second))},
{V: 2, T: timestamp.FromTime(start.Add(2 * time.Second))},
},
Metric: nil,
},
},
},
},
// Missing query params in range queries.
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"end": []string{"2"},
"step": []string{"1"},
},
errType: errorBadData,
},
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"0"},
"step": []string{"1"},
},
errType: errorBadData,
},
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"0"},
"end": []string{"2"},
},
errType: errorBadData,
},
// Bad query expression.
{
endpoint: api.query,
query: url.Values{
"query": []string{"invalid][query"},
"time": []string{"1970-01-01T01:02:03+01:00"},
},
errType: errorBadData,
},
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"invalid][query"},
"start": []string{"0"},
"end": []string{"100"},
"step": []string{"1"},
},
errType: errorBadData,
},
// Invalid step.
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"1"},
"end": []string{"2"},
"step": []string{"0"},
},
errType: errorBadData,
},
// Start after end.
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"2"},
"end": []string{"1"},
"step": []string{"1"},
},
errType: errorBadData,
},
// Start overflows int64 internally.
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"time()"},
"start": []string{"148966367200.372"},
"end": []string{"1489667272.372"},
"step": []string{"1"},
},
errType: errorBadData,
},
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric2", "foo", "boo"),
},
},
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`{foo=""}`},
},
errType: errorBadData,
},
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric1{foo=~".+o"}`},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric1", "foo", "boo"),
},
},
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric1{foo=~".+o$"}`, `test_metric1{foo=~".+o"}`},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric1", "foo", "boo"),
},
},
// Try to overlap the selected series sets as much as possible to test that result de-duplication works well.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric4{foo=~".+o$"}`, `test_metric4{dup=~"^1"}`},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric4", "dup", "1", "foo", "bar"),
labels.FromStrings("__name__", "test_metric4", "dup", "1", "foo", "boo"),
labels.FromStrings("__name__", "test_metric4", "foo", "boo"),
},
},
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric1{foo=~".+o"}`, `none`},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric1", "foo", "boo"),
},
},
// Start and end before series starts.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"-2"},
"end": []string{"-1"},
},
response: []labels.Labels{},
},
// Start and end after series ends.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"100000"},
"end": []string{"100001"},
},
response: []labels.Labels{},
},
// Start before series starts, end after series ends.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"-1"},
"end": []string{"100000"},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric2", "foo", "boo"),
},
},
// Start and end within series.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"1"},
"end": []string{"100"},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric2", "foo", "boo"),
},
},
// Start within series, end after.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"1"},
"end": []string{"100000"},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric2", "foo", "boo"),
},
},
// Start before series, end within series.
{
endpoint: api.series,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"-1"},
"end": []string{"1"},
},
response: []labels.Labels{
labels.FromStrings("__name__", "test_metric2", "foo", "boo"),
},
},
// Missing match[] query params in series requests.
{
endpoint: api.series,
errType: errorBadData,
},
{
endpoint: api.dropSeries,
errType: errorInternal,
},
{
endpoint: api.targets,
response: &TargetDiscovery{
ActiveTargets: []*Target{
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "blackbox",
},
ScrapePool: "blackbox",
ScrapeURL: "http://localhost:9115/probe?target=example.com",
GlobalURL: "http://localhost:9115/probe?target=example.com",
Health: "down",
LastError: "failed: missing port in address",
LastScrape: scrapeStart,
LastScrapeDuration: 0.1,
ScrapeInterval: "20s",
ScrapeTimeout: "10s",
},
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "test",
},
ScrapePool: "test",
ScrapeURL: "http://example.com:8080/metrics",
GlobalURL: "http://example.com:8080/metrics",
Health: "up",
LastError: "",
LastScrape: scrapeStart,
LastScrapeDuration: 0.07,
ScrapeInterval: "15s",
ScrapeTimeout: "5s",
},
},
DroppedTargets: []*DroppedTarget{
{
DiscoveredLabels: map[string]string{
"__address__": "http://dropped.example.com:9115",
"__metrics_path__": "/probe",
"__scheme__": "http",
"job": "blackbox",
"__scrape_interval__": "30s",
"__scrape_timeout__": "15s",
},
},
},
},
},
{
endpoint: api.targets,
query: url.Values{
"state": []string{"any"},
},
response: &TargetDiscovery{
ActiveTargets: []*Target{
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "blackbox",
},
ScrapePool: "blackbox",
ScrapeURL: "http://localhost:9115/probe?target=example.com",
GlobalURL: "http://localhost:9115/probe?target=example.com",
Health: "down",
LastError: "failed: missing port in address",
LastScrape: scrapeStart,
LastScrapeDuration: 0.1,
ScrapeInterval: "20s",
ScrapeTimeout: "10s",
},
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "test",
},
ScrapePool: "test",
ScrapeURL: "http://example.com:8080/metrics",
GlobalURL: "http://example.com:8080/metrics",
Health: "up",
LastError: "",
LastScrape: scrapeStart,
LastScrapeDuration: 0.07,
ScrapeInterval: "15s",
ScrapeTimeout: "5s",
},
},
DroppedTargets: []*DroppedTarget{
{
DiscoveredLabels: map[string]string{
"__address__": "http://dropped.example.com:9115",
"__metrics_path__": "/probe",
"__scheme__": "http",
"job": "blackbox",
"__scrape_interval__": "30s",
"__scrape_timeout__": "15s",
},
},
},
},
},
{
endpoint: api.targets,
query: url.Values{
"state": []string{"active"},
},
response: &TargetDiscovery{
ActiveTargets: []*Target{
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "blackbox",
},
ScrapePool: "blackbox",
ScrapeURL: "http://localhost:9115/probe?target=example.com",
GlobalURL: "http://localhost:9115/probe?target=example.com",
Health: "down",
LastError: "failed: missing port in address",
LastScrape: scrapeStart,
LastScrapeDuration: 0.1,
ScrapeInterval: "20s",
ScrapeTimeout: "10s",
},
{
DiscoveredLabels: map[string]string{},
Labels: map[string]string{
"job": "test",
},
ScrapePool: "test",
ScrapeURL: "http://example.com:8080/metrics",
GlobalURL: "http://example.com:8080/metrics",
Health: "up",
LastError: "",
LastScrape: scrapeStart,
LastScrapeDuration: 0.07,
ScrapeInterval: "15s",
ScrapeTimeout: "5s",
},
},
DroppedTargets: []*DroppedTarget{},
},
},
{
endpoint: api.targets,
query: url.Values{
"state": []string{"Dropped"},
},
response: &TargetDiscovery{
ActiveTargets: []*Target{},
DroppedTargets: []*DroppedTarget{
{
DiscoveredLabels: map[string]string{
"__address__": "http://dropped.example.com:9115",
"__metrics_path__": "/probe",
"__scheme__": "http",
"job": "blackbox",
"__scrape_interval__": "30s",
"__scrape_timeout__": "15s",
},
},
},
},
},
// With a matching metric.
{
endpoint: api.targetMetadata,
query: url.Values{
"metric": []string{"go_threads"},
},
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created.",
Unit: "",
},
},
},
},
response: []metricMetadata{
{
Target: labels.FromMap(map[string]string{
"job": "test",
}),
Help: "Number of OS threads created.",
Type: textparse.MetricTypeGauge,
Unit: "",
},
},
},
// With a matching target.
{
endpoint: api.targetMetadata,
query: url.Values{
"match_target": []string{"{job=\"blackbox\"}"},
},
metadata: []targetMetadata{
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "prometheus_tsdb_storage_blocks_bytes",
Type: textparse.MetricTypeGauge,
Help: "The number of bytes that are currently used for local storage by all blocks.",
Unit: "",
},
},
},
},
response: []metricMetadata{
{
Target: labels.FromMap(map[string]string{
"job": "blackbox",
}),
Metric: "prometheus_tsdb_storage_blocks_bytes",
Help: "The number of bytes that are currently used for local storage by all blocks.",
Type: textparse.MetricTypeGauge,
Unit: "",
},
},
},
// Without a target or metric.
{
endpoint: api.targetMetadata,
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created.",
Unit: "",
},
},
},
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "prometheus_tsdb_storage_blocks_bytes",
Type: textparse.MetricTypeGauge,
Help: "The number of bytes that are currently used for local storage by all blocks.",
Unit: "",
},
},
},
},
response: []metricMetadata{
{
Target: labels.FromMap(map[string]string{
"job": "test",
}),
Metric: "go_threads",
Help: "Number of OS threads created.",
Type: textparse.MetricTypeGauge,
Unit: "",
},
{
Target: labels.FromMap(map[string]string{
"job": "blackbox",
}),
Metric: "prometheus_tsdb_storage_blocks_bytes",
Help: "The number of bytes that are currently used for local storage by all blocks.",
Type: textparse.MetricTypeGauge,
Unit: "",
},
},
sorter: func(m interface{}) {
sort.Slice(m.([]metricMetadata), func(i, j int) bool {
s := m.([]metricMetadata)
return s[i].Metric < s[j].Metric
})
},
},
// Without a matching target.
{
endpoint: api.targetMetadata,
query: url.Values{
"match_target": []string{"{job=\"non-existentblackbox\"}"},
},
response: []metricMetadata{},
},
{
endpoint: api.alertmanagers,
response: &AlertmanagerDiscovery{
ActiveAlertmanagers: []*AlertmanagerTarget{
{
URL: "http://alertmanager.example.com:8080/api/v1/alerts",
},
},
DroppedAlertmanagers: []*AlertmanagerTarget{
{
URL: "http://dropped.alertmanager.example.com:8080/api/v1/alerts",
},
},
},
},
// With metadata available.
{
endpoint: api.metricMetadata,
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "prometheus_engine_query_duration_seconds",
Type: textparse.MetricTypeSummary,
Help: "Query timings",
Unit: "",
},
{
Metric: "go_info",
Type: textparse.MetricTypeGauge,
Help: "Information about the Go environment.",
Unit: "",
},
},
},
},
response: map[string][]metadata{
"prometheus_engine_query_duration_seconds": {{textparse.MetricTypeSummary, "Query timings", ""}},
"go_info": {{textparse.MetricTypeGauge, "Information about the Go environment.", ""}},
},
},
// With duplicate metadata for a metric that comes from different targets.
{
endpoint: api.metricMetadata,
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
},
},
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
},
},
},
response: map[string][]metadata{
"go_threads": {{textparse.MetricTypeGauge, "Number of OS threads created", ""}},
},
},
// With non-duplicate metadata for the same metric from different targets.
{
endpoint: api.metricMetadata,
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
},
},
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads that were created.",
Unit: "",
},
},
},
},
response: map[string][]metadata{
"go_threads": {
{textparse.MetricTypeGauge, "Number of OS threads created", ""},
{textparse.MetricTypeGauge, "Number of OS threads that were created.", ""},
},
},
sorter: func(m interface{}) {
v := m.(map[string][]metadata)["go_threads"]
sort.Slice(v, func(i, j int) bool {
return v[i].Help < v[j].Help
})
},
},
// With a limit for the number of metrics returned.
{
endpoint: api.metricMetadata,
query: url.Values{
"limit": []string{"2"},
},
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
{
Metric: "prometheus_engine_query_duration_seconds",
Type: textparse.MetricTypeSummary,
Help: "Query Timmings.",
Unit: "",
},
},
},
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "go_gc_duration_seconds",
Type: textparse.MetricTypeSummary,
Help: "A summary of the GC invocation durations.",
Unit: "",
},
},
},
},
responseLen: 2,
},
// When requesting a specific metric that is present.
{
endpoint: api.metricMetadata,
query: url.Values{"metric": []string{"go_threads"}},
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
},
},
{
identifier: "blackbox",
metadata: []scrape.MetricMetadata{
{
Metric: "go_gc_duration_seconds",
Type: textparse.MetricTypeSummary,
Help: "A summary of the GC invocation durations.",
Unit: "",
},
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads that were created.",
Unit: "",
},
},
},
},
response: map[string][]metadata{
"go_threads": {
{textparse.MetricTypeGauge, "Number of OS threads created", ""},
{textparse.MetricTypeGauge, "Number of OS threads that were created.", ""},
},
},
sorter: func(m interface{}) {
v := m.(map[string][]metadata)["go_threads"]
sort.Slice(v, func(i, j int) bool {
return v[i].Help < v[j].Help
})
},
},
// With a specific metric that is not present.
{
endpoint: api.metricMetadata,
query: url.Values{"metric": []string{"go_gc_duration_seconds"}},
metadata: []targetMetadata{
{
identifier: "test",
metadata: []scrape.MetricMetadata{
{
Metric: "go_threads",
Type: textparse.MetricTypeGauge,
Help: "Number of OS threads created",
Unit: "",
},
},
},
},
response: map[string][]metadata{},
},
// With no available metadata.
{
endpoint: api.metricMetadata,
response: map[string][]metadata{},
},
{
endpoint: api.serveConfig,
response: &prometheusConfig{
YAML: samplePrometheusCfg.String(),
},
},
{
endpoint: api.serveFlags,
response: sampleFlagMap,
},
{
endpoint: api.alerts,
response: &AlertDiscovery{
Alerts: []*Alert{},
},
},
{
endpoint: api.rules,
response: &RuleDiscovery{
RuleGroups: []*RuleGroup{
{
Name: "grp",
File: "/path/to/file",
Interval: 1,
Rules: []rule{
alertingRule{
State: "inactive",
Name: "test_metric3",
Query: "absent(test_metric3) != 1",
Duration: 1,
Labels: labels.Labels{},
Annotations: labels.Labels{},
Alerts: []*Alert{},
Health: "unknown",
Type: "alerting",
},
alertingRule{
State: "inactive",
Name: "test_metric4",
Query: "up == 1",
Duration: 1,
Labels: labels.Labels{},
Annotations: labels.Labels{},
Alerts: []*Alert{},
Health: "unknown",
Type: "alerting",
},
recordingRule{
Name: "recording-rule-1",
Query: "vector(1)",
Labels: labels.Labels{},
Health: "unknown",
Type: "recording",
},
},
},
},
},
},
{
endpoint: api.rules,
query: url.Values{
"type": []string{"alert"},
},
response: &RuleDiscovery{
RuleGroups: []*RuleGroup{
{
Name: "grp",
File: "/path/to/file",
Interval: 1,
Rules: []rule{
alertingRule{
State: "inactive",
Name: "test_metric3",
Query: "absent(test_metric3) != 1",
Duration: 1,
Labels: labels.Labels{},
Annotations: labels.Labels{},
Alerts: []*Alert{},
Health: "unknown",
Type: "alerting",
},
alertingRule{
State: "inactive",
Name: "test_metric4",
Query: "up == 1",
Duration: 1,
Labels: labels.Labels{},
Annotations: labels.Labels{},
Alerts: []*Alert{},
Health: "unknown",
Type: "alerting",
},
},
},
},
},
},
{
endpoint: api.rules,
query: url.Values{
"type": []string{"record"},
},
response: &RuleDiscovery{
RuleGroups: []*RuleGroup{
{
Name: "grp",
File: "/path/to/file",
Interval: 1,
Rules: []rule{
recordingRule{
Name: "recording-rule-1",
Query: "vector(1)",
Labels: labels.Labels{},
Health: "unknown",
Type: "recording",
},
},
},
},
},
},
{
endpoint: api.queryExemplars,
query: url.Values{
"query": []string{`test_metric3{foo="boo"} - test_metric4{foo="bar"}`},
"start": []string{"0"},
"end": []string{"4"},
},
// Note the extra digits in the exemplar timestamps: Prometheus preserves
// millisecond precision, so timestamps are in milliseconds (see the timestamp package).
response: []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("__name__", "test_metric3", "foo", "boo", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "abc"),
Value: 10,
Ts: timestamp.FromTime(start.Add(2 * time.Second)),
},
},
},
{
SeriesLabels: labels.FromStrings("__name__", "test_metric4", "foo", "bar", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "lul"),
Value: 10,
Ts: timestamp.FromTime(start.Add(4 * time.Second)),
},
},
},
},
},
{
endpoint: api.queryExemplars,
query: url.Values{
"query": []string{`{foo="boo"}`},
"start": []string{"4"},
"end": []string{"4.1"},
},
response: []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("__name__", "test_metric3", "foo", "boo", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "abc2"),
Value: 10,
Ts: 4053,
},
},
},
},
},
{
endpoint: api.queryExemplars,
query: url.Values{
"query": []string{`{foo="boo"}`},
},
response: []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("__name__", "test_metric3", "foo", "boo", "dup", "1"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("id", "abc"),
Value: 10,
Ts: 2000,
},
{
Labels: labels.FromStrings("id", "abc2"),
Value: 10,
Ts: 4053,
},
},
},
},
},
{
endpoint: api.queryExemplars,
query: url.Values{
"query": []string{`{__name__="test_metric5"}`},
},
response: []exemplar.QueryResult{},
},
}
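// Label name/value cases are appended only when testLabelAPI is set by the caller.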
if testLabelAPI {
tests = append(tests, []test{
{
endpoint: api.labelValues,
params: map[string]string{
"name": "__name__",
},
response: []string{
"test_metric1",
"test_metric2",
"test_metric3",
"test_metric4",
},
},
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
response: []string{
"bar",
"boo",
},
},
// Bad name parameter.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "not!!!allowed",
},
errType: errorBadData,
},
// Start and end before LabelValues starts.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"-2"},
"end": []string{"-1"},
},
response: []string{},
},
// Start and end within LabelValues.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"1"},
"end": []string{"100"},
},
response: []string{
"bar",
"boo",
},
},
// Start before LabelValues, end within LabelValues.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"-1"},
"end": []string{"3"},
},
response: []string{
"bar",
"boo",
},
},
// Start before LabelValues starts, end after LabelValues ends.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"1969-12-31T00:00:00Z"},
"end": []string{"1970-02-01T00:02:03Z"},
},
response: []string{
"bar",
"boo",
},
},
// Start with bad data, end within LabelValues.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"boop"},
"end": []string{"1"},
},
errType: errorBadData,
},
// Start within LabelValues, end after.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"1"},
"end": []string{"100000000"},
},
response: []string{
"bar",
"boo",
},
},
// Start and end after LabelValues ends.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"148966367200.372"},
"end": []string{"148966367200.972"},
},
response: []string{},
},
// Only provide Start within LabelValues, don't provide an end time.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"start": []string{"2"},
},
response: []string{
"bar",
"boo",
},
},
// Only provide end within LabelValues, don't provide a start time.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"end": []string{"100"},
},
response: []string{
"bar",
"boo",
},
},
// Label values with bad matchers.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`{foo=""`, `test_metric2`},
},
errType: errorBadData,
},
// Label values with empty matchers.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`{foo=""}`},
},
errType: errorBadData,
},
// Label values with matcher.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`test_metric2`},
},
response: []string{
"boo",
},
},
// Label values with matcher.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`test_metric1`},
},
response: []string{
"bar",
"boo",
},
},
// Label values with matcher using label filter.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`test_metric1{foo="bar"}`},
},
response: []string{
"bar",
},
},
// Label values with matcher and time range.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`test_metric1`},
"start": []string{"1"},
"end": []string{"100000000"},
},
response: []string{
"bar",
"boo",
},
},
// Try to overlap the selected series set as much as possible to test that the value de-duplication works.
{
endpoint: api.labelValues,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`test_metric4{dup=~"^1"}`, `test_metric4{foo=~".+o$"}`},
},
response: []string{
"bar",
"boo",
},
},
// Label names.
{
endpoint: api.labelNames,
response: []string{"__name__", "dup", "foo"},
},
// Start and end before Label names starts.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"-2"},
"end": []string{"-1"},
},
response: []string{},
},
// Start and end within Label names.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"1"},
"end": []string{"100"},
},
response: []string{"__name__", "dup", "foo"},
},
// Start before Label names, end within Label names.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"-1"},
"end": []string{"10"},
},
response: []string{"__name__", "dup", "foo"},
},
// Start before Label names starts, end after Label names ends.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"-1"},
"end": []string{"100000"},
},
response: []string{"__name__", "dup", "foo"},
},
// Start with bad data for Label names, end within Label names.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"boop"},
"end": []string{"1"},
},
errType: errorBadData,
},
// Start within Label names, end after.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"1"},
"end": []string{"1000000006"},
},
response: []string{"__name__", "dup", "foo"},
},
// Start and end after Label names ends.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"148966367200.372"},
"end": []string{"148966367200.972"},
},
response: []string{},
},
// Only provide Start within Label names, don't provide an end time.
{
endpoint: api.labelNames,
query: url.Values{
"start": []string{"4"},
},
response: []string{"__name__", "dup", "foo"},
},
// Only provide End within Label names, don't provide a start time.
{
endpoint: api.labelNames,
query: url.Values{
"end": []string{"20"},
},
response: []string{"__name__", "dup", "foo"},
},
// Label names with bad matchers.
{
endpoint: api.labelNames,
query: url.Values{
"match[]": []string{`{foo=""`, `test_metric2`},
},
errType: errorBadData,
},
// Label names with empty matchers.
{
endpoint: api.labelNames,
params: map[string]string{
"name": "foo",
},
query: url.Values{
"match[]": []string{`{foo=""}`},
},
errType: errorBadData,
},
// Label names with matcher.
{
endpoint: api.labelNames,
query: url.Values{
"match[]": []string{`test_metric2`},
},
response: []string{"__name__", "foo"},
},
// Label names with matcher.
{
endpoint: api.labelNames,
query: url.Values{
"match[]": []string{`test_metric3`},
},
response: []string{"__name__", "dup", "foo"},
},
// Label names with matcher using label filter.
// There is no matching series.
{
endpoint: api.labelNames,
query: url.Values{
"match[]": []string{`test_metric1{foo="test"}`},
},
response: []string{},
},
// Label names with matcher and time range.
{
endpoint: api.labelNames,
query: url.Values{
"match[]": []string{`test_metric2`},
"start": []string{"1"},
"end": []string{"100000000"},
},
response: []string{"__name__", "foo"},
},
}...)
}
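// methods returns the HTTP methods to exercise for an endpoint: query,
// query_range, and series accept both GET and POST; everything else is GET-only.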
methods := func(f apiFunc) []string {
fp := reflect.ValueOf(f).Pointer()
if fp == reflect.ValueOf(api.query).Pointer() || fp == reflect.ValueOf(api.queryRange).Pointer() || fp == reflect.ValueOf(api.series).Pointer() {
return []string{http.MethodGet, http.MethodPost}
}
return []string{http.MethodGet}
}
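// request builds a test request for the given method and query values:
// POST sends them form-encoded in the body, GET encodes them in the URL,
// e.g. http://example.com?query=up&time=123 (illustrative values).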
request := func(m string, q url.Values) (*http.Request, error) {
if m == http.MethodPost {
r, err := http.NewRequest(m, "http://example.com", strings.NewReader(q.Encode()))
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
r.RemoteAddr = "127.0.0.1:20201"
return r, err
}
r, err := http.NewRequest(m, fmt.Sprintf("http://example.com?%s", q.Encode()), nil)
r.RemoteAddr = "127.0.0.1:20201"
return r, err
}
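// Run every test case against every method its endpoint supports.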
for i, test := range tests {
t.Run(fmt.Sprintf("run %d %s %q", i, describeAPIFunc(test.endpoint), test.query.Encode()), func(t *testing.T) {
for _, method := range methods(test.endpoint) {
t.Run(method, func(t *testing.T) {
// Build a context with the correct request params.
ctx := context.Background()
for p, v := range test.params {
ctx = route.WithParam(ctx, p, v)
}
req, err := request(method, test.query)
if err != nil {
t.Fatal(err)
}
tr.ResetMetadataStore()
for _, tm := range test.metadata {
tr.SetMetadataStoreForTargets(tm.identifier, &testMetaStore{Metadata: tm.metadata})
}
for _, te := range test.exemplars {
for _, e := range te.Exemplars {
_, err := es.AppendExemplar(0, te.SeriesLabels, e)
if err != nil {
t.Fatal(err)
}
}
}
res := test.endpoint(req.WithContext(ctx))
assertAPIError(t, res.err, test.errType)
if test.sorter != nil {
test.sorter(res.data)
}
if test.responseLen != 0 {
assertAPIResponseLength(t, res.data, test.responseLen)
} else {
assertAPIResponse(t, res.data, test.response)
}
})
}
})
}
}
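// describeAPIFunc derives a readable endpoint name (e.g. "query") from the
// function pointer, for use in subtest names.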
func describeAPIFunc(f apiFunc) string {
name := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
return strings.Split(name[strings.LastIndex(name, ".")+1:], "-")[0]
}
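// assertAPIError fails the test unless the returned error matches the
// expected error type (errorNone meaning no error at all).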
func assertAPIError(t *testing.T, got *apiError, exp errorType) {
t.Helper()
if got != nil {
if exp == errorNone {
t.Fatalf("Unexpected error: %s", got)
}
if exp != got.typ {
t.Fatalf("Expected error of type %q but got type %q (%q)", exp, got.typ, got)
}
return
}
if exp != errorNone {
t.Fatalf("Expected error of type %q but got none", exp)
}
}
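// assertAPIResponse requires a deep equality match between the actual and
// expected response payloads.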
func assertAPIResponse(t *testing.T, got interface{}, exp interface{}) {
t.Helper()
require.Equal(t, exp, got)
}
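// assertAPIResponseLength only checks the number of returned elements, for
// cases where the exact payload is not asserted.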
func assertAPIResponseLength(t *testing.T, got interface{}, expLen int) {
t.Helper()
gotLen := reflect.ValueOf(got).Len()
if gotLen != expLen {
t.Fatalf(
"Response length does not match, expected:\n%d\ngot:\n%d",
expLen,
gotLen,
)
}
}
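// fakeDB is a minimal TSDB admin stub: every admin operation returns the
// configured error, which lets the tests drive each error path.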
type fakeDB struct {
err error
}
func (f *fakeDB) CleanTombstones() error { return f.err }
func (f *fakeDB) Delete(mint, maxt int64, ms ...*labels.Matcher) error { return f.err }
func (f *fakeDB) Snapshot(dir string, withHead bool) error { return f.err }
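// Stats satisfies the admin interface by computing stats from a fresh, empty
// Head backed by a temporary directory that is removed again on return.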
func (f *fakeDB) Stats(statsByLabelName string) (_ *tsdb.Stats, retErr error) {
dbDir, err := ioutil.TempDir("", "tsdb-api-ready")
if err != nil {
return nil, err
}
defer func() {
err := os.RemoveAll(dbDir)
// Propagate the cleanup error only if no other error is being returned.
if retErr == nil {
retErr = err
}
}()
opts := tsdb.DefaultHeadOptions()
opts.ChunkRange = 1000
h, _ := tsdb.NewHead(nil, nil, nil, opts, nil)
return h.Stats(statsByLabelName), nil
}
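// WALReplayStatus returns a zero-valued status, which is sufficient for these tests.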
func (f *fakeDB) WALReplayStatus() (tsdb.WALReplayStatus, error) {
return tsdb.WALReplayStatus{}, nil
}
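// TestAdminEndpoints checks that snapshot, clean_tombstones, and delete_series
// honor the admin-API flag and translate storage errors into API error types.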
func TestAdminEndpoints(t *testing.T) {
tsdb, tsdbWithError, tsdbNotReady := &fakeDB{}, &fakeDB{err: errors.New("some error")}, &fakeDB{err: errors.Wrap(tsdb.ErrNotReady, "wrap")}
snapshotAPI := func(api *API) apiFunc { return api.snapshot }
cleanAPI := func(api *API) apiFunc { return api.cleanTombstones }
deleteAPI := func(api *API) apiFunc { return api.deleteSeries }
for _, tc := range []struct {
db *fakeDB
enableAdmin bool
endpoint func(api *API) apiFunc
method string
values url.Values
errType errorType
}{
// Tests for the snapshot endpoint.
{
db: tsdb,
enableAdmin: false,
endpoint: snapshotAPI,
errType: errorUnavailable,
},
{
db: tsdb,
enableAdmin: true,
endpoint: snapshotAPI,
errType: errorNone,
},
{
db: tsdb,
enableAdmin: true,
endpoint: snapshotAPI,
values: map[string][]string{"skip_head": {"true"}},
errType: errorNone,
},
{
db: tsdb,
enableAdmin: true,
endpoint: snapshotAPI,
values: map[string][]string{"skip_head": {"xxx"}},
errType: errorBadData,
},
{
db: tsdbWithError,
enableAdmin: true,
endpoint: snapshotAPI,
errType: errorInternal,
},
{
db: tsdbNotReady,
enableAdmin: true,
endpoint: snapshotAPI,
errType: errorUnavailable,
},
// Tests for the cleanTombstones endpoint.
{
db: tsdb,
enableAdmin: false,
endpoint: cleanAPI,
errType: errorUnavailable,
},
{
db: tsdb,
enableAdmin: true,
endpoint: cleanAPI,
errType: errorNone,
},
{
db: tsdbWithError,
enableAdmin: true,
endpoint: cleanAPI,
errType: errorInternal,
},
{
db: tsdbNotReady,
enableAdmin: true,
endpoint: cleanAPI,
errType: errorUnavailable,
},
// Tests for the deleteSeries endpoint.
{
db: tsdb,
enableAdmin: false,
endpoint: deleteAPI,
errType: errorUnavailable,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
errType: errorBadData,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"123"}},
errType: errorBadData,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up"}, "start": {"xxx"}},
errType: errorBadData,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up"}, "end": {"xxx"}},
errType: errorBadData,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up"}},
errType: errorNone,
},
{
db: tsdb,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up{job!=\"foo\"}", "{job=~\"bar.+\"}", "up{instance!~\"fred.+\"}"}},
errType: errorNone,
},
{
db: tsdbWithError,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up"}},
errType: errorInternal,
},
{
db: tsdbNotReady,
enableAdmin: true,
endpoint: deleteAPI,
values: map[string][]string{"match[]": {"up"}},
errType: errorUnavailable,
},
} {
tc := tc
t.Run("", func(t *testing.T) {
dir, _ := ioutil.TempDir("", "fakeDB")
defer func() { require.NoError(t, os.RemoveAll(dir)) }()
api := &API{
db: tc.db,
dbDir: dir,
ready: func(f http.HandlerFunc) http.HandlerFunc { return f },
enableAdmin: tc.enableAdmin,
}
endpoint := tc.endpoint(api)
req, err := http.NewRequest(tc.method, fmt.Sprintf("?%s", tc.values.Encode()), nil)
require.NoError(t, err)
res := setUnavailStatusOnTSDBNotReady(endpoint(req))
assertAPIError(t, res.err, tc.errType)
})
}
}
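// TestRespondSuccess verifies the success envelope: HTTP 200, a JSON content
// type, and a body of the form {"status":"success","data":...}.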
func TestRespondSuccess(t *testing.T) {
s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
api := API{}
api.respond(w, "test", nil)
}))
defer s.Close()
resp, err := http.Get(s.URL)
if err != nil {
t.Fatalf("Error on test request: %s", err)
}
body, err := ioutil.ReadAll(resp.Body)
defer resp.Body.Close()
if err != nil {
t.Fatalf("Error reading response body: %s", err)
}
if resp.StatusCode != 200 {
t.Fatalf("Return code %d expected in success response but got %d", 200, resp.StatusCode)
}
if h := resp.Header.Get("Content-Type"); h != "application/json" {
t.Fatalf("Expected Content-Type %q but got %q", "application/json", h)
}
var res response
if err = json.Unmarshal(body, &res); err != nil {
t.Fatalf("Error unmarshaling JSON body: %s", err)
}
exp := &response{
Status: statusSuccess,
Data: "test",
}
require.Equal(t, exp, &res)
}
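// TestRespondError verifies that an apiError is rendered with the matching
// HTTP status (503 for errorTimeout here) and error fields in the JSON body.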
func TestRespondError(t *testing.T) {
s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
api := API{}
api.respondError(w, &apiError{errorTimeout, errors.New("message")}, "test")
}))
defer s.Close()
resp, err := http.Get(s.URL)
if err != nil {
t.Fatalf("Error on test request: %s", err)
}
body, err := ioutil.ReadAll(resp.Body)
defer resp.Body.Close()
if err != nil {
t.Fatalf("Error reading response body: %s", err)
}
if want, have := http.StatusServiceUnavailable, resp.StatusCode; want != have {
t.Fatalf("Return code %d expected in error response but got %d", want, have)
}
if h := resp.Header.Get("Content-Type"); h != "application/json" {
t.Fatalf("Expected Content-Type %q but got %q", "application/json", h)
}
var res response
if err = json.Unmarshal(body, &res); err != nil {
t.Fatalf("Error unmarshaling JSON body: %s", err)
}
exp := &response{
Status: statusError,
Data: "test",
ErrorType: errorTimeout,
Error: "message",
}
require.Equal(t, exp, &res)
}
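
// TestParseTimeParam verifies that parseTimeParam returns the parsed
// query parameter when present, the given default when the parameter is
// empty, and a wrapped parse error when the value is malformed.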
func TestParseTimeParam(t *testing.T) {
type resultType struct {
asTime time.Time
asError func() error
}
ts, err := parseTime("1582468023986")
require.NoError(t, err)
var tests = []struct {
paramName string
paramValue string
defaultValue time.Time
result resultType
}{
{ // When data is valid.
paramName: "start",
paramValue: "1582468023986",
defaultValue: minTime,
result: resultType{
asTime: ts,
asError: nil,
},
},
{ // When data is empty string.
paramName: "end",
paramValue: "",
defaultValue: maxTime,
result: resultType{
asTime: maxTime,
asError: nil,
},
},
{ // When data is not valid.
paramName: "foo",
paramValue: "baz",
defaultValue: maxTime,
result: resultType{
asTime: time.Time{},
asError: func() error {
_, err := parseTime("baz")
return errors.Wrapf(err, "Invalid time value for '%s'", "foo")
},
},
},
}
for _, test := range tests {
req, err := http.NewRequest("GET", "localhost:42/foo?"+test.paramName+"="+test.paramValue, nil)
require.NoError(t, err)
result := test.result
asTime, err := parseTimeParam(req, test.paramName, test.defaultValue)
if err != nil {
require.EqualError(t, err, result.asError().Error())
} else {
require.True(t, asTime.Equal(result.asTime), "time as return value: %s not parsed correctly. Expected %s. Actual %s", test.paramValue, result.asTime, asTime)
}
}
}
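
// TestParseTime covers the two accepted timestamp formats: Unix seconds
// with an optional fractional part (e.g. "123.123") and RFC 3339
// (e.g. "2015-06-03T13:21:58.555Z"), including the minTime/maxTime bounds.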
func TestParseTime(t *testing.T) {
ts, err := time.Parse(time.RFC3339Nano, "2015-06-03T13:21:58.555Z")
if err != nil {
panic(err)
}
var tests = []struct {
input string
fail bool
result time.Time
}{
{
input: "",
fail: true,
}, {
input: "abc",
fail: true,
}, {
input: "30s",
fail: true,
}, {
input: "123",
result: time.Unix(123, 0),
}, {
input: "123.123",
result: time.Unix(123, 123000000),
}, {
input: "2015-06-03T13:21:58.555Z",
result: ts,
}, {
input: "2015-06-03T14:21:58.555+01:00",
result: ts,
}, {
// Test float rounding.
input: "1543578564.705",
result: time.Unix(1543578564, 705*1e6),
},
{
input: minTime.Format(time.RFC3339Nano),
result: minTime,
},
{
input: maxTime.Format(time.RFC3339Nano),
result: maxTime,
},
}
for _, test := range tests {
ts, err := parseTime(test.input)
if err != nil && !test.fail {
t.Errorf("Unexpected error for %q: %s", test.input, err)
continue
}
if err == nil && test.fail {
t.Errorf("Expected error for %q but got none", test.input)
continue
}
if !test.fail && !ts.Equal(test.result) {
t.Errorf("Expected time %v for input %q but got %v", test.result, test.input, ts)
}
}
}
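
// TestParseDuration covers the two accepted duration formats: float
// seconds (e.g. "123.333") and Prometheus duration strings (e.g. "5m");
// values whose nanosecond representation overflows int64 are rejected.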
func TestParseDuration(t *testing.T) {
var tests = []struct {
input string
fail bool
result time.Duration
}{
{
input: "",
fail: true,
}, {
input: "abc",
fail: true,
}, {
input: "2015-06-03T13:21:58.555Z",
fail: true,
}, {
// Internal int64 overflow.
input: "-148966367200.372",
fail: true,
}, {
// Internal int64 overflow.
input: "148966367200.372",
fail: true,
}, {
input: "123",
result: 123 * time.Second,
}, {
input: "123.333",
result: 123*time.Second + 333*time.Millisecond,
}, {
input: "15s",
result: 15 * time.Second,
}, {
input: "5m",
result: 5 * time.Minute,
},
}
for _, test := range tests {
d, err := parseDuration(test.input)
if err != nil && !test.fail {
t.Errorf("Unexpected error for %q: %s", test.input, err)
continue
}
if err == nil && test.fail {
t.Errorf("Expected error for %q but got none", test.input)
continue
}
if !test.fail && d != test.result {
t.Errorf("Expected duration %v for input %q but got %v", test.result, test.input, d)
}
}
}
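
// TestOptionsMethod verifies that OPTIONS requests (e.g. CORS preflight)
// are answered with 204 No Content on any registered API path.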
func TestOptionsMethod(t *testing.T) {
r := route.New()
api := &API{ready: func(f http.HandlerFunc) http.HandlerFunc { return f }}
api.Register(r)
s := httptest.NewServer(r)
defer s.Close()
req, err := http.NewRequest("OPTIONS", s.URL+"/any_path", nil)
if err != nil {
t.Fatalf("Error creating OPTIONS request: %s", err)
}
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
t.Fatalf("Error executing OPTIONS request: %s", err)
}
if resp.StatusCode != http.StatusNoContent {
t.Fatalf("Expected status %d, got %d", http.StatusNoContent, resp.StatusCode)
}
}
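
// TestRespond checks the JSON encoding of query results: millisecond
// timestamps are rendered as float seconds (T=1001 encodes as 1.001) and
// sample values as strings, so that NaN, +/-Inf, and full float
// precision survive JSON encoding.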
func TestRespond(t *testing.T) {
cases := []struct {
response interface{}
expected string
}{
{
response: &queryData{
ResultType: parser.ValueTypeMatrix,
Result: promql.Matrix{
promql.Series{
Points: []promql.Point{{V: 1, T: 1000}},
Metric: labels.FromStrings("__name__", "foo"),
},
},
},
expected: `{"status":"success","data":{"resultType":"matrix","result":[{"metric":{"__name__":"foo"},"values":[[1,"1"]]}]}}`,
},
{
response: promql.Point{V: 0, T: 0},
expected: `{"status":"success","data":[0,"0"]}`,
},
{
response: promql.Point{V: 20, T: 1},
expected: `{"status":"success","data":[0.001,"20"]}`,
},
{
response: promql.Point{V: 20, T: 10},
expected: `{"status":"success","data":[0.010,"20"]}`,
},
{
response: promql.Point{V: 20, T: 100},
expected: `{"status":"success","data":[0.100,"20"]}`,
},
{
response: promql.Point{V: 20, T: 1001},
expected: `{"status":"success","data":[1.001,"20"]}`,
},
{
response: promql.Point{V: 20, T: 1010},
expected: `{"status":"success","data":[1.010,"20"]}`,
},
{
response: promql.Point{V: 20, T: 1100},
expected: `{"status":"success","data":[1.100,"20"]}`,
},
{
response: promql.Point{V: 20, T: 12345678123456555},
expected: `{"status":"success","data":[12345678123456.555,"20"]}`,
},
{
response: promql.Point{V: 20, T: -1},
expected: `{"status":"success","data":[-0.001,"20"]}`,
},
{
response: promql.Point{V: math.NaN(), T: 0},
expected: `{"status":"success","data":[0,"NaN"]}`,
},
{
response: promql.Point{V: math.Inf(1), T: 0},
expected: `{"status":"success","data":[0,"+Inf"]}`,
},
{
response: promql.Point{V: math.Inf(-1), T: 0},
expected: `{"status":"success","data":[0,"-Inf"]}`,
},
{
response: promql.Point{V: 1.2345678e6, T: 0},
expected: `{"status":"success","data":[0,"1234567.8"]}`,
},
{
response: promql.Point{V: 1.2345678e-6, T: 0},
expected: `{"status":"success","data":[0,"0.0000012345678"]}`,
},
{
response: promql.Point{V: 1.2345678e-67, T: 0},
expected: `{"status":"success","data":[0,"1.2345678e-67"]}`,
},
{
response: []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("foo", "bar"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("traceID", "abc"),
Value: 100.123,
Ts: 1234,
},
},
},
},
expected: `{"status":"success","data":[{"seriesLabels":{"foo":"bar"},"exemplars":[{"labels":{"traceID":"abc"},"value":"100.123","timestamp":1.234}]}]}`,
},
{
response: []exemplar.QueryResult{
{
SeriesLabels: labels.FromStrings("foo", "bar"),
Exemplars: []exemplar.Exemplar{
{
Labels: labels.FromStrings("traceID", "abc"),
Value: math.Inf(1),
Ts: 1234,
},
},
},
},
expected: `{"status":"success","data":[{"seriesLabels":{"foo":"bar"},"exemplars":[{"labels":{"traceID":"abc"},"value":"+Inf","timestamp":1.234}]}]}`,
},
}
for _, c := range cases {
s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
api := API{}
api.respond(w, c.response, nil)
}))
defer s.Close()
resp, err := http.Get(s.URL)
if err != nil {
t.Fatalf("Error on test request: %s", err)
}
body, err := ioutil.ReadAll(resp.Body)
defer resp.Body.Close()
if err != nil {
t.Fatalf("Error reading response body: %s", err)
}
if string(body) != c.expected {
t.Fatalf("Expected response \n%v\n but got \n%v\n", c.expected, string(body))
}
}
}
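
// TestTSDBStatus checks that the TSDB status endpoint responds without
// error when backed by a ready fake DB.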
func TestTSDBStatus(t *testing.T) {
tsdb := &fakeDB{}
tsdbStatusAPI := func(api *API) apiFunc { return api.serveTSDBStatus }
for i, tc := range []struct {
db *fakeDB
endpoint func(api *API) apiFunc
method string
values url.Values
errType errorType
}{
// Tests for the TSDB Status endpoint.
{
db: tsdb,
endpoint: tsdbStatusAPI,
errType: errorNone,
},
} {
tc := tc
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
api := &API{db: tc.db, gatherer: prometheus.DefaultGatherer}
endpoint := tc.endpoint(api)
req, err := http.NewRequest(tc.method, fmt.Sprintf("?%s", tc.values.Encode()), nil)
if err != nil {
t.Fatalf("Error when creating test request: %s", err)
}
res := endpoint(req)
assertAPIError(t, res.err, tc.errType)
})
}
}
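
// TestReturnAPIError checks that returnAPIError classifies promql errors
// by their concrete type even when wrapped, and falls back to errorExec
// for unrecognized errors.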
func TestReturnAPIError(t *testing.T) {
cases := []struct {
err error
expected errorType
}{
{
err: promql.ErrStorage{Err: errors.New("storage error")},
expected: errorInternal,
}, {
err: errors.Wrap(promql.ErrStorage{Err: errors.New("storage error")}, "wrapped"),
expected: errorInternal,
}, {
err: promql.ErrQueryTimeout("timeout error"),
expected: errorTimeout,
}, {
err: errors.Wrap(promql.ErrQueryTimeout("timeout error"), "wrapped"),
expected: errorTimeout,
}, {
err: promql.ErrQueryCanceled("canceled error"),
expected: errorCanceled,
}, {
err: errors.Wrap(promql.ErrQueryCanceled("canceled error"), "wrapped"),
expected: errorCanceled,
}, {
err: errors.New("exec error"),
expected: errorExec,
},
}
for _, c := range cases {
actual := returnAPIError(c.err)
require.Error(t, actual)
require.Equal(t, c.expected, actual.typ)
}
}
// This is a global to avoid the benchmark being optimized away.
var testResponseWriter = httptest.ResponseRecorder{}
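// BenchmarkRespond measures JSON serialization of a 10,000-point matrix
// result. It can be run in isolation with, for example:
//
//	go test -run '^$' -bench BenchmarkRespond -benchmem ./web/api/v1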
func BenchmarkRespond(b *testing.B) {
b.ReportAllocs()
points := []promql.Point{}
for i := 0; i < 10000; i++ {
points = append(points, promql.Point{V: float64(i * 1000000), T: int64(i)})
}
response := &queryData{
ResultType: parser.ValueTypeMatrix,
Result: promql.Matrix{
promql.Series{
Points: points,
Metric: nil,
},
},
}
b.ResetTimer()
api := API{}
for n := 0; n < b.N; n++ {
api.respond(&testResponseWriter, response, nil)
}
}
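
// TestGetGlobalURL checks how getGlobalURL rewrites local URLs for
// external use: URLs pointing at the listen address or at localhost are
// rewritten to the configured external host, while foreign hosts pass
// through unchanged.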
func TestGetGlobalURL(t *testing.T) {
mustParseURL := func(t *testing.T, u string) *url.URL {
parsed, err := url.Parse(u)
require.NoError(t, err)
return parsed
}
testcases := []struct {
input *url.URL
opts GlobalURLOptions
expected *url.URL
errorful bool
}{
{
mustParseURL(t, "http://127.0.0.1:9090"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "127.0.0.1:9090",
Scheme: "http",
},
mustParseURL(t, "http://127.0.0.1:9090"),
false,
},
{
mustParseURL(t, "http://127.0.0.1:9090"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "prometheus.io",
Scheme: "https",
},
mustParseURL(t, "https://prometheus.io"),
false,
},
{
mustParseURL(t, "http://exemple.com"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "prometheus.io",
Scheme: "https",
},
mustParseURL(t, "http://exemple.com"),
false,
},
{
mustParseURL(t, "http://localhost:8080"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "prometheus.io",
Scheme: "https",
},
mustParseURL(t, "http://prometheus.io:8080"),
false,
},
{
mustParseURL(t, "http://[::1]:8080"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "prometheus.io",
Scheme: "https",
},
mustParseURL(t, "http://prometheus.io:8080"),
false,
},
{
mustParseURL(t, "http://localhost"),
GlobalURLOptions{
ListenAddress: "127.0.0.1:9090",
Host: "prometheus.io",
Scheme: "https",
},
mustParseURL(t, "http://prometheus.io"),
false,
},
{
mustParseURL(t, "http://localhost:9091"),
GlobalURLOptions{
ListenAddress: "[::1]:9090",
Host: "[::1]",
Scheme: "https",
},
mustParseURL(t, "http://[::1]:9091"),
false,
},
{
mustParseURL(t, "http://localhost:9091"),
GlobalURLOptions{
ListenAddress: "[::1]:9090",
Host: "[::1]:9090",
Scheme: "https",
},
mustParseURL(t, "http://[::1]:9091"),
false,
},
}
for i, tc := range testcases {
t.Run(fmt.Sprintf("Test %d", i), func(t *testing.T) {
output, err := getGlobalURL(tc.input, tc.opts)
if tc.errorful {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.expected, output)
})
}
}