mirror of https://github.com/prometheus/prometheus.git
synced 2024-11-12 16:44:05 -08:00

Merge pull request #449 from grafana/yuri/merge-upstream
Merging remotes/prometheus/main into origin/main

Commit fa57574183

.github/workflows/buf-lint.yml — new file (vendored), 22 lines

@@ -0,0 +1,22 @@
+name: buf.build
+on:
+  pull_request:
+    paths:
+      - ".github/workflows/buf-lint.yml"
+      - "**.proto"
+jobs:
+  buf:
+    name: lint
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - uses: bufbuild/buf-setup-action@v1.13.1
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: bufbuild/buf-lint-action@v1
+        with:
+          input: 'prompb'
+      - uses: bufbuild/buf-breaking-action@v1
+        with:
+          input: 'prompb'
+          against: 'https://github.com/prometheus/prometheus.git#branch=main,ref=HEAD,subdir=prompb'

.github/workflows/buf.yml — new file (vendored), 25 lines

@@ -0,0 +1,25 @@
+name: buf.build
+on:
+  push:
+    branches:
+      - main
+jobs:
+  buf:
+    name: lint and publish
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - uses: bufbuild/buf-setup-action@v1.13.1
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: bufbuild/buf-lint-action@v1
+        with:
+          input: 'prompb'
+      - uses: bufbuild/buf-breaking-action@v1
+        with:
+          input: 'prompb'
+          against: 'https://github.com/prometheus/prometheus.git#branch=main,ref=HEAD~1,subdir=prompb'
+      - uses: bufbuild/buf-push-action@v1
+        with:
+          input: 'prompb'
+          buf_token: ${{ secrets.BUF_TOKEN }}
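
Both workflows run buf against the `prompb` directory. They are typically paired with a module config inside that directory; purely as an illustration (the real contents of `prompb/buf.yaml` are not part of this diff), such a file could look like:

```yaml
# Hypothetical prompb/buf.yaml sketch — not taken from this change.
version: v1
lint:
  use:
    - DEFAULT
breaking:
  use:
    - FILE
```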

.gitignore — 5 changes (vendored)

@@ -19,7 +19,7 @@ benchmark.txt
 !/.promu.yml
 !/.golangci.yml
 /documentation/examples/remote_storage/remote_storage_adapter/remote_storage_adapter
-/documentation/examples/remote_storage/example_write_adapter/example_writer_adapter
+/documentation/examples/remote_storage/example_write_adapter/example_write_adapter
 
 npm_licenses.tar.bz2
 /web/ui/static/react

@@ -28,3 +28,6 @@ npm_licenses.tar.bz2
 /.build
 
 /**/node_modules
+
+# Ignore parser debug
+y.output

CHANGELOG.md — 24 changes

@@ -1,5 +1,27 @@
 # Changelog
 
+## 2.42.0 / 2023-01-31
+
+This release comes with a bunch of feature coverage for native histograms and breaking changes.
+
+If you are trying native histograms already, we recommend you remove the `wal` directory when upgrading.
+Because the old WAL record for native histograms is not backward compatible in v2.42.0, this will lead to some data loss for the latest data.
+
+Additionally, if you scrape "float histograms" or use recording rules on native histograms in v2.42.0 (which writes float histograms),
+it is a one-way street since older versions do not support float histograms.
+
+* [CHANGE] **breaking** TSDB: Changed WAL record format for the experimental native histograms. #11783
+* [FEATURE] Add 'keep_firing_for' field to alerting rules. #11827
+* [FEATURE] Promtool: Add support of selecting timeseries for TSDB dump. #11872
+* [ENHANCEMENT] Agent: Native histogram support. #11842
+* [ENHANCEMENT] Rules: Support native histograms in recording rules. #11838
+* [ENHANCEMENT] SD: Add container ID as a meta label for pod targets for Kubernetes. #11844
+* [ENHANCEMENT] SD: Add VM size label to azure service discovery. #11650
+* [ENHANCEMENT] Support native histograms in federation. #11830
+* [ENHANCEMENT] TSDB: Add gauge histogram support. #11783 #11840 #11814
+* [ENHANCEMENT] TSDB/Scrape: Support FloatHistogram that represents buckets as float64 values. #11522 #11817 #11716
+* [ENHANCEMENT] UI: Show individual scrape pools on /targets page. #11142
+
 ## 2.41.0 / 2022-12-20
 
 * [FEATURE] Relabeling: Add `keepequal` and `dropequal` relabel actions. #11564

@@ -61,7 +83,7 @@ Your existing histograms won't switch to native histograms until `NativeHistogra
 * [ENHANCEMENT] Kubernetes SD: Use protobuf encoding. #11353
 * [ENHANCEMENT] TSDB: Use golang.org/x/exp/slices for improved sorting speed. #11054 #11318 #11380
 * [ENHANCEMENT] Consul SD: Add enterprise admin partitions. Adds `__meta_consul_partition` label. Adds `partition` config in `consul_sd_config`. #11482
 * [BUGFIX] API: Fix API error codes for `/api/v1/labels` and `/api/v1/series`. #11356
 * [BUGFIX] API: Fix API error codes for `/api/v1/labels` and `/api/v1/series`. #11356
 
 ## 2.39.2 / 2022-11-09

@@ -78,3 +78,20 @@ GO111MODULE=on go mod tidy
 ```
 
 You have to commit the changes to `go.mod` and `go.sum` before submitting the pull request.
+
+## Working with the PromQL parser
+
+The PromQL parser grammar is located in `promql/parser/generated_parser.y` and it can be built using `make parser`.
+The parser is built using [goyacc](https://pkg.go.dev/golang.org/x/tools/cmd/goyacc).
+
+If doing some sort of debugging, then it is possible to add some verbose output. After generating the parser, you
+can modify `./promql/parser/generated_parser.y.go` manually.
+
+```golang
+// As of writing this was somewhere around line 600.
+var (
+    yyDebug        = 0     // This can be a number 0 -> 5.
+    yyErrorVerbose = false // This can be set to true.
+)
+```
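
With `yyDebug` raised, any program that exercises the generated parser will print the state machine's transitions. A minimal sketch (using the real `promql/parser.ParseExpr` entry point; the query string is arbitrary):

```go
package main

import (
    "fmt"

    "github.com/prometheus/prometheus/promql/parser"
)

func main() {
    // With yyDebug > 0 in generated_parser.y.go, this parse prints the
    // parser's shift/reduce steps to stderr while it runs.
    expr, err := parser.ParseExpr(`rate(http_requests_total{job="api"}[5m])`)
    if err != nil {
        panic(err)
    }
    fmt.Printf("parsed %s expression: %s\n", expr.Type(), expr)
}
```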

Makefile — 11 changes

@@ -78,6 +78,17 @@ assets-tarball: assets
 	@echo '>> packaging assets'
 	scripts/package_assets.sh
 
+# We only want to generate the parser when there are changes to the grammar.
+.PHONY: parser
+parser:
+	@echo ">> running goyacc to generate the .go file."
+ifeq (, $(shell which goyacc))
+	@echo "goyacc not installed so skipping"
+	@echo "To install: go install golang.org/x/tools/cmd/goyacc@v0.6.0"
+else
+	goyacc -o promql/parser/generated_parser.y.go promql/parser/generated_parser.y
+endif
+
 .PHONY: test
 # If we only want to test go code we have to change the test target
 # which is called by all.

@@ -61,7 +61,7 @@ PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_
 SKIP_GOLANGCI_LINT :=
 GOLANGCI_LINT :=
 GOLANGCI_LINT_OPTS ?=
-GOLANGCI_LINT_VERSION ?= v1.50.1
+GOLANGCI_LINT_VERSION ?= v1.51.2
 # golangci-lint only supports linux, darwin and windows platforms on i386/amd64.
 # windows isn't included here because of the path separator being different.
 ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin))

@@ -47,7 +47,7 @@ Release cadence of first pre-releases being cut is 6 weeks.
 | v2.40 | 2022-11-02 | Ganesh Vernekar (GitHub: @codesome)    |
 | v2.41 | 2022-12-14 | Julien Pivotto (GitHub: @roidelapluie) |
 | v2.42 | 2023-01-25 | Kemal Akkoyun (GitHub: @kakkoyun)      |
-| v2.43 | 2023-03-08 | **searching for volunteer**            |
+| v2.43 | 2023-03-08 | Julien Pivotto (GitHub: @roidelapluie) |
+| v2.44 | 2023-04-19 | **searching for volunteer**            |
 
 If you are interested in volunteering please create a pull request against the [prometheus/prometheus](https://github.com/prometheus/prometheus) repository and propose yourself for the release series of your choice.

@@ -468,6 +468,14 @@ func main() {
 		level.Error(logger).Log("msg", fmt.Sprintf("Error loading config (--config.file=%s)", cfg.configFile), "file", absPath, "err", err)
 		os.Exit(2)
 	}
+	if _, err := cfgFile.GetScrapeConfigs(); err != nil {
+		absPath, pathErr := filepath.Abs(cfg.configFile)
+		if pathErr != nil {
+			absPath = cfg.configFile
+		}
+		level.Error(logger).Log("msg", fmt.Sprintf("Error loading scrape config files from config (--config.file=%q)", cfg.configFile), "file", absPath, "err", err)
+		os.Exit(2)
+	}
 	if cfg.tsdb.EnableExemplarStorage {
 		if cfgFile.StorageConfig.ExemplarsConfig == nil {
 			cfgFile.StorageConfig.ExemplarsConfig = &config.DefaultExemplarsConfig

@@ -730,7 +738,11 @@ func main() {
 			name:     "scrape_sd",
 			reloader: func(cfg *config.Config) error {
 				c := make(map[string]discovery.Configs)
-				for _, v := range cfg.ScrapeConfigs {
+				scfgs, err := cfg.GetScrapeConfigs()
+				if err != nil {
+					return err
+				}
+				for _, v := range scfgs {
 					c[v.JobName] = v.ServiceDiscoveryConfigs
 				}
 				return discoveryManagerScrape.ApplyConfig(c)

@@ -45,6 +45,7 @@ import (
 	"gopkg.in/yaml.v2"
 
 	dto "github.com/prometheus/client_model/go"
+	promconfig "github.com/prometheus/common/config"
 	"github.com/prometheus/common/expfmt"
 
 	"github.com/prometheus/prometheus/config"

@@ -74,6 +75,12 @@ const (
 var lintOptions = []string{lintOptionAll, lintOptionDuplicateRules, lintOptionNone}
 
 func main() {
+	var (
+		httpRoundTripper   = api.DefaultRoundTripper
+		serverURL          *url.URL
+		httpConfigFilePath string
+	)
+
 	app := kingpin.New(filepath.Base(os.Args[0]), "Tooling for the Prometheus monitoring system.").UsageWriter(os.Stdout)
 	app.Version(version.Print("promtool"))
 	app.HelpFlag.Short('h')

@@ -124,14 +131,15 @@ func main() {
 
 	queryCmd := app.Command("query", "Run query against a Prometheus server.")
 	queryCmdFmt := queryCmd.Flag("format", "Output format of the query.").Short('o').Default("promql").Enum("promql", "json")
+	queryCmd.Flag("http.config.file", "HTTP client configuration file for promtool to connect to Prometheus.").PlaceHolder("<filename>").ExistingFileVar(&httpConfigFilePath)
 
 	queryInstantCmd := queryCmd.Command("instant", "Run instant query.")
-	queryInstantServer := queryInstantCmd.Arg("server", "Prometheus server to query.").Required().URL()
+	queryInstantCmd.Arg("server", "Prometheus server to query.").Required().URLVar(&serverURL)
 	queryInstantExpr := queryInstantCmd.Arg("expr", "PromQL query expression.").Required().String()
 	queryInstantTime := queryInstantCmd.Flag("time", "Query evaluation time (RFC3339 or Unix timestamp).").String()
 
 	queryRangeCmd := queryCmd.Command("range", "Run range query.")
-	queryRangeServer := queryRangeCmd.Arg("server", "Prometheus server to query.").Required().URL()
+	queryRangeCmd.Arg("server", "Prometheus server to query.").Required().URLVar(&serverURL)
 	queryRangeExpr := queryRangeCmd.Arg("expr", "PromQL query expression.").Required().String()
 	queryRangeHeaders := queryRangeCmd.Flag("header", "Extra headers to send to server.").StringMap()
 	queryRangeBegin := queryRangeCmd.Flag("start", "Query range start time (RFC3339 or Unix timestamp).").String()

@@ -139,7 +147,7 @@ func main() {
 	queryRangeStep := queryRangeCmd.Flag("step", "Query step size (duration).").Duration()
 
 	querySeriesCmd := queryCmd.Command("series", "Run series query.")
-	querySeriesServer := querySeriesCmd.Arg("server", "Prometheus server to query.").Required().URL()
+	querySeriesCmd.Arg("server", "Prometheus server to query.").Required().URLVar(&serverURL)
 	querySeriesMatch := querySeriesCmd.Flag("match", "Series selector. Can be specified multiple times.").Required().Strings()
 	querySeriesBegin := querySeriesCmd.Flag("start", "Start time (RFC3339 or Unix timestamp).").String()
 	querySeriesEnd := querySeriesCmd.Flag("end", "End time (RFC3339 or Unix timestamp).").String()

@@ -153,7 +161,7 @@ func main() {
 	debugAllServer := debugAllCmd.Arg("server", "Prometheus server to get all debug information from.").Required().String()
 
 	queryLabelsCmd := queryCmd.Command("labels", "Run labels query.")
-	queryLabelsServer := queryLabelsCmd.Arg("server", "Prometheus server to query.").Required().URL()
+	queryLabelsCmd.Arg("server", "Prometheus server to query.").Required().URLVar(&serverURL)
 	queryLabelsName := queryLabelsCmd.Arg("name", "Label name to provide label values for.").Required().String()
 	queryLabelsBegin := queryLabelsCmd.Flag("start", "Start time (RFC3339 or Unix timestamp).").String()
 	queryLabelsEnd := queryLabelsCmd.Flag("end", "End time (RFC3339 or Unix timestamp).").String()

@@ -200,7 +208,8 @@ func main() {
 	importFilePath := openMetricsImportCmd.Arg("input file", "OpenMetrics file to read samples from.").Required().String()
 	importDBPath := openMetricsImportCmd.Arg("output directory", "Output directory for generated blocks.").Default(defaultDBPath).String()
 	importRulesCmd := importCmd.Command("rules", "Create blocks of data for new recording rules.")
-	importRulesURL := importRulesCmd.Flag("url", "The URL for the Prometheus API with the data where the rule will be backfilled from.").Default("http://localhost:9090").URL()
+	importRulesCmd.Flag("http.config.file", "HTTP client configuration file for promtool to connect to Prometheus.").PlaceHolder("<filename>").ExistingFileVar(&httpConfigFilePath)
+	importRulesCmd.Flag("url", "The URL for the Prometheus API with the data where the rule will be backfilled from.").Default("http://localhost:9090").URLVar(&serverURL)
 	importRulesStart := importRulesCmd.Flag("start", "The time to start backfilling the new rule from. Must be a RFC3339 formatted date or Unix timestamp. Required.").
 		Required().String()
 	importRulesEnd := importRulesCmd.Flag("end", "If an end time is provided, all recording rules in the rule files provided will be backfilled to the end time. Default will backfill up to 3 hours ago. Must be a RFC3339 formatted date or Unix timestamp.").String()

@@ -224,6 +233,22 @@ func main() {
 		p = &promqlPrinter{}
 	}
 
+	if httpConfigFilePath != "" {
+		if serverURL != nil && serverURL.User.Username() != "" {
+			kingpin.Fatalf("Cannot set base auth in the server URL and use a http.config.file at the same time")
+		}
+		var err error
+		httpConfig, _, err := config_util.LoadHTTPConfigFile(httpConfigFilePath)
+		if err != nil {
+			kingpin.Fatalf("Failed to load HTTP config file: %v", err)
+		}
+
+		httpRoundTripper, err = promconfig.NewRoundTripperFromConfig(*httpConfig, "promtool", config_util.WithUserAgent("promtool/"+version.Version))
+		if err != nil {
+			kingpin.Fatalf("Failed to create a new HTTP round tripper: %v", err)
+		}
+	}
+
 	var noDefaultScrapePort bool
 	for _, f := range *featureList {
 		opts := strings.Split(f, ",")
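
The block above is the heart of the feature: one `--http.config.file` flag produces a single round tripper that every query subcommand reuses. A hedged, standalone sketch of the same wiring (file path and address are placeholders; the helpers are the ones from `github.com/prometheus/common/config` and `github.com/prometheus/client_golang/api` used in the diff):

```go
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/prometheus/client_golang/api"
    v1 "github.com/prometheus/client_golang/api/prometheus/v1"
    config_util "github.com/prometheus/common/config"
)

func main() {
    // Load TLS/auth settings from an http.config.file-style YAML document.
    httpConfig, _, err := config_util.LoadHTTPConfigFile("http_config.yml") // placeholder path
    if err != nil {
        panic(err)
    }
    rt, err := config_util.NewRoundTripperFromConfig(*httpConfig, "example")
    if err != nil {
        panic(err)
    }

    // Hand the authenticated transport to the API client, mirroring what
    // QueryInstant and friends now do with httpRoundTripper.
    client, err := api.NewClient(api.Config{
        Address:      "http://localhost:9090", // placeholder server
        RoundTripper: rt,
    })
    if err != nil {
        panic(err)
    }
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    val, _, err := v1.NewAPI(client).Query(ctx, "up", time.Now())
    if err != nil {
        panic(err)
    }
    fmt.Println(val)
}
```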

@@ -258,13 +283,13 @@ func main() {
 		os.Exit(CheckMetrics(*checkMetricsExtended))
 
 	case queryInstantCmd.FullCommand():
-		os.Exit(QueryInstant(*queryInstantServer, *queryInstantExpr, *queryInstantTime, p))
+		os.Exit(QueryInstant(serverURL, httpRoundTripper, *queryInstantExpr, *queryInstantTime, p))
 
 	case queryRangeCmd.FullCommand():
-		os.Exit(QueryRange(*queryRangeServer, *queryRangeHeaders, *queryRangeExpr, *queryRangeBegin, *queryRangeEnd, *queryRangeStep, p))
+		os.Exit(QueryRange(serverURL, httpRoundTripper, *queryRangeHeaders, *queryRangeExpr, *queryRangeBegin, *queryRangeEnd, *queryRangeStep, p))
 
 	case querySeriesCmd.FullCommand():
-		os.Exit(QuerySeries(*querySeriesServer, *querySeriesMatch, *querySeriesBegin, *querySeriesEnd, p))
+		os.Exit(QuerySeries(serverURL, httpRoundTripper, *querySeriesMatch, *querySeriesBegin, *querySeriesEnd, p))
 
 	case debugPprofCmd.FullCommand():
 		os.Exit(debugPprof(*debugPprofServer))

@@ -276,7 +301,7 @@ func main() {
 		os.Exit(debugAll(*debugAllServer))
 
 	case queryLabelsCmd.FullCommand():
-		os.Exit(QueryLabels(*queryLabelsServer, *queryLabelsMatch, *queryLabelsName, *queryLabelsBegin, *queryLabelsEnd, p))
+		os.Exit(QueryLabels(serverURL, httpRoundTripper, *queryLabelsMatch, *queryLabelsName, *queryLabelsBegin, *queryLabelsEnd, p))
 
 	case testRulesCmd.FullCommand():
 		os.Exit(RulesUnitTest(

@@ -303,7 +328,7 @@ func main() {
 		os.Exit(backfillOpenMetrics(*importFilePath, *importDBPath, *importHumanReadable, *importQuiet, *maxBlockDuration))
 
 	case importRulesCmd.FullCommand():
-		os.Exit(checkErr(importRules(*importRulesURL, *importRulesStart, *importRulesEnd, *importRulesOutputDir, *importRulesEvalInterval, *maxBlockDuration, *importRulesFiles...)))
+		os.Exit(checkErr(importRules(serverURL, httpRoundTripper, *importRulesStart, *importRulesEnd, *importRulesOutputDir, *importRulesEvalInterval, *maxBlockDuration, *importRulesFiles...)))
 	}
 }

@@ -438,7 +463,18 @@ func checkConfig(agentMode bool, filename string, checkSyntaxOnly bool) ([]strin
 		}
 	}
 
-	for _, scfg := range cfg.ScrapeConfigs {
+	var scfgs []*config.ScrapeConfig
+	if checkSyntaxOnly {
+		scfgs = cfg.ScrapeConfigs
+	} else {
+		var err error
+		scfgs, err = cfg.GetScrapeConfigs()
+		if err != nil {
+			return nil, fmt.Errorf("error loading scrape configs: %w", err)
+		}
+	}
+
+	for _, scfg := range scfgs {
 		if !checkSyntaxOnly && scfg.HTTPClientConfig.Authorization != nil {
 			if err := checkFileExists(scfg.HTTPClientConfig.Authorization.CredentialsFile); err != nil {
 				return nil, fmt.Errorf("error checking authorization credentials or bearer token file %q: %w", scfg.HTTPClientConfig.Authorization.CredentialsFile, err)

@@ -795,12 +831,13 @@ func checkMetricsExtended(r io.Reader) ([]metricStat, int, error) {
 }
 
 // QueryInstant performs an instant query against a Prometheus server.
-func QueryInstant(url *url.URL, query, evalTime string, p printer) int {
+func QueryInstant(url *url.URL, roundTripper http.RoundTripper, query, evalTime string, p printer) int {
 	if url.Scheme == "" {
 		url.Scheme = "http"
 	}
 	config := api.Config{
-		Address: url.String(),
+		Address:      url.String(),
+		RoundTripper: roundTripper,
 	}
 
 	// Create new client.

@@ -835,12 +872,13 @@ func QueryInstant(url *url.URL, query, evalTime string, p printer) int {
 }
 
 // QueryRange performs a range query against a Prometheus server.
-func QueryRange(url *url.URL, headers map[string]string, query, start, end string, step time.Duration, p printer) int {
+func QueryRange(url *url.URL, roundTripper http.RoundTripper, headers map[string]string, query, start, end string, step time.Duration, p printer) int {
 	if url.Scheme == "" {
 		url.Scheme = "http"
 	}
 	config := api.Config{
-		Address: url.String(),
+		Address:      url.String(),
+		RoundTripper: roundTripper,
 	}
 
 	if len(headers) > 0 {

@@ -848,7 +886,7 @@ func QueryRange(url *url.URL, headers map[string]string, query, start, end string, step time.Duration, p printer) int {
 			for key, value := range headers {
 				req.Header.Add(key, value)
 			}
-			return http.DefaultTransport.RoundTrip(req)
+			return roundTripper.RoundTrip(req)
 		})
 	}
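
The `--header` handling above now wraps the configured round tripper rather than hardcoding `http.DefaultTransport`, so extra headers and the `http.config.file` transport compose. The same decorator pattern in isolation (header name and URL are arbitrary examples):

```go
package main

import (
    "fmt"
    "net/http"
)

// headerRoundTripper adds fixed headers before delegating to an inner
// RoundTripper — the shape QueryRange builds around roundTripper above.
type headerRoundTripper struct {
    headers map[string]string
    next    http.RoundTripper
}

func (h headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
    for k, v := range h.headers {
        req.Header.Add(k, v)
    }
    return h.next.RoundTrip(req)
}

func main() {
    client := &http.Client{Transport: headerRoundTripper{
        headers: map[string]string{"X-Example-Header": "demo"}, // arbitrary example header
        next:    http.DefaultTransport,
    }}
    resp, err := client.Get("http://localhost:9090/api/v1/status/buildinfo") // placeholder URL
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}
```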

@@ -908,12 +946,13 @@ func QueryRange(url *url.URL, headers map[string]string, query, start, end string, step time.Duration, p printer) int {
 }
 
 // QuerySeries queries for a series against a Prometheus server.
-func QuerySeries(url *url.URL, matchers []string, start, end string, p printer) int {
+func QuerySeries(url *url.URL, roundTripper http.RoundTripper, matchers []string, start, end string, p printer) int {
 	if url.Scheme == "" {
 		url.Scheme = "http"
 	}
 	config := api.Config{
-		Address: url.String(),
+		Address:      url.String(),
+		RoundTripper: roundTripper,
 	}
 
 	// Create new client.

@@ -944,12 +983,13 @@ func QuerySeries(url *url.URL, matchers []string, start, end string, p printer) int {
 }
 
 // QueryLabels queries for label values against a Prometheus server.
-func QueryLabels(url *url.URL, matchers []string, name, start, end string, p printer) int {
+func QueryLabels(url *url.URL, roundTripper http.RoundTripper, matchers []string, name, start, end string, p printer) int {
 	if url.Scheme == "" {
 		url.Scheme = "http"
 	}
 	config := api.Config{
-		Address: url.String(),
+		Address:      url.String(),
+		RoundTripper: roundTripper,
 	}
 
 	// Create new client.

@@ -1154,7 +1194,7 @@ func (j *jsonPrinter) printLabelValues(v model.LabelValues) {
 
 // importRules backfills recording rules from the files provided. The output are blocks of data
 // at the outputDir location.
-func importRules(url *url.URL, start, end, outputDir string, evalInterval, maxBlockDuration time.Duration, files ...string) error {
+func importRules(url *url.URL, roundTripper http.RoundTripper, start, end, outputDir string, evalInterval, maxBlockDuration time.Duration, files ...string) error {
 	ctx := context.Background()
 	var stime, etime time.Time
 	var err error

@@ -1184,7 +1224,8 @@ func importRules(url *url.URL, start, end, outputDir string, evalInterval, maxBl
 		maxBlockDuration: maxBlockDuration,
 	}
 	client, err := api.NewClient(api.Config{
-		Address: url.String(),
+		Address:      url.String(),
+		RoundTripper: roundTripper,
 	})
 	if err != nil {
 		return fmt.Errorf("new api client error: %w", err)

@@ -56,14 +56,14 @@ func TestQueryRange(t *testing.T) {
 	require.Equal(t, nil, err)
 
 	p := &promqlPrinter{}
-	exitCode := QueryRange(urlObject, map[string]string{}, "up", "0", "300", 0, p)
+	exitCode := QueryRange(urlObject, http.DefaultTransport, map[string]string{}, "up", "0", "300", 0, p)
 	require.Equal(t, "/api/v1/query_range", getRequest().URL.Path)
 	form := getRequest().Form
 	require.Equal(t, "up", form.Get("query"))
 	require.Equal(t, "1", form.Get("step"))
 	require.Equal(t, 0, exitCode)
 
-	exitCode = QueryRange(urlObject, map[string]string{}, "up", "0", "300", 10*time.Millisecond, p)
+	exitCode = QueryRange(urlObject, http.DefaultTransport, map[string]string{}, "up", "0", "300", 10*time.Millisecond, p)
 	require.Equal(t, "/api/v1/query_range", getRequest().URL.Path)
 	form = getRequest().Form
 	require.Equal(t, "up", form.Get("query"))

@@ -79,7 +79,7 @@ func TestQueryInstant(t *testing.T) {
 	require.Equal(t, nil, err)
 
 	p := &promqlPrinter{}
-	exitCode := QueryInstant(urlObject, "up", "300", p)
+	exitCode := QueryInstant(urlObject, http.DefaultTransport, "up", "300", p)
 	require.Equal(t, "/api/v1/query", getRequest().URL.Path)
 	form := getRequest().Form
 	require.Equal(t, "up", form.Get("query"))

@@ -47,9 +47,15 @@ func CheckSD(sdConfigFiles, sdJobName string, sdTimeout time.Duration, noDefault
 	}
 
 	var scrapeConfig *config.ScrapeConfig
+	scfgs, err := cfg.GetScrapeConfigs()
+	if err != nil {
+		fmt.Fprintln(os.Stderr, "Cannot load scrape configs", err)
+		return failureExitCode
+	}
+
 	jobs := []string{}
 	jobMatched := false
-	for _, v := range cfg.ScrapeConfigs {
+	for _, v := range scfgs {
 		jobs = append(jobs, v.JobName)
 		if v.JobName == sdJobName {
 			jobMatched = true

config/config.go — 132 changes

@@ -216,12 +216,13 @@ var (
 
 // Config is the top-level configuration for Prometheus's config files.
 type Config struct {
-	GlobalConfig   GlobalConfig    `yaml:"global"`
-	AlertingConfig AlertingConfig  `yaml:"alerting,omitempty"`
-	RuleFiles      []string        `yaml:"rule_files,omitempty"`
-	ScrapeConfigs  []*ScrapeConfig `yaml:"scrape_configs,omitempty"`
-	StorageConfig  StorageConfig   `yaml:"storage,omitempty"`
-	TracingConfig  TracingConfig   `yaml:"tracing,omitempty"`
+	GlobalConfig      GlobalConfig    `yaml:"global"`
+	AlertingConfig    AlertingConfig  `yaml:"alerting,omitempty"`
+	RuleFiles         []string        `yaml:"rule_files,omitempty"`
+	ScrapeConfigFiles []string        `yaml:"scrape_config_files,omitempty"`
+	ScrapeConfigs     []*ScrapeConfig `yaml:"scrape_configs,omitempty"`
+	StorageConfig     StorageConfig   `yaml:"storage,omitempty"`
+	TracingConfig     TracingConfig   `yaml:"tracing,omitempty"`
 
 	RemoteWriteConfigs []*RemoteWriteConfig `yaml:"remote_write,omitempty"`
 	RemoteReadConfigs  []*RemoteReadConfig  `yaml:"remote_read,omitempty"`

@@ -235,6 +236,9 @@ func (c *Config) SetDirectory(dir string) {
 	for i, file := range c.RuleFiles {
 		c.RuleFiles[i] = config.JoinDir(dir, file)
 	}
+	for i, file := range c.ScrapeConfigFiles {
+		c.ScrapeConfigFiles[i] = config.JoinDir(dir, file)
+	}
 	for _, c := range c.ScrapeConfigs {
 		c.SetDirectory(dir)
 	}

@@ -254,6 +258,58 @@ func (c Config) String() string {
 	return string(b)
 }
 
+// GetScrapeConfigs returns the scrape configurations.
+func (c *Config) GetScrapeConfigs() ([]*ScrapeConfig, error) {
+	scfgs := make([]*ScrapeConfig, len(c.ScrapeConfigs))
+
+	jobNames := map[string]string{}
+	for i, scfg := range c.ScrapeConfigs {
+		// We do these checks for library users that would not call Validate in
+		// Unmarshal.
+		if err := scfg.Validate(c.GlobalConfig.ScrapeInterval, c.GlobalConfig.ScrapeTimeout); err != nil {
+			return nil, err
+		}
+
+		if _, ok := jobNames[scfg.JobName]; ok {
+			return nil, fmt.Errorf("found multiple scrape configs with job name %q", scfg.JobName)
+		}
+		jobNames[scfg.JobName] = "main config file"
+		scfgs[i] = scfg
+	}
+	for _, pat := range c.ScrapeConfigFiles {
+		fs, err := filepath.Glob(pat)
+		if err != nil {
+			// The only error can be a bad pattern.
+			return nil, fmt.Errorf("error retrieving scrape config files for %q: %w", pat, err)
+		}
+		for _, filename := range fs {
+			cfg := ScrapeConfigs{}
+			content, err := os.ReadFile(filename)
+			if err != nil {
+				return nil, fileErr(filename, err)
+			}
+			err = yaml.UnmarshalStrict(content, &cfg)
+			if err != nil {
+				return nil, fileErr(filename, err)
+			}
+			for _, scfg := range cfg.ScrapeConfigs {
+				if err := scfg.Validate(c.GlobalConfig.ScrapeInterval, c.GlobalConfig.ScrapeTimeout); err != nil {
+					return nil, fileErr(filename, err)
+				}
+
+				if f, ok := jobNames[scfg.JobName]; ok {
+					return nil, fileErr(filename, fmt.Errorf("found multiple scrape configs with job name %q, first found in %s", scfg.JobName, f))
+				}
+				jobNames[scfg.JobName] = fmt.Sprintf("%q", filePath(filename))
+
+				scfg.SetDirectory(filepath.Dir(filename))
+				scfgs = append(scfgs, scfg)
+			}
+		}
+	}
+	return scfgs, nil
+}
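
For library users, `GetScrapeConfigs` replaces direct reads of `cfg.ScrapeConfigs`, since only the former expands `scrape_config_files` globs and validates the merged result. A minimal sketch of calling it (the config path is a placeholder; `LoadFile` is this package's existing loader):

```go
package main

import (
    "fmt"

    "github.com/go-kit/log"

    "github.com/prometheus/prometheus/config"
)

func main() {
    cfg, err := config.LoadFile("prometheus.yml", false, false, log.NewNopLogger()) // placeholder path
    if err != nil {
        panic(err)
    }
    // GetScrapeConfigs validates inline jobs and merges in every job found
    // via scrape_config_files globs, rejecting duplicate job names.
    scfgs, err := cfg.GetScrapeConfigs()
    if err != nil {
        panic(err)
    }
    for _, sc := range scfgs {
        fmt.Println(sc.JobName, sc.ScrapeInterval, sc.ScrapeTimeout)
    }
}
```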

@@ -276,26 +332,18 @@ func (c *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
 			return fmt.Errorf("invalid rule file path %q", rf)
 		}
 	}
+
+	for _, sf := range c.ScrapeConfigFiles {
+		if !patRulePath.MatchString(sf) {
+			return fmt.Errorf("invalid scrape config file path %q", sf)
+		}
+	}
+
 	// Do global overrides and validate unique names.
 	jobNames := map[string]struct{}{}
 	for _, scfg := range c.ScrapeConfigs {
-		if scfg == nil {
-			return errors.New("empty or null scrape config section")
-		}
-		// First set the correct scrape interval, then check that the timeout
-		// (inferred or explicit) is not greater than that.
-		if scfg.ScrapeInterval == 0 {
-			scfg.ScrapeInterval = c.GlobalConfig.ScrapeInterval
-		}
-		if scfg.ScrapeTimeout > scfg.ScrapeInterval {
-			return fmt.Errorf("scrape timeout greater than scrape interval for scrape config with job name %q", scfg.JobName)
-		}
-		if scfg.ScrapeTimeout == 0 {
-			if c.GlobalConfig.ScrapeTimeout > scfg.ScrapeInterval {
-				scfg.ScrapeTimeout = scfg.ScrapeInterval
-			} else {
-				scfg.ScrapeTimeout = c.GlobalConfig.ScrapeTimeout
-			}
+		if err := scfg.Validate(c.GlobalConfig.ScrapeInterval, c.GlobalConfig.ScrapeTimeout); err != nil {
+			return err
 		}
 
 		if _, ok := jobNames[scfg.JobName]; ok {

@@ -401,6 +449,10 @@ func (c *GlobalConfig) isZero() bool {
 		c.QueryLogFile == ""
 }
 
+type ScrapeConfigs struct {
+	ScrapeConfigs []*ScrapeConfig `yaml:"scrape_configs,omitempty"`
+}
+
 // ScrapeConfig configures a scraping unit for Prometheus.
 type ScrapeConfig struct {
 	// The job name to which the job label is set by default.

@@ -494,6 +546,28 @@ func (c *ScrapeConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	return nil
 }
 
+func (c *ScrapeConfig) Validate(defaultInterval, defaultTimeout model.Duration) error {
+	if c == nil {
+		return errors.New("empty or null scrape config section")
+	}
+	// First set the correct scrape interval, then check that the timeout
+	// (inferred or explicit) is not greater than that.
+	if c.ScrapeInterval == 0 {
+		c.ScrapeInterval = defaultInterval
+	}
+	if c.ScrapeTimeout > c.ScrapeInterval {
+		return fmt.Errorf("scrape timeout greater than scrape interval for scrape config with job name %q", c.JobName)
+	}
+	if c.ScrapeTimeout == 0 {
+		if defaultTimeout > c.ScrapeInterval {
+			c.ScrapeTimeout = c.ScrapeInterval
+		} else {
+			c.ScrapeTimeout = defaultTimeout
+		}
+	}
+	return nil
+}
+
 // MarshalYAML implements the yaml.Marshaler interface.
 func (c *ScrapeConfig) MarshalYAML() (interface{}, error) {
 	return discovery.MarshalYAMLWithInlineConfigs(c)

@@ -936,3 +1010,15 @@ func (c *RemoteReadConfig) UnmarshalYAML(unmarshal func(interface{}) error) erro
 	// Thus we just do its validation here.
 	return c.HTTPClientConfig.Validate()
 }
+
+func filePath(filename string) string {
+	absPath, err := filepath.Abs(filename)
+	if err != nil {
+		return filename
+	}
+	return absPath
+}
+
+func fileErr(filename string, err error) error {
+	return fmt.Errorf("%q: %w", filePath(filename), err)
+}
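
The extracted `Validate` makes the interval/timeout defaulting reusable by both `UnmarshalYAML` and `GetScrapeConfigs`. A small hedged sketch of its behavior (values chosen for illustration):

```go
package main

import (
    "fmt"
    "time"

    "github.com/prometheus/common/model"

    "github.com/prometheus/prometheus/config"
)

func main() {
    // No interval/timeout on the job: Validate copies in the defaults.
    sc := &config.ScrapeConfig{JobName: "example"}
    if err := sc.Validate(model.Duration(time.Minute), model.Duration(10*time.Second)); err != nil {
        panic(err)
    }
    fmt.Println(sc.ScrapeInterval, sc.ScrapeTimeout) // expect 1m and 10s

    // An explicit timeout larger than the interval is rejected.
    bad := &config.ScrapeConfig{
        JobName:        "too-slow",
        ScrapeInterval: model.Duration(10 * time.Second),
        ScrapeTimeout:  model.Duration(20 * time.Second),
    }
    fmt.Println(bad.Validate(model.Duration(time.Minute), model.Duration(10*time.Second)))
}
```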

@@ -1693,6 +1693,10 @@ var expectedErrors = []struct {
 		filename: "ovhcloud_bad_service.bad.yml",
 		errMsg:   "unknown service: fakeservice",
 	},
+	{
+		filename: "scrape_config_files_glob.bad.yml",
+		errMsg:   `parsing YAML file testdata/scrape_config_files_glob.bad.yml: invalid scrape config file path "scrape_configs/*/*"`,
+	},
 }
 
 func TestBadConfigs(t *testing.T) {

@@ -1779,6 +1783,156 @@ func TestEmptyGlobalBlock(t *testing.T) {
 	require.Equal(t, exp, *c)
 }
 
+func TestGetScrapeConfigs(t *testing.T) {
+	sc := func(jobName string, scrapeInterval, scrapeTimeout model.Duration) *ScrapeConfig {
+		return &ScrapeConfig{
+			JobName:          jobName,
+			HonorTimestamps:  true,
+			ScrapeInterval:   scrapeInterval,
+			ScrapeTimeout:    scrapeTimeout,
+			MetricsPath:      "/metrics",
+			Scheme:           "http",
+			HTTPClientConfig: config.DefaultHTTPClientConfig,
+			ServiceDiscoveryConfigs: discovery.Configs{
+				discovery.StaticConfig{
+					{
+						Targets: []model.LabelSet{
+							{
+								model.AddressLabel: "localhost:8080",
+							},
+						},
+						Source: "0",
+					},
+				},
+			},
+		}
+	}
+
+	testCases := []struct {
+		name           string
+		configFile     string
+		expectedResult []*ScrapeConfig
+		expectedError  string
+	}{
+		{
+			name:           "An included config file should be a valid global config.",
+			configFile:     "testdata/scrape_config_files.good.yml",
+			expectedResult: []*ScrapeConfig{sc("prometheus", model.Duration(60*time.Second), model.Duration(10*time.Second))},
+		},
+		{
+			name:           "A global config that only includes a scrape config file.",
+			configFile:     "testdata/scrape_config_files_only.good.yml",
+			expectedResult: []*ScrapeConfig{sc("prometheus", model.Duration(60*time.Second), model.Duration(10*time.Second))},
+		},
+		{
+			name:       "A global config that combines scrape config files and scrape configs.",
+			configFile: "testdata/scrape_config_files_combined.good.yml",
+			expectedResult: []*ScrapeConfig{
+				sc("node", model.Duration(60*time.Second), model.Duration(10*time.Second)),
+				sc("prometheus", model.Duration(60*time.Second), model.Duration(10*time.Second)),
+				sc("alertmanager", model.Duration(60*time.Second), model.Duration(10*time.Second)),
+			},
+		},
+		{
+			name:       "A global config that includes a scrape config file with globs.",
+			configFile: "testdata/scrape_config_files_glob.good.yml",
+			expectedResult: []*ScrapeConfig{
+				{
+					JobName: "prometheus",
+
+					HonorTimestamps: true,
+					ScrapeInterval:  model.Duration(60 * time.Second),
+					ScrapeTimeout:   DefaultGlobalConfig.ScrapeTimeout,
+
+					MetricsPath: DefaultScrapeConfig.MetricsPath,
+					Scheme:      DefaultScrapeConfig.Scheme,
+
+					HTTPClientConfig: config.HTTPClientConfig{
+						TLSConfig: config.TLSConfig{
+							CertFile: filepath.FromSlash("testdata/scrape_configs/valid_cert_file"),
+							KeyFile:  filepath.FromSlash("testdata/scrape_configs/valid_key_file"),
+						},
+						FollowRedirects: true,
+						EnableHTTP2:     true,
+					},
+
+					ServiceDiscoveryConfigs: discovery.Configs{
+						discovery.StaticConfig{
+							{
+								Targets: []model.LabelSet{
+									{model.AddressLabel: "localhost:8080"},
+								},
+								Source: "0",
+							},
+						},
+					},
+				},
+				{
+					JobName: "node",
+
+					HonorTimestamps: true,
+					ScrapeInterval:  model.Duration(15 * time.Second),
+					ScrapeTimeout:   DefaultGlobalConfig.ScrapeTimeout,
+					HTTPClientConfig: config.HTTPClientConfig{
+						TLSConfig: config.TLSConfig{
+							CertFile: filepath.FromSlash("testdata/valid_cert_file"),
+							KeyFile:  filepath.FromSlash("testdata/valid_key_file"),
+						},
+						FollowRedirects: true,
+						EnableHTTP2:     true,
+					},
+
+					MetricsPath: DefaultScrapeConfig.MetricsPath,
+					Scheme:      DefaultScrapeConfig.Scheme,
+
+					ServiceDiscoveryConfigs: discovery.Configs{
+						&vultr.SDConfig{
+							HTTPClientConfig: config.HTTPClientConfig{
+								Authorization: &config.Authorization{
+									Type:        "Bearer",
+									Credentials: "abcdef",
+								},
+								FollowRedirects: true,
+								EnableHTTP2:     true,
+							},
+							Port:            80,
+							RefreshInterval: model.Duration(60 * time.Second),
+						},
+					},
+				},
+			},
+		},
+		{
+			name:          "A global config that includes the same scrape config file twice.",
+			configFile:    "testdata/scrape_config_files_double_import.bad.yml",
+			expectedError: `found multiple scrape configs with job name "prometheus"`,
+		},
+		{
+			name:          "A global config that includes a scrape config identical to a scrape config in the main file.",
+			configFile:    "testdata/scrape_config_files_duplicate.bad.yml",
+			expectedError: `found multiple scrape configs with job name "prometheus"`,
+		},
+		{
+			name:          "A global config that includes a scrape config file with errors.",
+			configFile:    "testdata/scrape_config_files_global.bad.yml",
+			expectedError: `scrape timeout greater than scrape interval for scrape config with job name "prometheus"`,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			c, err := LoadFile(tc.configFile, false, false, log.NewNopLogger())
+			require.NoError(t, err)
+
+			scfgs, err := c.GetScrapeConfigs()
+			if len(tc.expectedError) > 0 {
+				require.ErrorContains(t, err, tc.expectedError)
+			}
+			require.Equal(t, tc.expectedResult, scfgs)
+		})
+	}
+}
+
 func kubernetesSDHostURL() config.URL {
 	tURL, _ := url.Parse("https://localhost:1234")
 	return config.URL{URL: tURL}

config/testdata/scrape_config_files.bad.yml — new file (vendored), 6 lines

@@ -0,0 +1,6 @@
+scrape_configs:
+  - job_name: prometheus
+    scrape_interval: 10s
+    scrape_timeout: 20s
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files.good.yml — new file (vendored), 4 lines

@@ -0,0 +1,4 @@
+scrape_configs:
+  - job_name: prometheus
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files2.good.yml — new file (vendored), 4 lines

@@ -0,0 +1,4 @@
+scrape_configs:
+  - job_name: alertmanager
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files_combined.good.yml — new file (vendored), 7 lines

@@ -0,0 +1,7 @@
+scrape_config_files:
+  - scrape_config_files.good.yml
+  - scrape_config_files2.good.yml
+scrape_configs:
+  - job_name: node
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files_double_import.bad.yml — new file (vendored), 3 lines

@@ -0,0 +1,3 @@
+scrape_config_files:
+  - scrape_config_files.good.yml
+  - scrape_config_files.good.yml

config/testdata/scrape_config_files_duplicate.bad.yml — new file (vendored), 6 lines

@@ -0,0 +1,6 @@
+scrape_config_files:
+  - scrape_config_files.good.yml
+scrape_configs:
+  - job_name: prometheus
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files_glob.bad.yml — new file (vendored), 6 lines

@@ -0,0 +1,6 @@
+scrape_config_files:
+  - scrape_configs/*/*
+scrape_configs:
+  - job_name: node
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files_glob.good.yml — new file (vendored), 2 lines

@@ -0,0 +1,2 @@
+scrape_config_files:
+  - scrape_configs/*.yml

config/testdata/scrape_config_files_global.bad.yml — new file (vendored), 2 lines

@@ -0,0 +1,2 @@
+scrape_config_files:
+  - scrape_config_files.bad.yml

config/testdata/scrape_config_files_global_duplicate.bad.yml — new file (vendored), 11 lines

@@ -0,0 +1,11 @@
+global:
+  scrape_interval: 15s
+
+scrape_config_files:
+  - scrape_config_files.good.yml
+  - scrape_config_files.good.yml
+
+scrape_configs:
+  - job_name: prometheus
+    static_configs:
+      - targets: ['localhost:8080']

config/testdata/scrape_config_files_only.good.yml — new file (vendored), 2 lines

@@ -0,0 +1,2 @@
+scrape_config_files:
+  - scrape_config_files.good.yml

config/testdata/scrape_configs/scrape_config_files1.good.yml — new file (vendored), 7 lines

@@ -0,0 +1,7 @@
+scrape_configs:
+  - job_name: prometheus
+    static_configs:
+      - targets: ['localhost:8080']
+    tls_config:
+      cert_file: valid_cert_file
+      key_file: valid_key_file

config/testdata/scrape_configs/scrape_config_files2.good.yml — new file (vendored), 9 lines

@@ -0,0 +1,9 @@
+scrape_configs:
+  - job_name: node
+    scrape_interval: 15s
+    tls_config:
+      cert_file: ../valid_cert_file
+      key_file: ../valid_key_file
+    vultr_sd_configs:
+      - authorization:
+          credentials: abcdef

@@ -365,7 +365,7 @@ func TestGetDatacenterShouldReturnError(t *testing.T) {
 		{
 			// Define a handler that will return status 500.
 			handler: func(w http.ResponseWriter, r *http.Request) {
-				w.WriteHeader(500)
+				w.WriteHeader(http.StatusInternalServerError)
 			},
 			errMessage: "Unexpected response code: 500 ()",
 		},

@@ -74,16 +74,16 @@ func createTestHTTPServer(t *testing.T, responder discoveryResponder) *httptest.Server {
 
 		discoveryRes, err := responder(discoveryReq)
 		if err != nil {
-			w.WriteHeader(500)
+			w.WriteHeader(http.StatusInternalServerError)
 			return
 		}
 
 		if discoveryRes == nil {
-			w.WriteHeader(304)
+			w.WriteHeader(http.StatusNotModified)
 			return
 		}
 
-		w.WriteHeader(200)
+		w.WriteHeader(http.StatusOK)
 		data, err := protoJSONMarshalOptions.Marshal(discoveryRes)
 		require.NoError(t, err)

@@ -78,6 +78,11 @@ global:
 rule_files:
   [ - <filepath_glob> ... ]
 
+# Scrape config files specifies a list of globs. Scrape configs are read from
+# all matching files and appended to the list of scrape configs.
+scrape_config_files:
+  [ - <filepath_glob> ... ]
+
 # A list of scrape configurations.
 scrape_configs:
   [ - <scrape_config> ... ]
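
To make the documented knob concrete, a hypothetical main config that pulls jobs from a directory could look like this (paths invented for illustration; the testdata files in this diff follow the same shape):

```yaml
# prometheus.yml — illustrative only.
global:
  scrape_interval: 15s

scrape_config_files:
  - scrape_configs/*.yml    # each matched file carries its own scrape_configs list

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
```

Note that relative paths inside an included file (TLS certificates, for example) resolve against that file's directory, because `GetScrapeConfigs` calls `scfg.SetDirectory(filepath.Dir(filename))` on each included job.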

@@ -9,7 +9,7 @@ require (
 	github.com/influxdata/influxdb v1.11.0
 	github.com/prometheus/client_golang v1.14.0
 	github.com/prometheus/common v0.37.0
-	github.com/stretchr/testify v1.8.1
+	github.com/stretchr/testify v1.8.2
 	gopkg.in/alecthomas/kingpin.v2 v2.2.6
 )
 

@@ -228,8 +228,8 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
-github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
+github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
 github.com/vultr/govultr/v2 v2.17.2 h1:gej/rwr91Puc/tgh+j33p/BLR16UrIPnSr+AIwYWZQs=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=

go.mod — 13 changes

@@ -7,10 +7,9 @@ require (
 	github.com/Azure/go-autorest/autorest v0.11.28
 	github.com/Azure/go-autorest/autorest/adal v0.9.22
 	github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137
-	github.com/aws/aws-sdk-go v1.44.187
+	github.com/aws/aws-sdk-go v1.44.207
 	github.com/cespare/xxhash/v2 v2.2.0
 	github.com/dennwc/varint v1.0.0
-	github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245
 	github.com/digitalocean/godo v1.95.0
 	github.com/docker/docker v20.10.23+incompatible
 	github.com/edsrzf/mmap-go v1.1.0

@@ -60,11 +59,11 @@ require (
 	go.opentelemetry.io/otel/trace v1.11.2
 	go.uber.org/atomic v1.10.0
 	go.uber.org/automaxprocs v1.5.1
-	go.uber.org/goleak v1.2.0
-	golang.org/x/net v0.5.0
+	go.uber.org/goleak v1.2.1
+	golang.org/x/net v0.7.0
 	golang.org/x/oauth2 v0.4.0
 	golang.org/x/sync v0.1.0
-	golang.org/x/sys v0.4.0
+	golang.org/x/sys v0.5.0
 	golang.org/x/time v0.3.0
 	golang.org/x/tools v0.5.0
 	google.golang.org/api v0.108.0

@@ -175,8 +174,8 @@ require (
 	golang.org/x/crypto v0.1.0 // indirect
 	golang.org/x/exp v0.0.0-20230124195608-d38c7dcee874
 	golang.org/x/mod v0.7.0 // indirect
-	golang.org/x/term v0.4.0 // indirect
-	golang.org/x/text v0.6.0 // indirect
+	golang.org/x/term v0.5.0 // indirect
+	golang.org/x/text v0.7.0 // indirect
 	google.golang.org/appengine v1.6.7 // indirect
 	gopkg.in/inf.v0 v0.9.1 // indirect
 	gopkg.in/ini.v1 v1.66.6 // indirect

go.sum — 27 changes

@@ -97,8 +97,8 @@ github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d/go.mod h1:W
 github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
 github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
 github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
-github.com/aws/aws-sdk-go v1.44.187 h1:D5CsRomPnlwDHJCanL2mtaLIcbhjiWxNh5j8zvaWdJA=
-github.com/aws/aws-sdk-go v1.44.187/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
+github.com/aws/aws-sdk-go v1.44.207 h1:7O0AMKxTm+/GUx6zw+3dqc+fD3tTzv8xaZPYo+ywRwE=
+github.com/aws/aws-sdk-go v1.44.207/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
 github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
 github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
 github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=

@@ -148,8 +148,6 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
 github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE=
 github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA=
 github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
-github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245 h1:9cOfvEwjQxdwKuNDTQSaMKNRvwKwgZG+U4HrjeRKHso=
-github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
 github.com/digitalocean/godo v1.95.0 h1:S48/byPKui7RHZc1wYEPfRvkcEvToADNb5I3guu95xg=
 github.com/digitalocean/godo v1.95.0/go.mod h1:NRpFznZFvhHjBoqZAaOD3khVzsJ3EibzKqFL4R60dmA=
 github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=

@@ -808,8 +806,8 @@ go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
 go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
 go.uber.org/automaxprocs v1.5.1 h1:e1YG66Lrk73dn4qhg8WFSvhF0JuFQF0ERIp4rpuV8Qk=
 go.uber.org/automaxprocs v1.5.1/go.mod h1:BF4eumQw0P9GtnuxxovUd06vwm1o18oMzFtK66vU6XU=
-go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
-go.uber.org/goleak v1.2.0/go.mod h1:XJYK+MuIchqpmGmUSAzotztawfKvYLUIgg7guXrwVUo=
+go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
+go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4=
 go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
 go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
 go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=

@@ -857,7 +855,6 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl
 golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
 golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
 golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
-golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
 golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
 golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
 golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=

@@ -919,8 +916,8 @@ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx
 golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
 golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
+golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@@ -1011,13 +1008,13 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@@ -1027,8 +1024,8 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
 golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=

model/labels/labels.go

@@ -11,16 +11,18 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
+//go:build !stringlabels
+
 package labels
 
 import (
 	"bytes"
 	"encoding/json"
-	"sort"
 	"strconv"
 
 	"github.com/cespare/xxhash/v2"
 	"github.com/prometheus/common/model"
+	"golang.org/x/exp/slices"
 )
 
 // Well-known label names used by Prometheus components.

@@ -358,7 +360,7 @@ func EmptyLabels() Labels {
 func New(ls ...Label) Labels {
 	set := make(Labels, 0, len(ls))
 	set = append(set, ls...)
-	sort.Sort(set)
+	slices.SortFunc(set, func(a, b Label) bool { return a.Name < b.Name })
 
 	return set
 }

@@ -382,7 +384,7 @@ func FromStrings(ss ...string) Labels {
 		res = append(res, Label{Name: ss[i], Value: ss[i+1]})
 	}
 
-	sort.Sort(res)
+	slices.SortFunc(res, func(a, b Label) bool { return a.Name < b.Name })
 	return res
 }
 

@@ -562,7 +564,7 @@ Outer:
 	}
 	if len(b.add) > 0 { // Base is already in order, so we only need to sort if we add to it.
 		res = append(res, b.add...)
-		sort.Sort(res)
+		slices.SortFunc(res, func(a, b Label) bool { return a.Name < b.Name })
 	}
 	return res
 }

@@ -589,7 +591,7 @@ func (b *ScratchBuilder) Add(name, value string) {
 
 // Sort the labels added so far by name.
 func (b *ScratchBuilder) Sort() {
-	sort.Sort(b.add)
+	slices.SortFunc(b.add, func(a, b Label) bool { return a.Name < b.Name })
 }
 
 // Assign is for when you already have a Labels which you want this ScratchBuilder to return.
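
The mechanical `sort.Sort` → `slices.SortFunc` swap is the entire optimization: the generic comparator is monomorphized and inlinable, avoiding `sort.Interface`'s per-comparison method calls. The pattern in isolation, against the same less-style `func(a, b T) bool` comparator this vintage of `golang.org/x/exp/slices` takes (later releases switched to an int-returning comparator):

```go
package main

import (
    "fmt"

    "golang.org/x/exp/slices"
)

type Label struct{ Name, Value string }

func main() {
    ls := []Label{{"job", "api"}, {"__name__", "up"}, {"instance", "localhost:9090"}}
    // Sort by name, exactly as labels.New and ScratchBuilder.Sort now do.
    slices.SortFunc(ls, func(a, b Label) bool { return a.Name < b.Name })
    fmt.Println(ls)
}
```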
788
model/labels/labels_string.go
Normal file

@ -0,0 +1,788 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//go:build stringlabels

package labels

import (
	"bytes"
	"encoding/json"
	"reflect"
	"strconv"
	"unsafe"

	"github.com/cespare/xxhash/v2"
	"github.com/prometheus/common/model"
	"golang.org/x/exp/slices"
)

// Well-known label names used by Prometheus components.
const (
	MetricName   = "__name__"
	AlertName    = "alertname"
	BucketLabel  = "le"
	InstanceName = "instance"
)

var seps = []byte{'\xff'}

// Label is a key/value pair of strings.
type Label struct {
	Name, Value string
}

// Labels is implemented by a single flat string holding name/value pairs.
// Each name and value is preceded by its length in varint encoding.
// Names are in order.
type Labels struct {
	data string
}

type labelSlice []Label

func (ls labelSlice) Len() int           { return len(ls) }
func (ls labelSlice) Swap(i, j int)      { ls[i], ls[j] = ls[j], ls[i] }
func (ls labelSlice) Less(i, j int) bool { return ls[i].Name < ls[j].Name }

func decodeSize(data string, index int) (int, int) {
	var size int
	for shift := uint(0); ; shift += 7 {
		// Just panic if we go off the end of data, since all Labels strings are constructed internally and
		// malformed data indicates a bug, or memory corruption.
		b := data[index]
		index++
		size |= int(b&0x7F) << shift
		if b < 0x80 {
			break
		}
	}
	return size, index
}

func decodeString(data string, index int) (string, int) {
	var size int
	size, index = decodeSize(data, index)
	return data[index : index+size], index + size
}
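For reference, a minimal standalone sketch of the length-prefixed format that decodeSize and decodeString walk over. encodeString here is a hypothetical helper written for this example only, not part of the package:

package main

import "fmt"

// encodeString appends len(s) as a varint (7 payload bits per byte,
// high bit = continuation), then the raw bytes of s.
func encodeString(buf []byte, s string) []byte {
	v := uint64(len(s))
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80)
		v >>= 7
	}
	buf = append(buf, byte(v))
	return append(buf, s...)
}

// decodeString mirrors the package helpers above: varint size, then that many bytes.
func decodeString(data string, i int) (string, int) {
	size, shift := 0, uint(0)
	for {
		b := data[i]
		i++
		size |= int(b&0x7F) << shift
		if b < 0x80 {
			break
		}
		shift += 7
	}
	return data[i : i+size], i + size
}

func main() {
	buf := encodeString(nil, "job")
	buf = encodeString(buf, "node")
	name, i := decodeString(string(buf), 0)
	value, _ := decodeString(string(buf), i)
	fmt.Println(name, value) // job node
}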
func (ls Labels) String() string {
	var b bytes.Buffer

	b.WriteByte('{')
	for i := 0; i < len(ls.data); {
		if i > 0 {
			b.WriteByte(',')
			b.WriteByte(' ')
		}
		var name, value string
		name, i = decodeString(ls.data, i)
		value, i = decodeString(ls.data, i)
		b.WriteString(name)
		b.WriteByte('=')
		b.WriteString(strconv.Quote(value))
	}
	b.WriteByte('}')
	return b.String()
}

// Bytes returns ls as a byte slice.
// It uses non-printing characters and so should not be used for printing.
func (ls Labels) Bytes(buf []byte) []byte {
	if cap(buf) < len(ls.data) {
		buf = make([]byte, len(ls.data))
	} else {
		buf = buf[:len(ls.data)]
	}
	copy(buf, ls.data)
	return buf
}

// MarshalJSON implements json.Marshaler.
func (ls Labels) MarshalJSON() ([]byte, error) {
	return json.Marshal(ls.Map())
}

// UnmarshalJSON implements json.Unmarshaler.
func (ls *Labels) UnmarshalJSON(b []byte) error {
	var m map[string]string

	if err := json.Unmarshal(b, &m); err != nil {
		return err
	}

	*ls = FromMap(m)
	return nil
}

// MarshalYAML implements yaml.Marshaler.
func (ls Labels) MarshalYAML() (interface{}, error) {
	return ls.Map(), nil
}

// IsZero implements yaml.IsZeroer - if we don't have this then 'omitempty' fields are always omitted.
func (ls Labels) IsZero() bool {
	return len(ls.data) == 0
}

// UnmarshalYAML implements yaml.Unmarshaler.
func (ls *Labels) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var m map[string]string

	if err := unmarshal(&m); err != nil {
		return err
	}

	*ls = FromMap(m)
	return nil
}

// MatchLabels returns a subset of Labels that matches/does not match with the provided label names based on the 'on' boolean.
// If on is set to true, it returns the subset of labels that match with the provided label names and its inverse when 'on' is set to false.
// TODO: This is only used in printing an error message
func (ls Labels) MatchLabels(on bool, names ...string) Labels {
	b := NewBuilder(ls)
	if on {
		b.Keep(names...)
	} else {
		b.Del(MetricName)
		b.Del(names...)
	}
	return b.Labels(EmptyLabels())
}

// Hash returns a hash value for the label set.
// Note: the result is not guaranteed to be consistent across different runs of Prometheus.
func (ls Labels) Hash() uint64 {
	return xxhash.Sum64(yoloBytes(ls.data))
}

// HashForLabels returns a hash value for the labels matching the provided names.
// 'names' have to be sorted in ascending order.
func (ls Labels) HashForLabels(b []byte, names ...string) (uint64, []byte) {
	b = b[:0]
	j := 0
	for i := 0; i < len(ls.data); {
		var name, value string
		name, i = decodeString(ls.data, i)
		value, i = decodeString(ls.data, i)
		for j < len(names) && names[j] < name {
			j++
		}
		if j == len(names) {
			break
		}
		if name == names[j] {
			b = append(b, name...)
			b = append(b, seps[0])
			b = append(b, value...)
			b = append(b, seps[0])
		}
	}

	return xxhash.Sum64(b), b
}

// HashWithoutLabels returns a hash value for all labels except those matching
// the provided names.
// 'names' have to be sorted in ascending order.
func (ls Labels) HashWithoutLabels(b []byte, names ...string) (uint64, []byte) {
	b = b[:0]
	j := 0
	for i := 0; i < len(ls.data); {
		var name, value string
		name, i = decodeString(ls.data, i)
		value, i = decodeString(ls.data, i)
		for j < len(names) && names[j] < name {
			j++
		}
		if name == MetricName || (j < len(names) && name == names[j]) {
			continue
		}
		b = append(b, name...)
		b = append(b, seps[0])
		b = append(b, value...)
		b = append(b, seps[0])
	}
	return xxhash.Sum64(b), b
}

// BytesWithLabels is just as Bytes(), but only for labels matching names.
// 'names' have to be sorted in ascending order.
func (ls Labels) BytesWithLabels(buf []byte, names ...string) []byte {
	b := buf[:0]
	j := 0
	for pos := 0; pos < len(ls.data); {
		lName, newPos := decodeString(ls.data, pos)
		_, newPos = decodeString(ls.data, newPos)
		for j < len(names) && names[j] < lName {
			j++
		}
		if j == len(names) {
			break
		}
		if lName == names[j] {
			b = append(b, ls.data[pos:newPos]...)
		}
		pos = newPos
	}
	return b
}

// BytesWithoutLabels is just as Bytes(), but only for labels not matching names.
// 'names' have to be sorted in ascending order.
func (ls Labels) BytesWithoutLabels(buf []byte, names ...string) []byte {
	b := buf[:0]
	j := 0
	for pos := 0; pos < len(ls.data); {
		lName, newPos := decodeString(ls.data, pos)
		_, newPos = decodeString(ls.data, newPos)
		for j < len(names) && names[j] < lName {
			j++
		}
		if j == len(names) || lName != names[j] {
			b = append(b, ls.data[pos:newPos]...)
		}
		pos = newPos
	}
	return b
}

// Copy returns a copy of the labels.
func (ls Labels) Copy() Labels {
	buf := append([]byte{}, ls.data...)
	return Labels{data: yoloString(buf)}
}

// Get returns the value for the label with the given name.
// Returns an empty string if the label doesn't exist.
func (ls Labels) Get(name string) string {
	for i := 0; i < len(ls.data); {
		var lName, lValue string
		lName, i = decodeString(ls.data, i)
		lValue, i = decodeString(ls.data, i)
		if lName == name {
			return lValue
		}
	}
	return ""
}

// Has returns true if the label with the given name is present.
func (ls Labels) Has(name string) bool {
	for i := 0; i < len(ls.data); {
		var lName string
		lName, i = decodeString(ls.data, i)
		_, i = decodeString(ls.data, i)
		if lName == name {
			return true
		}
	}
	return false
}

// HasDuplicateLabelNames returns whether ls has duplicate label names.
// It assumes that the labelset is sorted.
func (ls Labels) HasDuplicateLabelNames() (string, bool) {
	var lName, prevName string
	for i := 0; i < len(ls.data); {
		lName, i = decodeString(ls.data, i)
		_, i = decodeString(ls.data, i)
		if lName == prevName {
			return lName, true
		}
		prevName = lName
	}
	return "", false
}

// WithoutEmpty returns the labelset without empty labels.
// May return the same labelset.
func (ls Labels) WithoutEmpty() Labels {
	for pos := 0; pos < len(ls.data); {
		_, newPos := decodeString(ls.data, pos)
		lValue, newPos := decodeString(ls.data, newPos)
		if lValue != "" {
			pos = newPos
			continue
		}
		// Do not copy the slice until it's necessary.
		// TODO: could optimise the case where all blanks are at the end.
		// Note: we size the new buffer on the assumption there is exactly one blank value.
		buf := make([]byte, pos, pos+(len(ls.data)-newPos))
		copy(buf, ls.data[:pos]) // copy the initial non-blank labels
		pos = newPos             // move past the first blank value
		for pos < len(ls.data) {
			var newPos int
			_, newPos = decodeString(ls.data, pos)
			lValue, newPos = decodeString(ls.data, newPos)
			if lValue != "" {
				buf = append(buf, ls.data[pos:newPos]...)
			}
			pos = newPos
		}
		return Labels{data: yoloString(buf)}
	}
	return ls
}

// IsValid checks if the metric name or label names are valid.
func (ls Labels) IsValid() bool {
	err := ls.Validate(func(l Label) error {
		if l.Name == model.MetricNameLabel && !model.IsValidMetricName(model.LabelValue(l.Value)) {
			return strconv.ErrSyntax
		}
		if !model.LabelName(l.Name).IsValid() || !model.LabelValue(l.Value).IsValid() {
			return strconv.ErrSyntax
		}
		return nil
	})
	return err == nil
}

// Equal returns whether the two label sets are equal.
func Equal(ls, o Labels) bool {
	return ls.data == o.data
}

// Map returns a string map of the labels.
func (ls Labels) Map() map[string]string {
	m := make(map[string]string, len(ls.data)/10)
	for i := 0; i < len(ls.data); {
		var lName, lValue string
		lName, i = decodeString(ls.data, i)
		lValue, i = decodeString(ls.data, i)
		m[lName] = lValue
	}
	return m
}

// EmptyLabels returns an empty Labels value, for convenience.
func EmptyLabels() Labels {
	return Labels{}
}

func yoloString(b []byte) string {
	return *((*string)(unsafe.Pointer(&b)))
}

func yoloBytes(s string) (b []byte) {
	*(*string)(unsafe.Pointer(&b)) = s
	(*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s)
	return
}
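yoloString and yoloBytes convert between string and []byte without copying, by reinterpreting the underlying headers. That is only safe because the package never mutates the bytes after the conversion. A hedged illustration of the aliasing contract (a sketch, not package code):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	b := []byte("abc")
	// Same trick as yoloString above: reinterpret the slice header as a string header.
	s := *(*string)(unsafe.Pointer(&b))
	fmt.Println(s) // abc
	// The aliasing means any later write to b would silently change s,
	// so callers must treat b as frozen from this point on.
}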
// New returns a sorted Labels from the given labels.
// The caller has to guarantee that all label names are unique.
func New(ls ...Label) Labels {
	slices.SortFunc(ls, func(a, b Label) bool { return a.Name < b.Name })
	size := labelsSize(ls)
	buf := make([]byte, size)
	marshalLabelsToSizedBuffer(ls, buf)
	return Labels{data: yoloString(buf)}
}

// FromMap returns new sorted Labels from the given map.
func FromMap(m map[string]string) Labels {
	l := make([]Label, 0, len(m))
	for k, v := range m {
		l = append(l, Label{Name: k, Value: v})
	}
	return New(l...)
}

// FromStrings creates new labels from pairs of strings.
func FromStrings(ss ...string) Labels {
	if len(ss)%2 != 0 {
		panic("invalid number of strings")
	}
	ls := make([]Label, 0, len(ss)/2)
	for i := 0; i < len(ss); i += 2 {
		ls = append(ls, Label{Name: ss[i], Value: ss[i+1]})
	}

	slices.SortFunc(ls, func(a, b Label) bool { return a.Name < b.Name })
	return New(ls...)
}

// Compare compares the two label sets.
// The result will be 0 if a==b, <0 if a < b, and >0 if a > b.
// TODO: replace with Less function - Compare is never needed.
// TODO: just compare the underlying strings when we don't need alphanumeric sorting.
func Compare(a, b Labels) int {
	l := len(a.data)
	if len(b.data) < l {
		l = len(b.data)
	}

	ia, ib := 0, 0
	for ia < l {
		var aName, bName string
		aName, ia = decodeString(a.data, ia)
		bName, ib = decodeString(b.data, ib)
		if aName != bName {
			if aName < bName {
				return -1
			}
			return 1
		}
		var aValue, bValue string
		aValue, ia = decodeString(a.data, ia)
		bValue, ib = decodeString(b.data, ib)
		if aValue != bValue {
			if aValue < bValue {
				return -1
			}
			return 1
		}
	}
	// If all labels so far were in common, the set with fewer labels comes first.
	return len(a.data) - len(b.data)
}

// Copy labels from b on top of whatever was in ls previously, reusing memory or expanding if needed.
func (ls *Labels) CopyFrom(b Labels) {
	ls.data = b.data // strings are immutable
}

// IsEmpty returns true if ls represents an empty set of labels.
func (ls Labels) IsEmpty() bool {
	return len(ls.data) == 0
}

// Len returns the number of labels; it is relatively slow.
func (ls Labels) Len() int {
	count := 0
	for i := 0; i < len(ls.data); {
		var size int
		size, i = decodeSize(ls.data, i)
		i += size
		size, i = decodeSize(ls.data, i)
		i += size
		count++
	}
	return count
}

// Range calls f on each label.
func (ls Labels) Range(f func(l Label)) {
	for i := 0; i < len(ls.data); {
		var lName, lValue string
		lName, i = decodeString(ls.data, i)
		lValue, i = decodeString(ls.data, i)
		f(Label{Name: lName, Value: lValue})
	}
}

// Validate calls f on each label. If f returns a non-nil error, then it returns that error cancelling the iteration.
func (ls Labels) Validate(f func(l Label) error) error {
	for i := 0; i < len(ls.data); {
		var lName, lValue string
		lName, i = decodeString(ls.data, i)
		lValue, i = decodeString(ls.data, i)
		err := f(Label{Name: lName, Value: lValue})
		if err != nil {
			return err
		}
	}
	return nil
}

// InternStrings calls intern on every string value inside ls, replacing them with what it returns.
func (ls *Labels) InternStrings(intern func(string) string) {
	ls.data = intern(ls.data)
}

// ReleaseStrings calls release on every string value inside ls.
func (ls Labels) ReleaseStrings(release func(string)) {
	release(ls.data)
}

// Builder allows modifying Labels.
type Builder struct {
	base Labels
	del  []string
	add  []Label
}

// NewBuilder returns a new LabelsBuilder.
func NewBuilder(base Labels) *Builder {
	b := &Builder{
		del: make([]string, 0, 5),
		add: make([]Label, 0, 5),
	}
	b.Reset(base)
	return b
}

// Reset clears all current state for the builder.
func (b *Builder) Reset(base Labels) {
	b.base = base
	b.del = b.del[:0]
	b.add = b.add[:0]
	for i := 0; i < len(base.data); {
		var lName, lValue string
		lName, i = decodeString(base.data, i)
		lValue, i = decodeString(base.data, i)
		if lValue == "" {
			b.del = append(b.del, lName)
		}
	}
}

// Del deletes the label of the given name.
func (b *Builder) Del(ns ...string) *Builder {
	for _, n := range ns {
		for i, a := range b.add {
			if a.Name == n {
				b.add = append(b.add[:i], b.add[i+1:]...)
			}
		}
		b.del = append(b.del, n)
	}
	return b
}

// Keep removes all labels from the base except those with the given names.
func (b *Builder) Keep(ns ...string) *Builder {
Outer:
	for i := 0; i < len(b.base.data); {
		var lName string
		lName, i = decodeString(b.base.data, i)
		_, i = decodeString(b.base.data, i)
		for _, n := range ns {
			if lName == n {
				continue Outer
			}
		}
		b.del = append(b.del, lName)
	}
	return b
}

// Set the name/value pair as a label. A value of "" means delete that label.
func (b *Builder) Set(n, v string) *Builder {
	if v == "" {
		// Empty labels are the same as missing labels.
		return b.Del(n)
	}
	for i, a := range b.add {
		if a.Name == n {
			b.add[i].Value = v
			return b
		}
	}
	b.add = append(b.add, Label{Name: n, Value: v})

	return b
}

// Labels returns the labels from the builder, adding them to res if non-nil.
// Argument res can be the same as b.base, if caller wants to overwrite that slice.
// If no modifications were made, the original labels are returned.
func (b *Builder) Labels(res Labels) Labels {
	if len(b.del) == 0 && len(b.add) == 0 {
		return b.base
	}

	slices.SortFunc(b.add, func(a, b Label) bool { return a.Name < b.Name })
	slices.Sort(b.del)
	a, d := 0, 0

	bufSize := len(b.base.data) + labelsSize(b.add)
	buf := make([]byte, 0, bufSize) // TODO: see if we can re-use the buffer from res.
	for pos := 0; pos < len(b.base.data); {
		oldPos := pos
		var lName string
		lName, pos = decodeString(b.base.data, pos)
		_, pos = decodeString(b.base.data, pos)
		for d < len(b.del) && b.del[d] < lName {
			d++
		}
		if d < len(b.del) && b.del[d] == lName {
			continue // This label has been deleted.
		}
		for ; a < len(b.add) && b.add[a].Name < lName; a++ {
			buf = appendLabelTo(buf, &b.add[a]) // Insert label that was not in the base set.
		}
		if a < len(b.add) && b.add[a].Name == lName {
			buf = appendLabelTo(buf, &b.add[a])
			a++
			continue // This label has been replaced.
		}
		buf = append(buf, b.base.data[oldPos:pos]...)
	}
	// We have come to the end of the base set; add any remaining labels.
	for ; a < len(b.add); a++ {
		buf = appendLabelTo(buf, &b.add[a])
	}
	return Labels{data: yoloString(buf)}
}
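Builder.Labels merges the sorted base string with the sorted add list in a single pass, skipping deleted names. A small usage sketch, assuming the Labels(res Labels) signature shown in this diff (later Prometheus releases changed this API):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	base := labels.FromStrings("__name__", "up", "job", "node")
	b := labels.NewBuilder(base)
	b.Set("instance", "host:9100") // new name: spliced in at its sorted position
	b.Del("job")                   // dropped while copying the base
	fmt.Println(b.Labels(labels.EmptyLabels()))
	// {__name__="up", instance="host:9100"}
}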
func marshalLabelsToSizedBuffer(lbls []Label, data []byte) int {
	i := len(data)
	for index := len(lbls) - 1; index >= 0; index-- {
		size := marshalLabelToSizedBuffer(&lbls[index], data[:i])
		i -= size
	}
	return len(data) - i
}

func marshalLabelToSizedBuffer(m *Label, data []byte) int {
	i := len(data)
	i -= len(m.Value)
	copy(data[i:], m.Value)
	i = encodeSize(data, i, len(m.Value))
	i -= len(m.Name)
	copy(data[i:], m.Name)
	i = encodeSize(data, i, len(m.Name))
	return len(data) - i
}

func sizeVarint(x uint64) (n int) {
	// Most common case first
	if x < 1<<7 {
		return 1
	}
	if x >= 1<<56 {
		return 9
	}
	if x >= 1<<28 {
		x >>= 28
		n = 4
	}
	if x >= 1<<14 {
		x >>= 14
		n += 2
	}
	if x >= 1<<7 {
		n++
	}
	return n + 1
}

func encodeVarint(data []byte, offset int, v uint64) int {
	offset -= sizeVarint(v)
	base := offset
	for v >= 1<<7 {
		data[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	data[offset] = uint8(v)
	return base
}

// Special code for the common case that a size is less than 128
func encodeSize(data []byte, offset, v int) int {
	if v < 1<<7 {
		offset--
		data[offset] = uint8(v)
		return offset
	}
	return encodeVarint(data, offset, uint64(v))
}
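sizeVarint, encodeVarint, and encodeSize implement standard protobuf-style varints: seven payload bits per byte, with the high bit as a continuation flag. A worked example encoding the length 300, as a self-contained sketch:

package main

import "fmt"

func main() {
	// 300 = 0b1_0010_1100: low 7 bits 0101100 (0x2C) get the continuation
	// bit (-> 0xAC), then the remaining bits 0b10 (0x02) follow.
	v := uint64(300)
	var out []byte
	for v >= 1<<7 {
		out = append(out, byte(v&0x7f|0x80))
		v >>= 7
	}
	out = append(out, byte(v))
	fmt.Printf("% x\n", out) // ac 02
}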
func labelsSize(lbls []Label) (n int) {
	// we just encode name/value/name/value, without any extra tags or length bytes
	for _, e := range lbls {
		n += labelSize(&e)
	}
	return n
}

func labelSize(m *Label) (n int) {
	// strings are encoded as length followed by contents.
	l := len(m.Name)
	n += l + sizeVarint(uint64(l))
	l = len(m.Value)
	n += l + sizeVarint(uint64(l))
	return n
}

func appendLabelTo(buf []byte, m *Label) []byte {
	size := labelSize(m)
	sizeRequired := len(buf) + size
	if cap(buf) >= sizeRequired {
		buf = buf[:sizeRequired]
	} else {
		bufSize := cap(buf)
		// Double size of buffer each time it needs to grow, to amortise copying cost.
		for bufSize < sizeRequired {
			bufSize = bufSize*2 + 1
		}
		newBuf := make([]byte, sizeRequired, bufSize)
		copy(newBuf, buf)
		buf = newBuf
	}
	marshalLabelToSizedBuffer(m, buf)
	return buf
}

// ScratchBuilder allows efficient construction of a Labels from scratch.
type ScratchBuilder struct {
	add             []Label
	output          Labels
	overwriteBuffer []byte
}

// NewScratchBuilder creates a ScratchBuilder initialized for Labels with n entries.
func NewScratchBuilder(n int) ScratchBuilder {
	return ScratchBuilder{add: make([]Label, 0, n)}
}

func (b *ScratchBuilder) Reset() {
	b.add = b.add[:0]
	b.output = EmptyLabels()
}

// Add a name/value pair.
// Note if you Add the same name twice you will get a duplicate label, which is invalid.
func (b *ScratchBuilder) Add(name, value string) {
	b.add = append(b.add, Label{Name: name, Value: value})
}

// Sort the labels added so far by name.
func (b *ScratchBuilder) Sort() {
	slices.SortFunc(b.add, func(a, b Label) bool { return a.Name < b.Name })
}

// Assign is for when you already have a Labels which you want this ScratchBuilder to return.
func (b *ScratchBuilder) Assign(l Labels) {
	b.output = l
}

// Labels returns the name/value pairs added as a Labels object. Calling Add() after Labels() has no effect.
// Note: if you want them sorted, call Sort() first.
func (b *ScratchBuilder) Labels() Labels {
	if b.output.IsEmpty() {
		size := labelsSize(b.add)
		buf := make([]byte, size)
		marshalLabelsToSizedBuffer(b.add, buf)
		b.output = Labels{data: yoloString(buf)}
	}
	return b.output
}

// Write the newly-built Labels out to ls, reusing an internal buffer.
// Callers must ensure that there are no other references to ls.
func (b *ScratchBuilder) Overwrite(ls *Labels) {
	size := labelsSize(b.add)
	if size <= cap(b.overwriteBuffer) {
		b.overwriteBuffer = b.overwriteBuffer[:size]
	} else {
		b.overwriteBuffer = make([]byte, size)
	}
	marshalLabelsToSizedBuffer(b.add, b.overwriteBuffer)
	ls.data = yoloString(b.overwriteBuffer)
}
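A usage sketch of the ScratchBuilder API added above, assuming the import path and the zero-argument Labels() signature shown in this diff (other releases may differ):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	b := labels.NewScratchBuilder(4)
	b.Add("job", "node")
	b.Add("instance", "host:9100")
	b.Sort()           // names must be sorted before marshalling
	lset := b.Labels() // builds the flat string form exactly once
	fmt.Println(lset)

	b.Reset() // reuse the builder for the next series
}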
@ -696,6 +696,50 @@ func BenchmarkLabels_Hash(b *testing.B) {
	}
}

func BenchmarkBuilder(b *testing.B) {
	m := []Label{
		{"job", "node"},
		{"instance", "123.123.1.211:9090"},
		{"path", "/api/v1/namespaces/<namespace>/deployments/<name>"},
		{"method", "GET"},
		{"namespace", "system"},
		{"status", "500"},
		{"prometheus", "prometheus-core-1"},
		{"datacenter", "eu-west-1"},
		{"pod_name", "abcdef-99999-defee"},
	}

	var l Labels
	builder := NewBuilder(EmptyLabels())
	for i := 0; i < b.N; i++ {
		builder.Reset(EmptyLabels())
		for _, l := range m {
			builder.Set(l.Name, l.Value)
		}
		l = builder.Labels(EmptyLabels())
	}
	require.Equal(b, 9, l.Len())
}

func BenchmarkLabels_Copy(b *testing.B) {
	m := map[string]string{
		"job":        "node",
		"instance":   "123.123.1.211:9090",
		"path":       "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":     "GET",
		"namespace":  "system",
		"status":     "500",
		"prometheus": "prometheus-core-1",
		"datacenter": "eu-west-1",
		"pod_name":   "abcdef-99999-defee",
	}
	l := FromMap(m)

	for i := 0; i < b.N; i++ {
		l = l.Copy()
	}
}

func TestMarshaling(t *testing.T) {
	lbls := FromStrings("aaa", "111", "bbb", "2222", "ccc", "33333")
	expectedJSON := "{\"aaa\":\"111\",\"bbb\":\"2222\",\"ccc\":\"33333\"}"
@ -15,6 +15,7 @@ package relabel

import (
	"crypto/md5"
	"encoding/binary"
	"fmt"
	"strings"

@ -268,7 +269,9 @@ func relabel(lset labels.Labels, cfg *Config, lb *labels.Builder) (ret labels.La
	case Uppercase:
		lb.Set(cfg.TargetLabel, strings.ToUpper(val))
	case HashMod:
		mod := sum64(md5.Sum([]byte(val))) % cfg.Modulus
		hash := md5.Sum([]byte(val))
		// Use only the last 8 bytes of the hash to give the same result as earlier versions of this code.
		mod := binary.BigEndian.Uint64(hash[8:]) % cfg.Modulus
		lb.Set(cfg.TargetLabel, fmt.Sprintf("%d", mod))
	case LabelMap:
		lset.Range(func(l labels.Label) {

@ -295,15 +298,3 @@ func relabel(lset labels.Labels, cfg *Config, lb *labels.Builder) (ret labels.La

	return lb.Labels(lset), true
}

// sum64 sums the md5 hash to an uint64.
func sum64(hash [md5.Size]byte) uint64 {
	var s uint64

	for i, b := range hash {
		shift := uint64((md5.Size - i - 1) * 8)

		s |= uint64(b) << shift
	}
	return s
}
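The replacement is behavior-preserving: in the removed sum64, the shifts computed for the first eight digest bytes are 64 or more, which Go defines to yield zero, so only the last eight bytes of the MD5 digest ever contributed; that is exactly binary.BigEndian.Uint64(hash[8:]). A small self-contained check of the equivalence:

package main

import (
	"crypto/md5"
	"encoding/binary"
	"fmt"
)

// oldSum64 is the removed helper: shifts of 64+ on a uint64 yield zero in Go,
// so only the last 8 bytes of the digest contribute to the result.
func oldSum64(hash [md5.Size]byte) uint64 {
	var s uint64
	for i, b := range hash {
		shift := uint64((md5.Size - i - 1) * 8)
		s |= uint64(b) << shift
	}
	return s
}

func main() {
	h := md5.Sum([]byte("some-target-value"))
	fmt.Println(oldSum64(h) == binary.BigEndian.Uint64(h[8:])) // true
}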
@ -17,7 +17,6 @@
package textparse

import (
	"bytes"
	"errors"
	"fmt"
	"io"

@ -31,8 +30,6 @@ import (
	"github.com/prometheus/prometheus/model/value"
)

var allowedSuffixes = [][]byte{[]byte("_total"), []byte("_bucket")}

type openMetricsLexer struct {
	b []byte
	i int

@ -46,13 +43,6 @@ func (l *openMetricsLexer) buf() []byte {
	return l.b[l.start:l.i]
}

func (l *openMetricsLexer) cur() byte {
	if l.i < len(l.b) {
		return l.b[l.i]
	}
	return byte(' ')
}

// next advances the openMetricsLexer to the next character.
func (l *openMetricsLexer) next() byte {
	l.i++

@ -223,6 +213,14 @@ func (p *OpenMetricsParser) nextToken() token {
	return tok
}

func (p *OpenMetricsParser) parseError(exp string, got token) error {
	e := p.l.i + 1
	if len(p.l.b) < e {
		e = len(p.l.b)
	}
	return fmt.Errorf("%s, got %q (%q) while parsing: %q", exp, p.l.b[p.l.start:e], got, p.l.b[p.start:e])
}

// Next advances the parser to the next sample. It returns false if no
// more samples were read or an error occurred.
func (p *OpenMetricsParser) Next() (Entry, error) {

@ -248,7 +246,7 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
	case tMName:
		p.offsets = append(p.offsets, p.l.start, p.l.i)
	default:
		return EntryInvalid, parseError("expected metric name after "+t.String(), t2)
		return EntryInvalid, p.parseError("expected metric name after "+t.String(), t2)
	}
	switch t2 := p.nextToken(); t2 {
	case tText:

@ -284,7 +282,7 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
		}
	case tHelp:
		if !utf8.Valid(p.text) {
			return EntryInvalid, errors.New("help text is not a valid utf8 string")
			return EntryInvalid, fmt.Errorf("help text %q is not a valid utf8 string", p.text)
		}
	}
	switch t {

@ -297,7 +295,7 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
	u := yoloString(p.text)
	if len(u) > 0 {
		if !strings.HasSuffix(m, u) || len(m) < len(u)+1 || p.l.b[p.offsets[1]-len(u)-1] != '_' {
			return EntryInvalid, fmt.Errorf("unit not a suffix of metric %q", m)
			return EntryInvalid, fmt.Errorf("unit %q not a suffix of metric %q", u, m)
		}
	}
	return EntryUnit, nil

@ -336,10 +334,10 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
	var ts float64
	// A float is enough to hold what we need for millisecond resolution.
	if ts, err = parseFloat(yoloString(p.l.buf()[1:])); err != nil {
		return EntryInvalid, err
		return EntryInvalid, fmt.Errorf("%v while parsing: %q", err, p.l.b[p.start:p.l.i])
	}
	if math.IsNaN(ts) || math.IsInf(ts, 0) {
		return EntryInvalid, errors.New("invalid timestamp")
		return EntryInvalid, fmt.Errorf("invalid timestamp %f", ts)
	}
	p.ts = int64(ts * 1000)
	switch t3 := p.nextToken(); t3 {

@ -349,26 +347,20 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
			return EntryInvalid, err
		}
	default:
		return EntryInvalid, parseError("expected next entry after timestamp", t3)
		return EntryInvalid, p.parseError("expected next entry after timestamp", t3)
	}
	default:
		return EntryInvalid, parseError("expected timestamp or # symbol", t2)
		return EntryInvalid, p.parseError("expected timestamp or # symbol", t2)
	}
	return EntrySeries, nil

	default:
		err = fmt.Errorf("%q %q is not a valid start token", t, string(p.l.cur()))
		err = p.parseError("expected a valid start token", t)
	}
	return EntryInvalid, err
}

func (p *OpenMetricsParser) parseComment() error {
	// Validate the name of the metric. It must have _total or _bucket as
	// suffix for exemplars to be supported.
	if err := p.validateNameForExemplar(p.series[:p.offsets[0]-p.start]); err != nil {
		return err
	}

	var err error
	// Parse the labels.
	p.eOffsets, err = p.parseLVals(p.eOffsets)

@ -395,19 +387,19 @@ func (p *OpenMetricsParser) parseComment() error {
	var ts float64
	// A float is enough to hold what we need for millisecond resolution.
	if ts, err = parseFloat(yoloString(p.l.buf()[1:])); err != nil {
		return err
		return fmt.Errorf("%v while parsing: %q", err, p.l.b[p.start:p.l.i])
	}
	if math.IsNaN(ts) || math.IsInf(ts, 0) {
		return errors.New("invalid exemplar timestamp")
		return fmt.Errorf("invalid exemplar timestamp %f", ts)
	}
	p.exemplarTs = int64(ts * 1000)
	switch t3 := p.nextToken(); t3 {
	case tLinebreak:
	default:
		return parseError("expected next entry after exemplar timestamp", t3)
		return p.parseError("expected next entry after exemplar timestamp", t3)
	}
	default:
		return parseError("expected timestamp or comment", t2)
		return p.parseError("expected timestamp or comment", t2)
	}
	return nil
}

@ -421,21 +413,21 @@ func (p *OpenMetricsParser) parseLVals(offsets []int) ([]int, error) {
		return offsets, nil
	case tComma:
		if first {
			return nil, parseError("expected label name or left brace", t)
			return nil, p.parseError("expected label name or left brace", t)
		}
		t = p.nextToken()
		if t != tLName {
			return nil, parseError("expected label name", t)
			return nil, p.parseError("expected label name", t)
		}
	case tLName:
		if !first {
			return nil, parseError("expected comma", t)
			return nil, p.parseError("expected comma", t)
		}
	default:
		if first {
			return nil, parseError("expected label name or left brace", t)
			return nil, p.parseError("expected label name or left brace", t)
		}
		return nil, parseError("expected comma or left brace", t)
		return nil, p.parseError("expected comma or left brace", t)

	}
	first = false

@ -444,13 +436,13 @@ func (p *OpenMetricsParser) parseLVals(offsets []int) ([]int, error) {
	offsets = append(offsets, p.l.start, p.l.i)

	if t := p.nextToken(); t != tEqual {
		return nil, parseError("expected equal", t)
		return nil, p.parseError("expected equal", t)
	}
	if t := p.nextToken(); t != tLValue {
		return nil, parseError("expected label value", t)
		return nil, p.parseError("expected label value", t)
	}
	if !utf8.Valid(p.l.buf()) {
		return nil, errors.New("invalid UTF-8 label value")
		return nil, fmt.Errorf("invalid UTF-8 label value: %q", p.l.buf())
	}

	// The openMetricsLexer ensures the value string is quoted. Strip first

@ -461,11 +453,11 @@ func (p *OpenMetricsParser) parseLVals(offsets []int) ([]int, error) {

func (p *OpenMetricsParser) getFloatValue(t token, after string) (float64, error) {
	if t != tValue {
		return 0, parseError(fmt.Sprintf("expected value after %v", after), t)
		return 0, p.parseError(fmt.Sprintf("expected value after %v", after), t)
	}
	val, err := parseFloat(yoloString(p.l.buf()[1:]))
	if err != nil {
		return 0, err
		return 0, fmt.Errorf("%v while parsing: %q", err, p.l.b[p.start:p.l.i])
	}
	// Ensure canonical NaN value.
	if math.IsNaN(p.exemplarVal) {

@ -473,12 +465,3 @@ func (p *OpenMetricsParser) getFloatValue(t token, after string) (float64, error
	}
	return val, nil
}

func (p *OpenMetricsParser) validateNameForExemplar(name []byte) error {
	for _, suffix := range allowedSuffixes {
		if bytes.HasSuffix(name, suffix) {
			return nil
		}
	}
	return fmt.Errorf("metric name %v does not support exemplars", string(name))
}
@ -45,9 +45,14 @@ hh_bucket{le="+Inf"} 1
# TYPE gh gaugehistogram
gh_bucket{le="+Inf"} 1
# TYPE hhh histogram
hhh_bucket{le="+Inf"} 1 # {aa="bb"} 4
hhh_bucket{le="+Inf"} 1 # {id="histogram-bucket-test"} 4
hhh_count 1 # {id="histogram-count-test"} 4
# TYPE ggh gaugehistogram
ggh_bucket{le="+Inf"} 1 # {cc="dd",xx="yy"} 4 123.123
ggh_bucket{le="+Inf"} 1 # {id="gaugehistogram-bucket-test",xx="yy"} 4 123.123
ggh_count 1 # {id="gaugehistogram-count-test",xx="yy"} 4 123.123
# TYPE smr_seconds summary
smr_seconds_count 2.0 # {id="summary-count-test"} 1 123.321
smr_seconds_sum 42.0 # {id="summary-sum-test"} 1 123.321
# TYPE ii info
ii{foo="bar"} 1
# TYPE ss stateset

@ -59,7 +64,7 @@ _metric_starting_with_underscore 1
testmetric{_label_starting_with_underscore="foo"} 1
testmetric{label="\"bar\""} 1
# TYPE foo counter
foo_total 17.0 1520879607.789 # {xx="yy"} 5`
foo_total 17.0 1520879607.789 # {id="counter-test"} 5`

	input += "\n# HELP metric foo\x00bar"
	input += "\nnull_byte_metric{a=\"abc\x00\"} 1"

@ -152,7 +157,12 @@ foo_total 17.0 1520879607.789 # {xx="yy"} 5`
	m: `hhh_bucket{le="+Inf"}`,
	v: 1,
	lset: labels.FromStrings("__name__", "hhh_bucket", "le", "+Inf"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("aa", "bb"), Value: 4},
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "histogram-bucket-test"), Value: 4},
}, {
	m: `hhh_count`,
	v: 1,
	lset: labels.FromStrings("__name__", "hhh_count"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "histogram-count-test"), Value: 4},
}, {
	m: "ggh",
	typ: MetricTypeGaugeHistogram,

@ -160,7 +170,25 @@ foo_total 17.0 1520879607.789 # {xx="yy"} 5`
	m: `ggh_bucket{le="+Inf"}`,
	v: 1,
	lset: labels.FromStrings("__name__", "ggh_bucket", "le", "+Inf"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("cc", "dd", "xx", "yy"), Value: 4, HasTs: true, Ts: 123123},
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "gaugehistogram-bucket-test", "xx", "yy"), Value: 4, HasTs: true, Ts: 123123},
}, {
	m: `ggh_count`,
	v: 1,
	lset: labels.FromStrings("__name__", "ggh_count"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "gaugehistogram-count-test", "xx", "yy"), Value: 4, HasTs: true, Ts: 123123},
}, {
	m: "smr_seconds",
	typ: MetricTypeSummary,
}, {
	m: `smr_seconds_count`,
	v: 2,
	lset: labels.FromStrings("__name__", "smr_seconds_count"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "summary-count-test"), Value: 1, HasTs: true, Ts: 123321},
}, {
	m: `smr_seconds_sum`,
	v: 42,
	lset: labels.FromStrings("__name__", "smr_seconds_sum"),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "summary-sum-test"), Value: 1, HasTs: true, Ts: 123321},
}, {
	m: "ii",
	typ: MetricTypeInfo,

@ -206,7 +234,7 @@ foo_total 17.0 1520879607.789 # {xx="yy"} 5`
	v: 17,
	lset: labels.FromStrings("__name__", "foo_total"),
	t: int64p(1520879607789),
	e: &exemplar.Exemplar{Labels: labels.FromStrings("xx", "yy"), Value: 5},
	e: &exemplar.Exemplar{Labels: labels.FromStrings("id", "counter-test"), Value: 5},
}, {
	m: "metric",
	help: "foo\x00bar",

@ -293,11 +321,11 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "\n",
		err: "\"INVALID\" \"\\n\" is not a valid start token",
		err: "expected a valid start token, got \"\\n\" (\"INVALID\") while parsing: \"\\n\"",
	},
	{
		input: "metric",
		err: "expected value after metric, got \"EOF\"",
		err: "expected value after metric, got \"metric\" (\"EOF\") while parsing: \"metric\"",
	},
	{
		input: "metric 1",

@ -313,19 +341,19 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "a\n#EOF\n",
		err: "expected value after metric, got \"INVALID\"",
		err: "expected value after metric, got \"\\n\" (\"INVALID\") while parsing: \"a\\n\"",
	},
	{
		input: "\n\n#EOF\n",
		err: "\"INVALID\" \"\\n\" is not a valid start token",
		err: "expected a valid start token, got \"\\n\" (\"INVALID\") while parsing: \"\\n\"",
	},
	{
		input: " a 1\n#EOF\n",
		err: "\"INVALID\" \" \" is not a valid start token",
		err: "expected a valid start token, got \" \" (\"INVALID\") while parsing: \" \"",
	},
	{
		input: "9\n#EOF\n",
		err: "\"INVALID\" \"9\" is not a valid start token",
		err: "expected a valid start token, got \"9\" (\"INVALID\") while parsing: \"9\"",
	},
	{
		input: "# TYPE u untyped\n#EOF\n",

@ -337,11 +365,11 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "# TYPE c counter\n#EOF\n",
		err: "\"INVALID\" \" \" is not a valid start token",
		err: "expected a valid start token, got \"# \" (\"INVALID\") while parsing: \"# \"",
	},
	{
		input: "# TYPE \n#EOF\n",
		err: "expected metric name after TYPE, got \"INVALID\"",
		err: "expected metric name after TYPE, got \"\\n\" (\"INVALID\") while parsing: \"# TYPE \\n\"",
	},
	{
		input: "# TYPE m\n#EOF\n",

@ -349,19 +377,19 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "# UNIT metric suffix\n#EOF\n",
		err: "unit not a suffix of metric \"metric\"",
		err: "unit \"suffix\" not a suffix of metric \"metric\"",
	},
	{
		input: "# UNIT metricsuffix suffix\n#EOF\n",
		err: "unit not a suffix of metric \"metricsuffix\"",
		err: "unit \"suffix\" not a suffix of metric \"metricsuffix\"",
	},
	{
		input: "# UNIT m suffix\n#EOF\n",
		err: "unit not a suffix of metric \"m\"",
		err: "unit \"suffix\" not a suffix of metric \"m\"",
	},
	{
		input: "# UNIT \n#EOF\n",
		err: "expected metric name after UNIT, got \"INVALID\"",
		err: "expected metric name after UNIT, got \"\\n\" (\"INVALID\") while parsing: \"# UNIT \\n\"",
	},
	{
		input: "# UNIT m\n#EOF\n",

@ -369,7 +397,7 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "# HELP \n#EOF\n",
		err: "expected metric name after HELP, got \"INVALID\"",
		err: "expected metric name after HELP, got \"\\n\" (\"INVALID\") while parsing: \"# HELP \\n\"",
	},
	{
		input: "# HELP m\n#EOF\n",

@ -377,27 +405,27 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "a\t1\n#EOF\n",
		err: "expected value after metric, got \"INVALID\"",
		err: "expected value after metric, got \"\\t\" (\"INVALID\") while parsing: \"a\\t\"",
	},
	{
		input: "a 1\t2\n#EOF\n",
		err: "strconv.ParseFloat: parsing \"1\\t2\": invalid syntax",
		err: "strconv.ParseFloat: parsing \"1\\t2\": invalid syntax while parsing: \"a 1\\t2\"",
	},
	{
		input: "a 1 2 \n#EOF\n",
		err: "expected next entry after timestamp, got \"INVALID\"",
		err: "expected next entry after timestamp, got \" \\n\" (\"INVALID\") while parsing: \"a 1 2 \\n\"",
	},
	{
		input: "a 1 2 #\n#EOF\n",
		err: "expected next entry after timestamp, got \"TIMESTAMP\"",
		err: "expected next entry after timestamp, got \" #\\n\" (\"TIMESTAMP\") while parsing: \"a 1 2 #\\n\"",
	},
	{
		input: "a 1 1z\n#EOF\n",
		err: "strconv.ParseFloat: parsing \"1z\": invalid syntax",
		err: "strconv.ParseFloat: parsing \"1z\": invalid syntax while parsing: \"a 1 1z\"",
	},
	{
		input: " # EOF\n",
		err: "\"INVALID\" \" \" is not a valid start token",
		err: "expected a valid start token, got \" \" (\"INVALID\") while parsing: \" \"",
	},
	{
		input: "# EOF\na 1",

@ -413,7 +441,7 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "#\tTYPE c counter\n",
		err: "\"INVALID\" \"\\t\" is not a valid start token",
		err: "expected a valid start token, got \"#\\t\" (\"INVALID\") while parsing: \"#\\t\"",
	},
	{
		input: "# TYPE c counter\n",

@ -421,135 +449,131 @@ func TestOpenMetricsParseErrors(t *testing.T) {
	},
	{
		input: "a 1 1 1\n# EOF\n",
		err: "expected next entry after timestamp, got \"TIMESTAMP\"",
		err: "expected next entry after timestamp, got \" 1\\n\" (\"TIMESTAMP\") while parsing: \"a 1 1 1\\n\"",
	},
	{
		input: "a{b='c'} 1\n# EOF\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"'\" (\"INVALID\") while parsing: \"a{b='\"",
	},
	{
		input: "a{b=\"c\",} 1\n# EOF\n",
		err: "expected label name, got \"BCLOSE\"",
		err: "expected label name, got \"} \" (\"BCLOSE\") while parsing: \"a{b=\\\"c\\\",} \"",
	},
	{
		input: "a{,b=\"c\"} 1\n# EOF\n",
		err: "expected label name or left brace, got \"COMMA\"",
		err: "expected label name or left brace, got \",b\" (\"COMMA\") while parsing: \"a{,b\"",
	},
	{
		input: "a{b=\"c\"d=\"e\"} 1\n# EOF\n",
		err: "expected comma, got \"LNAME\"",
		err: "expected comma, got \"d=\" (\"LNAME\") while parsing: \"a{b=\\\"c\\\"d=\"",
	},
	{
		input: "a{b=\"c\",,d=\"e\"} 1\n# EOF\n",
		err: "expected label name, got \"COMMA\"",
		err: "expected label name, got \",d\" (\"COMMA\") while parsing: \"a{b=\\\"c\\\",,d\"",
	},
	{
		input: "a{b=\n# EOF\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\n\" (\"INVALID\") while parsing: \"a{b=\\n\"",
	},
	{
		input: "a{\xff=\"foo\"} 1\n# EOF\n",
		err: "expected label name or left brace, got \"INVALID\"",
		err: "expected label name or left brace, got \"\\xff\" (\"INVALID\") while parsing: \"a{\\xff\"",
	},
	{
		input: "a{b=\"\xff\"} 1\n# EOF\n",
		err: "invalid UTF-8 label value",
		err: "invalid UTF-8 label value: \"\\\"\\xff\\\"\"",
	},
	{
		input: "a true\n",
		err: "strconv.ParseFloat: parsing \"true\": invalid syntax",
		err: "strconv.ParseFloat: parsing \"true\": invalid syntax while parsing: \"a true\"",
	},
	{
		input: "something_weird{problem=\"\n# EOF\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\\"\\n\" (\"INVALID\") while parsing: \"something_weird{problem=\\\"\\n\"",
	},
	{
		input: "empty_label_name{=\"\"} 0\n# EOF\n",
		err: "expected label name or left brace, got \"EQUAL\"",
		err: "expected label name or left brace, got \"=\\\"\" (\"EQUAL\") while parsing: \"empty_label_name{=\\\"\"",
	},
	{
		input: "foo 1_2\n\n# EOF\n",
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"foo 1_2\"",
	},
	{
		input: "foo 0x1p-3\n\n# EOF\n",
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"foo 0x1p-3\"",
	},
	{
		input: "foo 0x1P-3\n\n# EOF\n",
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"foo 0x1P-3\"",
	},
	{
		input: "foo 0 1_2\n\n# EOF\n",
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"foo 0 1_2\"",
	},
	{
		input: "custom_metric_total 1 # {aa=bb}\n# EOF\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"b\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {aa=b\"",
	},
	{
		input: "custom_metric_total 1 # {aa=\"bb\"}\n# EOF\n",
		err: "expected value after exemplar labels, got \"INVALID\"",
		err: "expected value after exemplar labels, got \"\\n\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\"}\\n\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb"}`,
		err: "expected value after exemplar labels, got \"EOF\"",
	},
	{
		input: `custom_metric 1 # {aa="bb"}`,
		err: "metric name custom_metric does not support exemplars",
		err: "expected value after exemplar labels, got \"}\" (\"EOF\") while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\"}\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb",,cc="dd"} 1`,
		err: "expected label name, got \"COMMA\"",
		err: "expected label name, got \",c\" (\"COMMA\") while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\",,c\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb"} 1_2`,
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\"} 1_2\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb"} 0x1p-3`,
		err: "unsupported character in float",
		err: "unsupported character in float while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\"} 0x1p-3\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb"} true`,
		err: "strconv.ParseFloat: parsing \"true\": invalid syntax",
		err: "strconv.ParseFloat: parsing \"true\": invalid syntax while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\"} true\"",
	},
	{
		input: `custom_metric_total 1 # {aa="bb",cc=}`,
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"}\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {aa=\\\"bb\\\",cc=}\"",
	},
	{
		input: `custom_metric_total 1 # {aa=\"\xff\"} 9.0`,
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\\\\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {aa=\\\\\"",
	},
	{
		input: `{b="c",} 1`,
		err: `"INVALID" "{" is not a valid start token`,
		err: "expected a valid start token, got \"{\" (\"INVALID\") while parsing: \"{\"",
	},
	{
		input: `a 1 NaN`,
		err: `invalid timestamp`,
		err: `invalid timestamp NaN`,
	},
	{
		input: `a 1 -Inf`,
		err: `invalid timestamp`,
		err: `invalid timestamp -Inf`,
	},
	{
		input: `a 1 Inf`,
		err: `invalid timestamp`,
		err: `invalid timestamp +Inf`,
	},
	{
		input: "# TYPE hhh histogram\nhhh_bucket{le=\"+Inf\"} 1 # {aa=\"bb\"} 4 NaN",
		err: `invalid exemplar timestamp`,
		err: `invalid exemplar timestamp NaN`,
	},
	{
		input: "# TYPE hhh histogram\nhhh_bucket{le=\"+Inf\"} 1 # {aa=\"bb\"} 4 -Inf",
		err: `invalid exemplar timestamp`,
		err: `invalid exemplar timestamp -Inf`,
	},
	{
		input: "# TYPE hhh histogram\nhhh_bucket{le=\"+Inf\"} 1 # {aa=\"bb\"} 4 Inf",
		err: `invalid exemplar timestamp`,
		err: `invalid exemplar timestamp +Inf`,
	},
}

@ -586,35 +610,35 @@ func TestOMNullByteHandling(t *testing.T) {
	},
	{
		input: "a{b=\x00\"ssss\"} 1\n# EOF\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\x00\" (\"INVALID\") while parsing: \"a{b=\\x00\"",
	},
	{
		input: "a{b=\"\x00",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\\"\\x00\" (\"INVALID\") while parsing: \"a{b=\\\"\\x00\"",
	},
	{
		input: "a{b\x00=\"hiih\"} 1",
		err: "expected equal, got \"INVALID\"",
		err: "expected equal, got \"\\x00\" (\"INVALID\") while parsing: \"a{b\\x00\"",
	},
	{
		input: "a\x00{b=\"ddd\"} 1",
		err: "expected value after metric, got \"INVALID\"",
		err: "expected value after metric, got \"\\x00\" (\"INVALID\") while parsing: \"a\\x00\"",
	},
	{
		input: "#",
		err: "\"INVALID\" \" \" is not a valid start token",
		err: "expected a valid start token, got \"#\" (\"INVALID\") while parsing: \"#\"",
	},
	{
		input: "# H",
		err: "\"INVALID\" \" \" is not a valid start token",
		err: "expected a valid start token, got \"# H\" (\"INVALID\") while parsing: \"# H\"",
	},
	{
		input: "custom_metric_total 1 # {b=\x00\"ssss\"} 1\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\x00\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {b=\\x00\"",
	},
	{
		input: "custom_metric_total 1 # {b=\"\x00ss\"} 1\n",
		err: "expected label value, got \"INVALID\"",
		err: "expected label value, got \"\\\"\\x00\" (\"INVALID\") while parsing: \"custom_metric_total 1 # {b=\\\"\\x00\"",
	},
}
@ -254,8 +254,12 @@ func (p *PromParser) nextToken() token {
	}
}

func parseError(exp string, got token) error {
	return fmt.Errorf("%s, got %q", exp, got)
func (p *PromParser) parseError(exp string, got token) error {
	e := p.l.i + 1
	if len(p.l.b) < e {
		e = len(p.l.b)
	}
	return fmt.Errorf("%s, got %q (%q) while parsing: %q", exp, p.l.b[p.l.start:e], got, p.l.b[p.start:e])
}

// Next advances the parser to the next sample. It returns false if no

@ -278,7 +282,7 @@ func (p *PromParser) Next() (Entry, error) {
	case tMName:
		p.offsets = append(p.offsets, p.l.start, p.l.i)
	default:
		return EntryInvalid, parseError("expected metric name after "+t.String(), t2)
		return EntryInvalid, p.parseError("expected metric name after "+t.String(), t2)
	}
	switch t2 := p.nextToken(); t2 {
	case tText:

@ -308,11 +312,11 @@ func (p *PromParser) Next() (Entry, error) {
		}
	case tHelp:
		if !utf8.Valid(p.text) {
			return EntryInvalid, fmt.Errorf("help text is not a valid utf8 string")
			return EntryInvalid, fmt.Errorf("help text %q is not a valid utf8 string", p.text)
		}
	}
	if t := p.nextToken(); t != tLinebreak {
		return EntryInvalid, parseError("linebreak expected after metadata", t)
		return EntryInvalid, p.parseError("linebreak expected after metadata", t)
	}
	switch t {
	case tHelp:

@ -323,7 +327,7 @@ func (p *PromParser) Next() (Entry, error) {
	case tComment:
		p.text = p.l.buf()
		if t := p.nextToken(); t != tLinebreak {
			return EntryInvalid, parseError("linebreak expected after comment", t)
			return EntryInvalid, p.parseError("linebreak expected after comment", t)
		}
		return EntryComment, nil

@ -340,10 +344,10 @@ func (p *PromParser) Next() (Entry, error) {
		t2 = p.nextToken()
	}
	if t2 != tValue {
		return EntryInvalid, parseError("expected value after metric", t2)
		return EntryInvalid, p.parseError("expected value after metric", t2)
	}
	if p.val, err = parseFloat(yoloString(p.l.buf())); err != nil {
		return EntryInvalid, err
		return EntryInvalid, fmt.Errorf("%v while parsing: %q", err, p.l.b[p.start:p.l.i])
	}
	// Ensure canonical NaN value.
	if math.IsNaN(p.val) {

@ -356,18 +360,18 @@ func (p *PromParser) Next() (Entry, error) {
	case tTimestamp:
		p.hasTS = true
		if p.ts, err = strconv.ParseInt(yoloString(p.l.buf()), 10, 64); err != nil {
			return EntryInvalid, err
			return EntryInvalid, fmt.Errorf("%v while parsing: %q", err, p.l.b[p.start:p.l.i])
		}
		if t2 := p.nextToken(); t2 != tLinebreak {
			return EntryInvalid, parseError("expected next entry after timestamp", t2)
			return EntryInvalid, p.parseError("expected next entry after timestamp", t2)
		}
	default:
		return EntryInvalid, parseError("expected timestamp or new record", t)
		return EntryInvalid, p.parseError("expected timestamp or new record", t)
	}
	return EntrySeries, nil

	default:
		err = fmt.Errorf("%q is not a valid start token", t)
		err = p.parseError("expected a valid start token", t)
	}
	return EntryInvalid, err
}

@ -380,18 +384,18 @@ func (p *PromParser) parseLVals() error {
		return nil
	case tLName:
	default:
		return parseError("expected label name", t)
		return p.parseError("expected label name", t)
	}
	p.offsets = append(p.offsets, p.l.start, p.l.i)

	if t := p.nextToken(); t != tEqual {
		return parseError("expected equal", t)
		return p.parseError("expected equal", t)
	}
	if t := p.nextToken(); t != tLValue {
		return parseError("expected label value", t)
		return p.parseError("expected label value", t)
	}
	if !utf8.Valid(p.l.buf()) {
		return fmt.Errorf("invalid UTF-8 label value")
		return fmt.Errorf("invalid UTF-8 label value: %q", p.l.buf())
	}

	// The promlexer ensures the value string is quoted. Strip first
@@ -219,63 +219,63 @@ func TestPromParseErrors(t *testing.T) {
}{
{
input: "a",
err: "expected value after metric, got \"INVALID\"",
err: "expected value after metric, got \"\\n\" (\"INVALID\") while parsing: \"a\\n\"",
},
{
input: "a{b='c'} 1\n",
err: "expected label value, got \"INVALID\"",
err: "expected label value, got \"'\" (\"INVALID\") while parsing: \"a{b='\"",
},
{
input: "a{b=\n",
err: "expected label value, got \"INVALID\"",
err: "expected label value, got \"\\n\" (\"INVALID\") while parsing: \"a{b=\\n\"",
},
{
input: "a{\xff=\"foo\"} 1\n",
err: "expected label name, got \"INVALID\"",
err: "expected label name, got \"\\xff\" (\"INVALID\") while parsing: \"a{\\xff\"",
},
{
input: "a{b=\"\xff\"} 1\n",
err: "invalid UTF-8 label value",
err: "invalid UTF-8 label value: \"\\\"\\xff\\\"\"",
},
{
input: "a true\n",
err: "strconv.ParseFloat: parsing \"true\": invalid syntax",
err: "strconv.ParseFloat: parsing \"true\": invalid syntax while parsing: \"a true\"",
},
{
input: "something_weird{problem=\"",
err: "expected label value, got \"INVALID\"",
err: "expected label value, got \"\\\"\\n\" (\"INVALID\") while parsing: \"something_weird{problem=\\\"\\n\"",
},
{
input: "empty_label_name{=\"\"} 0",
err: "expected label name, got \"EQUAL\"",
err: "expected label name, got \"=\\\"\" (\"EQUAL\") while parsing: \"empty_label_name{=\\\"\"",
},
{
input: "foo 1_2\n",
err: "unsupported character in float",
err: "unsupported character in float while parsing: \"foo 1_2\"",
},
{
input: "foo 0x1p-3\n",
err: "unsupported character in float",
err: "unsupported character in float while parsing: \"foo 0x1p-3\"",
},
{
input: "foo 0x1P-3\n",
err: "unsupported character in float",
err: "unsupported character in float while parsing: \"foo 0x1P-3\"",
},
{
input: "foo 0 1_2\n",
err: "expected next entry after timestamp, got \"INVALID\"",
err: "expected next entry after timestamp, got \"_\" (\"INVALID\") while parsing: \"foo 0 1_\"",
},
{
input: `{a="ok"} 1`,
err: `"INVALID" is not a valid start token`,
err: "expected a valid start token, got \"{\" (\"INVALID\") while parsing: \"{\"",
},
{
input: "# TYPE #\n#EOF\n",
err: "expected metric name after TYPE, got \"INVALID\"",
err: "expected metric name after TYPE, got \"#\" (\"INVALID\") while parsing: \"# TYPE #\"",
},
{
input: "# HELP #\n#EOF\n",
err: "expected metric name after HELP, got \"INVALID\"",
err: "expected metric name after HELP, got \"#\" (\"INVALID\") while parsing: \"# HELP #\"",
},
}

@@ -313,23 +313,23 @@ func TestPromNullByteHandling(t *testing.T) {
},
{
input: "a{b=\x00\"ssss\"} 1\n",
err: "expected label value, got \"INVALID\"",
err: "expected label value, got \"\\x00\" (\"INVALID\") while parsing: \"a{b=\\x00\"",
},
{
input: "a{b=\"\x00",
err: "expected label value, got \"INVALID\"",
err: "expected label value, got \"\\\"\\x00\\n\" (\"INVALID\") while parsing: \"a{b=\\\"\\x00\\n\"",
},
{
input: "a{b\x00=\"hiih\"} 1",
err: "expected equal, got \"INVALID\"",
err: "expected equal, got \"\\x00\" (\"INVALID\") while parsing: \"a{b\\x00\"",
},
{
input: "a\x00{b=\"ddd\"} 1",
err: "expected value after metric, got \"INVALID\"",
err: "expected value after metric, got \"\\x00\" (\"INVALID\") while parsing: \"a\\x00\"",
},
{
input: "a 0 1\x00",
err: "expected next entry after timestamp, got \"INVALID\"",
err: "expected next entry after timestamp, got \"\\x00\" (\"INVALID\") while parsing: \"a 0 1\\x00\"",
},
}

@@ -10,6 +10,7 @@ import (
math "math"
math_bits "math/bits"

_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
types "github.com/gogo/protobuf/types"
)

@@ -281,12 +282,12 @@ func (m *Quantile) GetValue() float64 {
}

type Summary struct {
SampleCount uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount,proto3" json:"sample_count,omitempty"`
SampleSum float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum,proto3" json:"sample_sum,omitempty"`
Quantile []*Quantile `protobuf:"bytes,3,rep,name=quantile,proto3" json:"quantile,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
SampleCount uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount,proto3" json:"sample_count,omitempty"`
SampleSum float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum,proto3" json:"sample_sum,omitempty"`
Quantile []Quantile `protobuf:"bytes,3,rep,name=quantile,proto3" json:"quantile"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}

func (m *Summary) Reset() { *m = Summary{} }

@@ -336,7 +337,7 @@ func (m *Summary) GetSampleSum() float64 {
return 0
}

func (m *Summary) GetQuantile() []*Quantile {
func (m *Summary) GetQuantile() []Quantile {
if m != nil {
return m.Quantile
}

@@ -395,7 +396,7 @@ type Histogram struct {
SampleCountFloat float64 `protobuf:"fixed64,4,opt,name=sample_count_float,json=sampleCountFloat,proto3" json:"sample_count_float,omitempty"`
SampleSum float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum,proto3" json:"sample_sum,omitempty"`
// Buckets for the conventional histogram.
Bucket []*Bucket `protobuf:"bytes,3,rep,name=bucket,proto3" json:"bucket,omitempty"`
Bucket []Bucket `protobuf:"bytes,3,rep,name=bucket,proto3" json:"bucket"`
// schema defines the bucket schema. Currently, valid numbers are -4 <= n <= 8.
// They are all for base-2 bucket schemas, where 1 is a bucket boundary in each case, and
// then each power of two is divided into 2^n logarithmic buckets.

@@ -406,14 +407,14 @@ type Histogram struct {
ZeroCount uint64 `protobuf:"varint,7,opt,name=zero_count,json=zeroCount,proto3" json:"zero_count,omitempty"`
ZeroCountFloat float64 `protobuf:"fixed64,8,opt,name=zero_count_float,json=zeroCountFloat,proto3" json:"zero_count_float,omitempty"`
// Negative buckets for the native histogram.
NegativeSpan []*BucketSpan `protobuf:"bytes,9,rep,name=negative_span,json=negativeSpan,proto3" json:"negative_span,omitempty"`
NegativeSpan []BucketSpan `protobuf:"bytes,9,rep,name=negative_span,json=negativeSpan,proto3" json:"negative_span"`
// Use either "negative_delta" or "negative_count", the former for
// regular histograms with integer counts, the latter for float
// histograms.
NegativeDelta []int64 `protobuf:"zigzag64,10,rep,packed,name=negative_delta,json=negativeDelta,proto3" json:"negative_delta,omitempty"`
NegativeCount []float64 `protobuf:"fixed64,11,rep,packed,name=negative_count,json=negativeCount,proto3" json:"negative_count,omitempty"`
// Positive buckets for the native histogram.
PositiveSpan []*BucketSpan `protobuf:"bytes,12,rep,name=positive_span,json=positiveSpan,proto3" json:"positive_span,omitempty"`
PositiveSpan []BucketSpan `protobuf:"bytes,12,rep,name=positive_span,json=positiveSpan,proto3" json:"positive_span"`
// Use either "positive_delta" or "positive_count", the former for
// regular histograms with integer counts, the latter for float
// histograms.

@@ -478,7 +479,7 @@ func (m *Histogram) GetSampleSum() float64 {
return 0
}

func (m *Histogram) GetBucket() []*Bucket {
func (m *Histogram) GetBucket() []Bucket {
if m != nil {
return m.Bucket
}

@@ -513,7 +514,7 @@ func (m *Histogram) GetZeroCountFloat() float64 {
return 0
}

func (m *Histogram) GetNegativeSpan() []*BucketSpan {
func (m *Histogram) GetNegativeSpan() []BucketSpan {
if m != nil {
return m.NegativeSpan
}

@@ -534,7 +535,7 @@ func (m *Histogram) GetNegativeCount() []float64 {
return nil
}

func (m *Histogram) GetPositiveSpan() []*BucketSpan {
func (m *Histogram) GetPositiveSpan() []BucketSpan {
if m != nil {
return m.PositiveSpan
}

@@ -688,7 +689,7 @@ func (m *BucketSpan) GetLength() uint32 {
}

type Exemplar struct {
Label []*LabelPair `protobuf:"bytes,1,rep,name=label,proto3" json:"label,omitempty"`
Label []LabelPair `protobuf:"bytes,1,rep,name=label,proto3" json:"label"`
Value float64 `protobuf:"fixed64,2,opt,name=value,proto3" json:"value,omitempty"`
Timestamp *types.Timestamp `protobuf:"bytes,3,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`

@@ -729,7 +730,7 @@ func (m *Exemplar) XXX_DiscardUnknown() {

var xxx_messageInfo_Exemplar proto.InternalMessageInfo

func (m *Exemplar) GetLabel() []*LabelPair {
func (m *Exemplar) GetLabel() []LabelPair {
if m != nil {
return m.Label
}

@@ -751,16 +752,16 @@ func (m *Exemplar) GetTimestamp() *types.Timestamp {
}

type Metric struct {
Label []*LabelPair `protobuf:"bytes,1,rep,name=label,proto3" json:"label,omitempty"`
Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge,proto3" json:"gauge,omitempty"`
Counter *Counter `protobuf:"bytes,3,opt,name=counter,proto3" json:"counter,omitempty"`
Summary *Summary `protobuf:"bytes,4,opt,name=summary,proto3" json:"summary,omitempty"`
Untyped *Untyped `protobuf:"bytes,5,opt,name=untyped,proto3" json:"untyped,omitempty"`
Histogram *Histogram `protobuf:"bytes,7,opt,name=histogram,proto3" json:"histogram,omitempty"`
TimestampMs int64 `protobuf:"varint,6,opt,name=timestamp_ms,json=timestampMs,proto3" json:"timestamp_ms,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Label []LabelPair `protobuf:"bytes,1,rep,name=label,proto3" json:"label"`
Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge,proto3" json:"gauge,omitempty"`
Counter *Counter `protobuf:"bytes,3,opt,name=counter,proto3" json:"counter,omitempty"`
Summary *Summary `protobuf:"bytes,4,opt,name=summary,proto3" json:"summary,omitempty"`
Untyped *Untyped `protobuf:"bytes,5,opt,name=untyped,proto3" json:"untyped,omitempty"`
Histogram *Histogram `protobuf:"bytes,7,opt,name=histogram,proto3" json:"histogram,omitempty"`
TimestampMs int64 `protobuf:"varint,6,opt,name=timestamp_ms,json=timestampMs,proto3" json:"timestamp_ms,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}

func (m *Metric) Reset() { *m = Metric{} }

@@ -796,7 +797,7 @@ func (m *Metric) XXX_DiscardUnknown() {

var xxx_messageInfo_Metric proto.InternalMessageInfo

func (m *Metric) GetLabel() []*LabelPair {
func (m *Metric) GetLabel() []LabelPair {
if m != nil {
return m.Label
}

@@ -849,7 +850,7 @@ type MetricFamily struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Help string `protobuf:"bytes,2,opt,name=help,proto3" json:"help,omitempty"`
Type MetricType `protobuf:"varint,3,opt,name=type,proto3,enum=io.prometheus.client.MetricType" json:"type,omitempty"`
Metric []*Metric `protobuf:"bytes,4,rep,name=metric,proto3" json:"metric,omitempty"`
Metric []Metric `protobuf:"bytes,4,rep,name=metric,proto3" json:"metric"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`

@@ -909,7 +910,7 @@ func (m *MetricFamily) GetType() MetricType {
return MetricType_COUNTER
}

func (m *MetricFamily) GetMetric() []*Metric {
func (m *MetricFamily) GetMetric() []Metric {
if m != nil {
return m.Metric
}

@@ -937,63 +938,65 @@ func init() {
}

var fileDescriptor_d1e5ddb18987a258 = []byte{
// 894 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x55, 0xdd, 0x6e, 0xe3, 0x44,
0x18, 0xc5, 0xcd, 0xaf, 0xbf, 0x34, 0xdd, 0xec, 0x50, 0xad, 0xac, 0x42, 0xdb, 0x60, 0x09, 0xa9,
0x20, 0xe4, 0x08, 0xe8, 0x0a, 0x84, 0xe0, 0xa2, 0xdd, 0xcd, 0x76, 0x91, 0xc8, 0xee, 0x32, 0x49,
0x2e, 0x16, 0x2e, 0xac, 0x49, 0x3a, 0x4d, 0x2c, 0x3c, 0x1e, 0x63, 0x8f, 0x57, 0x94, 0x17, 0xe0,
0x9a, 0x57, 0xe0, 0x61, 0x10, 0x97, 0x3c, 0x02, 0x2a, 0x0f, 0x02, 0x9a, 0x3f, 0xbb, 0x59, 0x39,
0xcb, 0xb2, 0x77, 0x99, 0xe3, 0x73, 0xbe, 0x39, 0x67, 0x3c, 0x39, 0x06, 0x3f, 0xe2, 0xa3, 0x34,
0xe3, 0x8c, 0x8a, 0x35, 0x2d, 0xf2, 0xd1, 0x32, 0x8e, 0x68, 0x22, 0x46, 0x8c, 0x8a, 0x2c, 0x5a,
0xe6, 0x41, 0x9a, 0x71, 0xc1, 0xd1, 0x7e, 0xc4, 0x83, 0x8a, 0x13, 0x68, 0xce, 0xc1, 0xf1, 0x8a,
0xf3, 0x55, 0x4c, 0x47, 0x8a, 0xb3, 0x28, 0xae, 0x46, 0x22, 0x62, 0x34, 0x17, 0x84, 0xa5, 0x5a,
0xe6, 0xdf, 0x07, 0xf7, 0x1b, 0xb2, 0xa0, 0xf1, 0x33, 0x12, 0x65, 0x08, 0x41, 0x33, 0x21, 0x8c,
0x7a, 0xce, 0xd0, 0x39, 0x71, 0xb1, 0xfa, 0x8d, 0xf6, 0xa1, 0xf5, 0x82, 0xc4, 0x05, 0xf5, 0x76,
0x14, 0xa8, 0x17, 0xfe, 0x21, 0xb4, 0x2e, 0x48, 0xb1, 0xba, 0xf5, 0x58, 0x6a, 0x1c, 0xfb, 0xf8,
0x7b, 0xe8, 0x3c, 0xe0, 0x45, 0x22, 0x68, 0x56, 0x4f, 0x40, 0x5f, 0x40, 0x97, 0xfe, 0x44, 0x59,
0x1a, 0x93, 0x4c, 0x0d, 0xee, 0x7d, 0x72, 0x14, 0xd4, 0x05, 0x08, 0xc6, 0x86, 0x85, 0x4b, 0xbe,
0xff, 0x25, 0x74, 0xbf, 0x2d, 0x48, 0x22, 0xa2, 0x98, 0xa2, 0x03, 0xe8, 0xfe, 0x68, 0x7e, 0x9b,
0x0d, 0xca, 0xf5, 0xa6, 0xf3, 0xd2, 0xda, 0x2f, 0x0e, 0x74, 0xa6, 0x05, 0x63, 0x24, 0xbb, 0x46,
0xef, 0xc1, 0x6e, 0x4e, 0x58, 0x1a, 0xd3, 0x70, 0x29, 0xdd, 0xaa, 0x09, 0x4d, 0xdc, 0xd3, 0x98,
0x0a, 0x80, 0x0e, 0x01, 0x0c, 0x25, 0x2f, 0x98, 0x99, 0xe4, 0x6a, 0x64, 0x5a, 0x30, 0x99, 0xa3,
0xdc, 0xbf, 0x31, 0x6c, 0x6c, 0xcf, 0x61, 0x1d, 0x57, 0xfe, 0xfc, 0x63, 0xe8, 0xcc, 0x13, 0x71,
0x9d, 0xd2, 0xcb, 0x2d, 0xa7, 0xf8, 0x77, 0x13, 0xdc, 0xc7, 0x51, 0x2e, 0xf8, 0x2a, 0x23, 0xec,
0x75, 0xcc, 0x7e, 0x04, 0xe8, 0x36, 0x25, 0xbc, 0x8a, 0x39, 0x11, 0x5e, 0x53, 0xcd, 0x1c, 0xdc,
0x22, 0x3e, 0x92, 0xf8, 0x7f, 0x45, 0x3b, 0x85, 0xf6, 0xa2, 0x58, 0xfe, 0x40, 0x85, 0x09, 0xf6,
0x6e, 0x7d, 0xb0, 0x73, 0xc5, 0xc1, 0x86, 0x8b, 0xee, 0x41, 0x3b, 0x5f, 0xae, 0x29, 0x23, 0x5e,
0x6b, 0xe8, 0x9c, 0xdc, 0xc5, 0x66, 0x85, 0xde, 0x87, 0xbd, 0x9f, 0x69, 0xc6, 0x43, 0xb1, 0xce,
0x68, 0xbe, 0xe6, 0xf1, 0xa5, 0xd7, 0x56, 0x1b, 0xf6, 0x25, 0x3a, 0xb3, 0xa0, 0xf4, 0xa4, 0x68,
0x3a, 0x62, 0x47, 0x45, 0x74, 0x25, 0xa2, 0x03, 0x9e, 0xc0, 0xa0, 0x7a, 0x6c, 0xe2, 0x75, 0xd5,
0x9c, 0xbd, 0x92, 0xa4, 0xc3, 0x8d, 0xa1, 0x9f, 0xd0, 0x15, 0x11, 0xd1, 0x0b, 0x1a, 0xe6, 0x29,
0x49, 0x3c, 0x57, 0x85, 0x18, 0xbe, 0x2a, 0xc4, 0x34, 0x25, 0x09, 0xde, 0xb5, 0x32, 0xb9, 0x92,
0xb6, 0xcb, 0x31, 0x97, 0x34, 0x16, 0xc4, 0x83, 0x61, 0xe3, 0x04, 0xe1, 0x72, 0xf8, 0x43, 0x09,
0x6e, 0xd0, 0xb4, 0xf5, 0xde, 0xb0, 0x21, 0xd3, 0x59, 0x54, 0xdb, 0x1f, 0x43, 0x3f, 0xe5, 0x79,
0x54, 0x99, 0xda, 0x7d, 0x5d, 0x53, 0x56, 0x66, 0x4d, 0x95, 0x63, 0xb4, 0xa9, 0xbe, 0x36, 0x65,
0xd1, 0xd2, 0x54, 0x49, 0xd3, 0xa6, 0xf6, 0xb4, 0x29, 0x8b, 0x2a, 0x53, 0xfe, 0xef, 0x0e, 0xb4,
0xf5, 0x56, 0xe8, 0x03, 0x18, 0x2c, 0x0b, 0x56, 0xc4, 0xb7, 0x83, 0xe8, 0x6b, 0x76, 0xa7, 0xc2,
0x75, 0x94, 0x53, 0xb8, 0xf7, 0x32, 0x75, 0xe3, 0xba, 0xed, 0xbf, 0x24, 0xd0, 0x6f, 0xe5, 0x18,
0x7a, 0x45, 0x9a, 0xd2, 0x2c, 0x5c, 0xf0, 0x22, 0xb9, 0x34, 0x77, 0x0e, 0x14, 0x74, 0x2e, 0x91,
0x8d, 0x5e, 0x68, 0xfc, 0xef, 0x5e, 0x80, 0xea, 0xc8, 0xe4, 0x45, 0xe4, 0x57, 0x57, 0x39, 0xd5,
0x09, 0xee, 0x62, 0xb3, 0x92, 0x78, 0x4c, 0x93, 0x95, 0x58, 0xab, 0xdd, 0xfb, 0xd8, 0xac, 0xfc,
0x5f, 0x1d, 0xe8, 0xda, 0xa1, 0xe8, 0x3e, 0xb4, 0x62, 0xd9, 0x8a, 0x9e, 0xa3, 0x5e, 0xd0, 0x71,
0xbd, 0x87, 0xb2, 0x38, 0xb1, 0x66, 0xd7, 0x37, 0x0e, 0xfa, 0x1c, 0xdc, 0xb2, 0x75, 0x4d, 0xa8,
0x83, 0x40, 0xf7, 0x72, 0x60, 0x7b, 0x39, 0x98, 0x59, 0x06, 0xae, 0xc8, 0xfe, 0x3f, 0x3b, 0xd0,
0x9e, 0xa8, 0x96, 0x7f, 0x53, 0x47, 0x1f, 0x43, 0x6b, 0x25, 0x7b, 0xda, 0x94, 0xec, 0x3b, 0xf5,
0x32, 0x55, 0xe5, 0x58, 0x33, 0xd1, 0x67, 0xd0, 0x59, 0xea, 0xee, 0x36, 0x66, 0x0f, 0xeb, 0x45,
0xa6, 0xe0, 0xb1, 0x65, 0x4b, 0x61, 0xae, 0x8b, 0x55, 0xdd, 0x81, 0xad, 0x42, 0xd3, 0xbe, 0xd8,
0xb2, 0xa5, 0xb0, 0xd0, 0x45, 0xa8, 0x4a, 0x63, 0xab, 0xd0, 0xb4, 0x25, 0xb6, 0x6c, 0xf4, 0x15,
0xb8, 0x6b, 0xdb, 0x8f, 0xaa, 0x2c, 0xb6, 0x1e, 0x4c, 0x59, 0xa3, 0xb8, 0x52, 0xc8, 0x46, 0x2d,
0xcf, 0x3a, 0x64, 0xb9, 0x6a, 0xa4, 0x06, 0xee, 0x95, 0xd8, 0x24, 0xf7, 0x7f, 0x73, 0x60, 0x57,
0xbf, 0x81, 0x47, 0x84, 0x45, 0xf1, 0x75, 0xed, 0x27, 0x12, 0x41, 0x73, 0x4d, 0xe3, 0xd4, 0x7c,
0x21, 0xd5, 0x6f, 0x74, 0x0a, 0x4d, 0xe9, 0x51, 0x1d, 0xe1, 0xde, 0xb6, 0x7f, 0xb8, 0x9e, 0x3c,
0xbb, 0x4e, 0x29, 0x56, 0x6c, 0xd9, 0xb9, 0xfa, 0xab, 0xee, 0x35, 0x5f, 0xd5, 0xb9, 0x5a, 0x87,
0x0d, 0xf7, 0xc3, 0x05, 0x40, 0x35, 0x09, 0xf5, 0xa0, 0xf3, 0xe0, 0xe9, 0xfc, 0xc9, 0x6c, 0x8c,
0x07, 0x6f, 0x21, 0x17, 0x5a, 0x17, 0x67, 0xf3, 0x8b, 0xf1, 0xc0, 0x91, 0xf8, 0x74, 0x3e, 0x99,
0x9c, 0xe1, 0xe7, 0x83, 0x1d, 0xb9, 0x98, 0x3f, 0x99, 0x3d, 0x7f, 0x36, 0x7e, 0x38, 0x68, 0xa0,
0x3e, 0xb8, 0x8f, 0xbf, 0x9e, 0xce, 0x9e, 0x5e, 0xe0, 0xb3, 0xc9, 0xa0, 0x89, 0xde, 0x86, 0x3b,
0x4a, 0x13, 0x56, 0x60, 0xeb, 0xdc, 0xff, 0xe3, 0xe6, 0xc8, 0xf9, 0xf3, 0xe6, 0xc8, 0xf9, 0xeb,
0xe6, 0xc8, 0xf9, 0x6e, 0x3f, 0xe2, 0x61, 0x65, 0x2b, 0xd4, 0xb6, 0x16, 0x6d, 0x75, 0x9b, 0x3f,
0xfd, 0x37, 0x00, 0x00, 0xff, 0xff, 0xf0, 0xbc, 0x25, 0x8b, 0xaf, 0x08, 0x00, 0x00,
// 923 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x56, 0xdd, 0x8e, 0xdb, 0x44,
0x18, 0xad, 0x1b, 0xe7, 0xc7, 0x5f, 0x36, 0xdb, 0x74, 0x88, 0x2a, 0x6b, 0x61, 0x37, 0xc1, 0x12,
0xd2, 0x82, 0x50, 0x22, 0xa0, 0x08, 0x54, 0x40, 0x62, 0xb7, 0xdd, 0x6e, 0x51, 0x49, 0x5b, 0x26,
0xc9, 0x45, 0xe1, 0xc2, 0x9a, 0x64, 0x67, 0x1d, 0x0b, 0xdb, 0x63, 0xec, 0x71, 0xc5, 0x72, 0xcf,
0x25, 0xd7, 0xbc, 0x02, 0x4f, 0x82, 0x7a, 0xc9, 0x13, 0x20, 0xb4, 0xef, 0xc0, 0x3d, 0x9a, 0x3f,
0x3b, 0x5b, 0x39, 0x85, 0x15, 0x77, 0x33, 0xc7, 0xe7, 0xfb, 0xe6, 0x9c, 0x99, 0xc9, 0x99, 0x80,
0x17, 0xb2, 0x49, 0x9a, 0xb1, 0x98, 0xf2, 0x35, 0x2d, 0xf2, 0xc9, 0x2a, 0x0a, 0x69, 0xc2, 0x27,
0x31, 0xe5, 0x59, 0xb8, 0xca, 0xc7, 0x69, 0xc6, 0x38, 0x43, 0x83, 0x90, 0x8d, 0x2b, 0xce, 0x58,
0x71, 0xf6, 0x06, 0x01, 0x0b, 0x98, 0x24, 0x4c, 0xc4, 0x48, 0x71, 0xf7, 0x86, 0x01, 0x63, 0x41,
0x44, 0x27, 0x72, 0xb6, 0x2c, 0xce, 0x27, 0x3c, 0x8c, 0x69, 0xce, 0x49, 0x9c, 0x2a, 0x82, 0xf7,
0x31, 0x38, 0x5f, 0x93, 0x25, 0x8d, 0x9e, 0x91, 0x30, 0x43, 0x08, 0xec, 0x84, 0xc4, 0xd4, 0xb5,
0x46, 0xd6, 0xa1, 0x83, 0xe5, 0x18, 0x0d, 0xa0, 0xf9, 0x82, 0x44, 0x05, 0x75, 0x6f, 0x4a, 0x50,
0x4d, 0xbc, 0x7d, 0x68, 0x9e, 0x92, 0x22, 0xd8, 0xf8, 0x2c, 0x6a, 0x2c, 0xf3, 0xf9, 0x3b, 0x68,
0xdf, 0x67, 0x45, 0xc2, 0x69, 0x56, 0x4f, 0x40, 0xf7, 0xa0, 0x43, 0x7f, 0xa4, 0x71, 0x1a, 0x91,
0x4c, 0x36, 0xee, 0x7e, 0x78, 0x30, 0xae, 0xb3, 0x35, 0x3e, 0xd1, 0x2c, 0x5c, 0xf2, 0xbd, 0xcf,
0xa1, 0xf3, 0x4d, 0x41, 0x12, 0x1e, 0x46, 0x14, 0xed, 0x41, 0xe7, 0x07, 0x3d, 0xd6, 0x0b, 0x94,
0xf3, 0xab, 0xca, 0x4b, 0x69, 0xbf, 0x58, 0xd0, 0x9e, 0x15, 0x71, 0x4c, 0xb2, 0x0b, 0xf4, 0x36,
0xec, 0xe4, 0x24, 0x4e, 0x23, 0xea, 0xaf, 0x84, 0x5a, 0xd9, 0xc1, 0xc6, 0x5d, 0x85, 0x49, 0x03,
0x68, 0x1f, 0x40, 0x53, 0xf2, 0x22, 0xd6, 0x9d, 0x1c, 0x85, 0xcc, 0x8a, 0x18, 0x7d, 0xb9, 0xb1,
0x7e, 0x63, 0xd4, 0xd8, 0xee, 0xc3, 0x28, 0x3e, 0xb6, 0x5f, 0xfe, 0x39, 0xbc, 0x51, 0xa9, 0xf4,
0x86, 0xd0, 0x5e, 0x24, 0xfc, 0x22, 0xa5, 0x67, 0x5b, 0xf6, 0xf2, 0x6f, 0x1b, 0x9c, 0x47, 0x61,
0xce, 0x59, 0x90, 0x91, 0xf8, 0xbf, 0x48, 0x7e, 0x1f, 0xd0, 0x26, 0xc5, 0x3f, 0x8f, 0x18, 0xe1,
0xae, 0x2d, 0x7b, 0xf6, 0x37, 0x88, 0x0f, 0x05, 0xfe, 0x6f, 0x06, 0xef, 0x41, 0x6b, 0x59, 0xac,
0xbe, 0xa7, 0x5c, 0xdb, 0x7b, 0xab, 0xde, 0xde, 0xb1, 0xe4, 0x68, 0x73, 0xba, 0x02, 0xdd, 0x81,
0x56, 0xbe, 0x5a, 0xd3, 0x98, 0xb8, 0xcd, 0x91, 0x75, 0x78, 0x1b, 0xeb, 0x19, 0x7a, 0x07, 0x76,
0x7f, 0xa2, 0x19, 0xf3, 0xf9, 0x3a, 0xa3, 0xf9, 0x9a, 0x45, 0x67, 0x6e, 0x4b, 0x2e, 0xdb, 0x13,
0xe8, 0xdc, 0x80, 0x42, 0x99, 0xa4, 0x29, 0xa3, 0x6d, 0x69, 0xd4, 0x11, 0x88, 0xb2, 0x79, 0x08,
0xfd, 0xea, 0xb3, 0x36, 0xd9, 0x91, 0x7d, 0x76, 0x4b, 0x92, 0xb2, 0xf8, 0x18, 0x7a, 0x09, 0x0d,
0x08, 0x0f, 0x5f, 0x50, 0x3f, 0x4f, 0x49, 0xe2, 0x3a, 0xd2, 0xca, 0xe8, 0x75, 0x56, 0x66, 0x29,
0x49, 0xb4, 0x9d, 0x1d, 0x53, 0x2c, 0x30, 0x21, 0xbe, 0x6c, 0x76, 0x46, 0x23, 0x4e, 0x5c, 0x18,
0x35, 0x0e, 0x11, 0x2e, 0x97, 0x78, 0x20, 0xc0, 0x2b, 0x34, 0x65, 0xa0, 0x3b, 0x6a, 0x08, 0x8f,
0x06, 0x55, 0x26, 0x1e, 0x43, 0x2f, 0x65, 0x79, 0x58, 0x49, 0xdb, 0xb9, 0x9e, 0x34, 0x53, 0x6c,
0xa4, 0x95, 0xcd, 0x94, 0xb4, 0x9e, 0x92, 0x66, 0xd0, 0x52, 0x5a, 0x49, 0x53, 0xd2, 0x76, 0x95,
0x34, 0x83, 0x4a, 0x69, 0xde, 0xef, 0x16, 0xb4, 0xd4, 0x82, 0xe8, 0x5d, 0xe8, 0xaf, 0x8a, 0xb8,
0x88, 0x36, 0xed, 0xa8, 0x8b, 0x77, 0xab, 0xc2, 0x95, 0xa1, 0xbb, 0x70, 0xe7, 0x55, 0xea, 0x95,
0x0b, 0x38, 0x78, 0xa5, 0x40, 0x9d, 0xd0, 0x10, 0xba, 0x45, 0x9a, 0xd2, 0xcc, 0x5f, 0xb2, 0x22,
0x39, 0xd3, 0xb7, 0x10, 0x24, 0x74, 0x2c, 0x90, 0x2b, 0x79, 0xd1, 0xb8, 0x76, 0x5e, 0x40, 0xb5,
0x71, 0xe2, 0x52, 0xb2, 0xf3, 0xf3, 0x9c, 0x2a, 0x07, 0xb7, 0xb1, 0x9e, 0x09, 0x3c, 0xa2, 0x49,
0xc0, 0xd7, 0x72, 0xf5, 0x1e, 0xd6, 0x33, 0xef, 0x57, 0x0b, 0x3a, 0xa6, 0x29, 0xfa, 0x0c, 0x9a,
0x91, 0x48, 0x4b, 0xd7, 0x92, 0xc7, 0x34, 0xac, 0xd7, 0x50, 0x06, 0xaa, 0x3e, 0x25, 0x55, 0x53,
0x9f, 0x47, 0xe8, 0x53, 0x70, 0xca, 0x4c, 0xd6, 0xd6, 0xf6, 0xc6, 0x2a, 0xb5, 0xc7, 0x26, 0xb5,
0xc7, 0x73, 0xc3, 0xc0, 0x15, 0xd9, 0xfb, 0xb9, 0x01, 0xad, 0xa9, 0x7c, 0x19, 0xfe, 0x9f, 0xae,
0x0f, 0xa0, 0x19, 0x88, 0x2c, 0xd7, 0x41, 0xfc, 0x66, 0x7d, 0xb1, 0x8c, 0x7b, 0xac, 0x98, 0xe8,
0x13, 0x68, 0xaf, 0x54, 0xbe, 0x6b, 0xc9, 0xfb, 0xf5, 0x45, 0xfa, 0x11, 0xc0, 0x86, 0x2d, 0x0a,
0x73, 0x15, 0xbe, 0xf2, 0x3e, 0x6c, 0x2d, 0xd4, 0x09, 0x8d, 0x0d, 0x5b, 0x14, 0x16, 0x2a, 0x26,
0x65, 0x98, 0x6c, 0x2d, 0xd4, 0x59, 0x8a, 0x0d, 0x1b, 0x7d, 0x01, 0xce, 0xda, 0xa4, 0xa7, 0x0c,
0x91, 0xad, 0xdb, 0x53, 0x86, 0x2c, 0xae, 0x2a, 0x44, 0xde, 0x96, 0x3b, 0xee, 0xc7, 0xb9, 0x4c,
0xaa, 0x06, 0xee, 0x96, 0xd8, 0x34, 0xf7, 0x7e, 0xb3, 0x60, 0x47, 0x9d, 0xc3, 0x43, 0x12, 0x87,
0xd1, 0x45, 0xed, 0x33, 0x8a, 0xc0, 0x5e, 0xd3, 0x28, 0xd5, 0xaf, 0xa8, 0x1c, 0xa3, 0xbb, 0x60,
0x0b, 0x8d, 0x72, 0x0b, 0x77, 0xb7, 0xfd, 0xe6, 0x55, 0xe7, 0xf9, 0x45, 0x4a, 0xb1, 0x64, 0x8b,
0x44, 0x56, 0xff, 0x07, 0x5c, 0xfb, 0x75, 0x89, 0xac, 0xea, 0x4c, 0x22, 0xab, 0x8a, 0xf7, 0x96,
0x00, 0x55, 0x3f, 0xd4, 0x85, 0xf6, 0xfd, 0xa7, 0x8b, 0x27, 0xf3, 0x13, 0xdc, 0xbf, 0x81, 0x1c,
0x68, 0x9e, 0x1e, 0x2d, 0x4e, 0x4f, 0xfa, 0x96, 0xc0, 0x67, 0x8b, 0xe9, 0xf4, 0x08, 0x3f, 0xef,
0xdf, 0x14, 0x93, 0xc5, 0x93, 0xf9, 0xf3, 0x67, 0x27, 0x0f, 0xfa, 0x0d, 0xd4, 0x03, 0xe7, 0xd1,
0x57, 0xb3, 0xf9, 0xd3, 0x53, 0x7c, 0x34, 0xed, 0xdb, 0xe8, 0x0d, 0xb8, 0x25, 0x6b, 0xfc, 0x0a,
0x6c, 0x1e, 0x7b, 0x2f, 0x2f, 0x0f, 0xac, 0x3f, 0x2e, 0x0f, 0xac, 0xbf, 0x2e, 0x0f, 0xac, 0x6f,
0x07, 0x21, 0xf3, 0x2b, 0x71, 0xbe, 0x12, 0xb7, 0x6c, 0xc9, 0x9b, 0xfd, 0xd1, 0x3f, 0x01, 0x00,
0x00, 0xff, 0xff, 0x52, 0x2d, 0xb5, 0x31, 0xef, 0x08, 0x00, 0x00,
}

func (m *LabelPair) Marshal() (dAtA []byte, err error) {

@@ -2496,7 +2499,7 @@ func (m *Summary) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Quantile = append(m.Quantile, &Quantile{})
m.Quantile = append(m.Quantile, Quantile{})
if err := m.Quantile[len(m.Quantile)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -2673,7 +2676,7 @@ func (m *Histogram) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Bucket = append(m.Bucket, &Bucket{})
m.Bucket = append(m.Bucket, Bucket{})
if err := m.Bucket[len(m.Bucket)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -2780,7 +2783,7 @@ func (m *Histogram) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.NegativeSpan = append(m.NegativeSpan, &BucketSpan{})
m.NegativeSpan = append(m.NegativeSpan, BucketSpan{})
if err := m.NegativeSpan[len(m.NegativeSpan)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -2946,7 +2949,7 @@ func (m *Histogram) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.PositiveSpan = append(m.PositiveSpan, &BucketSpan{})
m.PositiveSpan = append(m.PositiveSpan, BucketSpan{})
if err := m.PositiveSpan[len(m.PositiveSpan)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -3382,7 +3385,7 @@ func (m *Exemplar) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Label = append(m.Label, &LabelPair{})
m.Label = append(m.Label, LabelPair{})
if err := m.Label[len(m.Label)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -3514,7 +3517,7 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Label = append(m.Label, &LabelPair{})
m.Label = append(m.Label, LabelPair{})
if err := m.Label[len(m.Label)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -3881,7 +3884,7 @@ func (m *MetricFamily) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Metric = append(m.Metric, &Metric{})
m.Metric = append(m.Metric, Metric{})
if err := m.Metric[len(m.Metric)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -21,6 +21,7 @@ syntax = "proto3";
package io.prometheus.client;
option go_package = "io_prometheus_client";

import "gogoproto/gogo.proto";
import "google/protobuf/timestamp.proto";

message LabelPair {

@@ -58,9 +59,9 @@ message Quantile {
}

message Summary {
uint64 sample_count = 1;
double sample_sum = 2;
repeated Quantile quantile = 3;
uint64 sample_count = 1;
double sample_sum = 2;
repeated Quantile quantile = 3 [(gogoproto.nullable) = false];
}

message Untyped {

@@ -72,7 +73,7 @@ message Histogram {
double sample_count_float = 4; // Overrides sample_count if > 0.
double sample_sum = 2;
// Buckets for the conventional histogram.
repeated Bucket bucket = 3; // Ordered in increasing order of upper_bound, +Inf bucket is optional.
repeated Bucket bucket = 3 [(gogoproto.nullable) = false]; // Ordered in increasing order of upper_bound, +Inf bucket is optional.

// Everything below here is for native histograms (also known as sparse histograms).
// Native histograms are an experimental feature without stability guarantees.

@@ -88,20 +89,20 @@ message Histogram {
double zero_count_float = 8; // Overrides sb_zero_count if > 0.

// Negative buckets for the native histogram.
repeated BucketSpan negative_span = 9;
repeated BucketSpan negative_span = 9 [(gogoproto.nullable) = false];
// Use either "negative_delta" or "negative_count", the former for
// regular histograms with integer counts, the latter for float
// histograms.
repeated sint64 negative_delta = 10; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
repeated double negative_count = 11; // Absolute count of each bucket.
repeated sint64 negative_delta = 10; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
repeated double negative_count = 11; // Absolute count of each bucket.

// Positive buckets for the native histogram.
repeated BucketSpan positive_span = 12;
repeated BucketSpan positive_span = 12 [(gogoproto.nullable) = false];
// Use either "positive_delta" or "positive_count", the former for
// regular histograms with integer counts, the latter for float
// histograms.
repeated sint64 positive_delta = 13; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
repeated double positive_count = 14; // Absolute count of each bucket.
repeated sint64 positive_delta = 13; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
repeated double positive_count = 14; // Absolute count of each bucket.
}

message Bucket {

@@ -123,24 +124,24 @@ message BucketSpan {
}

message Exemplar {
repeated LabelPair label = 1;
repeated LabelPair label = 1 [(gogoproto.nullable) = false];
double value = 2;
google.protobuf.Timestamp timestamp = 3; // OpenMetrics-style.
}

message Metric {
repeated LabelPair label = 1;
Gauge gauge = 2;
Counter counter = 3;
Summary summary = 4;
Untyped untyped = 5;
Histogram histogram = 7;
int64 timestamp_ms = 6;
repeated LabelPair label = 1 [(gogoproto.nullable) = false];
Gauge gauge = 2;
Counter counter = 3;
Summary summary = 4;
Untyped untyped = 5;
Histogram histogram = 7;
int64 timestamp_ms = 6;
}

message MetricFamily {
string name = 1;
string help = 2;
MetricType type = 3;
repeated Metric metric = 4;
repeated Metric metric = 4 [(gogoproto.nullable) = false];
}

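The recurring edit in this proto file is the addition of [(gogoproto.nullable) = false] to repeated message fields. With gogo protobuf, that option makes the generated Go code use value slices ([]Quantile, []Bucket, []LabelPair, []Metric) instead of pointer slices, which is exactly the change visible in the regenerated metrics.pb.go above: one fewer heap allocation and pointer chase per element, and no nil checks when iterating. The same option is applied to the native-histogram span fields of the remote-write protobuf later in this commit. Illustrative only (hypothetical caller code, not part of the diff):

// With nullable=false, GetQuantile returns []Quantile rather than
// []*Quantile, so callers can range over values directly.
var sum float64
for _, q := range summary.GetQuantile() {
	sum += q.GetValue() // no nil check or dereference needed
}
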
@@ -383,14 +383,14 @@ type Histogram struct {
// *Histogram_ZeroCountFloat
ZeroCount isHistogram_ZeroCount `protobuf_oneof:"zero_count"`
// Negative Buckets.
NegativeSpans []*BucketSpan `protobuf:"bytes,8,rep,name=negative_spans,json=negativeSpans,proto3" json:"negative_spans,omitempty"`
NegativeSpans []BucketSpan `protobuf:"bytes,8,rep,name=negative_spans,json=negativeSpans,proto3" json:"negative_spans"`
// Use either "negative_deltas" or "negative_counts", the former for
// regular histograms with integer counts, the latter for float
// histograms.
NegativeDeltas []int64 `protobuf:"zigzag64,9,rep,packed,name=negative_deltas,json=negativeDeltas,proto3" json:"negative_deltas,omitempty"`
NegativeCounts []float64 `protobuf:"fixed64,10,rep,packed,name=negative_counts,json=negativeCounts,proto3" json:"negative_counts,omitempty"`
// Positive Buckets.
PositiveSpans []*BucketSpan `protobuf:"bytes,11,rep,name=positive_spans,json=positiveSpans,proto3" json:"positive_spans,omitempty"`
PositiveSpans []BucketSpan `protobuf:"bytes,11,rep,name=positive_spans,json=positiveSpans,proto3" json:"positive_spans"`
// Use either "positive_deltas" or "positive_counts", the former for
// regular histograms with integer counts, the latter for float
// histograms.

@@ -529,7 +529,7 @@ func (m *Histogram) GetZeroCountFloat() float64 {
return 0
}

func (m *Histogram) GetNegativeSpans() []*BucketSpan {
func (m *Histogram) GetNegativeSpans() []BucketSpan {
if m != nil {
return m.NegativeSpans
}

@@ -550,7 +550,7 @@ func (m *Histogram) GetNegativeCounts() []float64 {
return nil
}

func (m *Histogram) GetPositiveSpans() []*BucketSpan {
func (m *Histogram) GetPositiveSpans() []BucketSpan {
if m != nil {
return m.PositiveSpans
}

@@ -1143,75 +1143,75 @@ func init() {
func init() { proto.RegisterFile("types.proto", fileDescriptor_d938547f84707355) }

var fileDescriptor_d938547f84707355 = []byte{
// 1075 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x56, 0xdd, 0x8e, 0xdb, 0x44,
0x14, 0x5e, 0xdb, 0x89, 0x13, 0x9f, 0xfc, 0xd4, 0x3b, 0xda, 0x16, 0x53, 0xd1, 0x6d, 0xb0, 0x54,
0x88, 0x10, 0xca, 0xaa, 0x85, 0x0b, 0x2a, 0x0a, 0xd2, 0x6e, 0xc9, 0xfe, 0x88, 0x26, 0x51, 0x27,
0x59, 0x41, 0xb9, 0x89, 0x66, 0x93, 0xd9, 0xc4, 0xaa, 0xff, 0xf0, 0x4c, 0xaa, 0x0d, 0xef, 0xc1,
0x1d, 0x2f, 0xc1, 0x3d, 0x12, 0xb7, 0xbd, 0xe4, 0x09, 0x10, 0xda, 0x2b, 0x1e, 0x03, 0xcd, 0xb1,
0x1d, 0x3b, 0xdd, 0x82, 0x54, 0xee, 0xe6, 0x7c, 0xe7, 0x3b, 0x33, 0x9f, 0xe7, 0xfc, 0x8c, 0xa1,
0x21, 0xd7, 0x31, 0x17, 0xbd, 0x38, 0x89, 0x64, 0x44, 0x20, 0x4e, 0xa2, 0x80, 0xcb, 0x25, 0x5f,
0x89, 0xbb, 0x7b, 0x8b, 0x68, 0x11, 0x21, 0x7c, 0xa0, 0x56, 0x29, 0xc3, 0xfd, 0x45, 0x87, 0xf6,
0x80, 0xcb, 0xc4, 0x9b, 0x0d, 0xb8, 0x64, 0x73, 0x26, 0x19, 0x79, 0x0c, 0x15, 0xb5, 0x87, 0xa3,
0x75, 0xb4, 0x6e, 0xfb, 0xd1, 0x83, 0x5e, 0xb1, 0x47, 0x6f, 0x9b, 0x99, 0x99, 0x93, 0x75, 0xcc,
0x29, 0x86, 0x90, 0x4f, 0x81, 0x04, 0x88, 0x4d, 0x2f, 0x59, 0xe0, 0xf9, 0xeb, 0x69, 0xc8, 0x02,
0xee, 0xe8, 0x1d, 0xad, 0x6b, 0x51, 0x3b, 0xf5, 0x1c, 0xa3, 0x63, 0xc8, 0x02, 0x4e, 0x08, 0x54,
0x96, 0xdc, 0x8f, 0x9d, 0x0a, 0xfa, 0x71, 0xad, 0xb0, 0x55, 0xe8, 0x49, 0xa7, 0x9a, 0x62, 0x6a,
0xed, 0xae, 0x01, 0x8a, 0x93, 0x48, 0x03, 0x6a, 0xe7, 0xc3, 0x6f, 0x87, 0xa3, 0xef, 0x86, 0xf6,
0x8e, 0x32, 0x9e, 0x8e, 0xce, 0x87, 0x93, 0x3e, 0xb5, 0x35, 0x62, 0x41, 0xf5, 0xe4, 0xf0, 0xfc,
0xa4, 0x6f, 0xeb, 0xa4, 0x05, 0xd6, 0xe9, 0xd9, 0x78, 0x32, 0x3a, 0xa1, 0x87, 0x03, 0xdb, 0x20,
0x04, 0xda, 0xe8, 0x29, 0xb0, 0x8a, 0x0a, 0x1d, 0x9f, 0x0f, 0x06, 0x87, 0xf4, 0x85, 0x5d, 0x25,
0x75, 0xa8, 0x9c, 0x0d, 0x8f, 0x47, 0xb6, 0x49, 0x9a, 0x50, 0x1f, 0x4f, 0x0e, 0x27, 0xfd, 0x71,
0x7f, 0x62, 0xd7, 0xdc, 0x27, 0x60, 0x8e, 0x59, 0x10, 0xfb, 0x9c, 0xec, 0x41, 0xf5, 0x15, 0xf3,
0x57, 0xe9, 0xb5, 0x68, 0x34, 0x35, 0xc8, 0x07, 0x60, 0x49, 0x2f, 0xe0, 0x42, 0xb2, 0x20, 0xc6,
0xef, 0x34, 0x68, 0x01, 0xb8, 0x11, 0xd4, 0xfb, 0x57, 0x3c, 0x88, 0x7d, 0x96, 0x90, 0x03, 0x30,
0x7d, 0x76, 0xc1, 0x7d, 0xe1, 0x68, 0x1d, 0xa3, 0xdb, 0x78, 0xb4, 0x5b, 0xbe, 0xd7, 0x67, 0xca,
0x73, 0x54, 0x79, 0xfd, 0xe7, 0xfd, 0x1d, 0x9a, 0xd1, 0x8a, 0x03, 0xf5, 0x7f, 0x3d, 0xd0, 0x78,
0xf3, 0xc0, 0xdf, 0xab, 0x60, 0x9d, 0x7a, 0x42, 0x46, 0x8b, 0x84, 0x05, 0xe4, 0x1e, 0x58, 0xb3,
0x68, 0x15, 0xca, 0xa9, 0x17, 0x4a, 0x94, 0x5d, 0x39, 0xdd, 0xa1, 0x75, 0x84, 0xce, 0x42, 0x49,
0x3e, 0x84, 0x46, 0xea, 0xbe, 0xf4, 0x23, 0x26, 0xd3, 0x63, 0x4e, 0x77, 0x28, 0x20, 0x78, 0xac,
0x30, 0x62, 0x83, 0x21, 0x56, 0x01, 0x9e, 0xa3, 0x51, 0xb5, 0x24, 0x77, 0xc0, 0x14, 0xb3, 0x25,
0x0f, 0x18, 0x66, 0x6d, 0x97, 0x66, 0x16, 0x79, 0x00, 0xed, 0x9f, 0x78, 0x12, 0x4d, 0xe5, 0x32,
0xe1, 0x62, 0x19, 0xf9, 0x73, 0xcc, 0xa0, 0x46, 0x5b, 0x0a, 0x9d, 0xe4, 0x20, 0xf9, 0x28, 0xa3,
0x15, 0xba, 0x4c, 0xd4, 0xa5, 0xd1, 0xa6, 0xc2, 0x9f, 0xe6, 0xda, 0x3e, 0x01, 0xbb, 0xc4, 0x4b,
0x05, 0xd6, 0x50, 0xa0, 0x46, 0xdb, 0x1b, 0x66, 0x2a, 0xf2, 0x2b, 0x68, 0x87, 0x7c, 0xc1, 0xa4,
0xf7, 0x8a, 0x4f, 0x45, 0xcc, 0x42, 0xe1, 0xd4, 0xf1, 0x86, 0xef, 0x94, 0x6f, 0xf8, 0x68, 0x35,
0x7b, 0xc9, 0xe5, 0x38, 0x66, 0x21, 0x6d, 0xe5, 0x6c, 0x65, 0x09, 0xf2, 0x31, 0xdc, 0xda, 0x84,
0xcf, 0xb9, 0x2f, 0x99, 0x70, 0xac, 0x8e, 0xd1, 0x25, 0x74, 0xb3, 0xeb, 0x37, 0x88, 0x6e, 0x11,
0x51, 0x97, 0x70, 0xa0, 0x63, 0x74, 0xb5, 0x82, 0x88, 0xa2, 0x84, 0x12, 0x14, 0x47, 0xc2, 0x2b,
0x09, 0x6a, 0xfc, 0xb7, 0xa0, 0x9c, 0xbd, 0x11, 0xb4, 0x09, 0xcf, 0x04, 0x35, 0x53, 0x41, 0x39,
0x5c, 0x08, 0xda, 0x10, 0x33, 0x41, 0xad, 0x54, 0x50, 0x0e, 0x67, 0x82, 0xbe, 0x06, 0x48, 0xb8,
0xe0, 0x72, 0xba, 0x54, 0x37, 0xde, 0xc6, 0xbe, 0xbe, 0x5f, 0x16, 0xb3, 0xa9, 0x99, 0x1e, 0x55,
0xbc, 0x53, 0x2f, 0x94, 0xd4, 0x4a, 0xf2, 0xe5, 0x76, 0xd1, 0xdd, 0x7a, 0xb3, 0xe8, 0x3e, 0x07,
0x6b, 0x13, 0xb5, 0xdd, 0x9d, 0x35, 0x30, 0x5e, 0xf4, 0xc7, 0xb6, 0x46, 0x4c, 0xd0, 0x87, 0x23,
0x5b, 0x2f, 0x3a, 0xd4, 0x38, 0xaa, 0x41, 0x15, 0x35, 0x1f, 0x35, 0x01, 0x8a, 0x54, 0xbb, 0x4f,
0x00, 0x8a, 0x9b, 0x51, 0xd5, 0x16, 0x5d, 0x5e, 0x0a, 0x9e, 0x96, 0xef, 0x2e, 0xcd, 0x2c, 0x85,
0xfb, 0x3c, 0x5c, 0xc8, 0x25, 0x56, 0x6d, 0x8b, 0x66, 0x96, 0xfb, 0xb7, 0x06, 0x30, 0xf1, 0x02,
0x3e, 0xe6, 0x89, 0xc7, 0xc5, 0xbb, 0xf7, 0xdc, 0x23, 0xa8, 0x09, 0x6c, 0x77, 0xe1, 0xe8, 0x18,
0x41, 0xca, 0x11, 0xe9, 0x24, 0xc8, 0x42, 0x72, 0x22, 0xf9, 0x02, 0x2c, 0x9e, 0x35, 0xb9, 0x70,
0x0c, 0x8c, 0xda, 0x2b, 0x47, 0xe5, 0x13, 0x20, 0x8b, 0x2b, 0xc8, 0xe4, 0x4b, 0x80, 0x65, 0x7e,
0xf1, 0xc2, 0xa9, 0x60, 0xe8, 0xed, 0xb7, 0xa6, 0x25, 0x8b, 0x2d, 0xd1, 0xdd, 0x87, 0x50, 0xc5,
0x2f, 0x50, 0x13, 0x13, 0xa7, 0xac, 0x96, 0x4e, 0x4c, 0xb5, 0xde, 0x9e, 0x1d, 0x56, 0x36, 0x3b,
0xdc, 0xc7, 0x60, 0x3e, 0x4b, 0xbf, 0xf3, 0x5d, 0x2f, 0xc6, 0xfd, 0x59, 0x83, 0x26, 0xe2, 0x03,
0x26, 0x67, 0x4b, 0x9e, 0x90, 0x87, 0x5b, 0x8f, 0xc4, 0xbd, 0x1b, 0xf1, 0x19, 0xaf, 0x57, 0x7a,
0x1c, 0x72, 0xa1, 0xfa, 0xdb, 0x84, 0x1a, 0x65, 0xa1, 0x5d, 0xa8, 0xe0, 0xa8, 0x37, 0x41, 0xef,
0x3f, 0x4f, 0xeb, 0x68, 0xd8, 0x7f, 0x9e, 0xd6, 0x11, 0x55, 0xe3, 0x5d, 0x01, 0xb4, 0x6f, 0x1b,
0xee, 0xaf, 0x9a, 0x2a, 0x3e, 0x36, 0x57, 0xb5, 0x27, 0xc8, 0x7b, 0x50, 0x13, 0x92, 0xc7, 0xd3,
0x40, 0xa0, 0x2e, 0x83, 0x9a, 0xca, 0x1c, 0x08, 0x75, 0xf4, 0xe5, 0x2a, 0x9c, 0xe5, 0x47, 0xab,
0x35, 0x79, 0x1f, 0xea, 0x42, 0xb2, 0x44, 0x2a, 0x76, 0x3a, 0x48, 0x6b, 0x68, 0x0f, 0x04, 0xb9,
0x0d, 0x26, 0x0f, 0xe7, 0x53, 0x4c, 0x8a, 0x72, 0x54, 0x79, 0x38, 0x1f, 0x08, 0x72, 0x17, 0xea,
0x8b, 0x24, 0x5a, 0xc5, 0x5e, 0xb8, 0x70, 0xaa, 0x1d, 0xa3, 0x6b, 0xd1, 0x8d, 0x4d, 0xda, 0xa0,
0x5f, 0xac, 0x71, 0x98, 0xd5, 0xa9, 0x7e, 0xb1, 0x56, 0xbb, 0x27, 0x2c, 0x5c, 0x70, 0xb5, 0x49,
0x2d, 0xdd, 0x1d, 0xed, 0x81, 0x70, 0x7f, 0xd3, 0xa0, 0xfa, 0x74, 0xb9, 0x0a, 0x5f, 0x92, 0x7d,
0x68, 0x04, 0x5e, 0x38, 0x55, 0xad, 0x54, 0x68, 0xb6, 0x02, 0x2f, 0x54, 0x35, 0x3c, 0x10, 0xe8,
0x67, 0x57, 0x1b, 0x7f, 0xf6, 0xbe, 0x04, 0xec, 0x2a, 0xf3, 0xf7, 0xb2, 0x24, 0x18, 0x98, 0x84,
0xbb, 0xe5, 0x24, 0xe0, 0x01, 0xbd, 0x7e, 0x38, 0x8b, 0xe6, 0x5e, 0xb8, 0x28, 0x32, 0xa0, 0xde,
0x6d, 0xfc, 0xaa, 0x26, 0xc5, 0xb5, 0x7b, 0x00, 0xf5, 0x9c, 0x75, 0xa3, 0x79, 0xbf, 0x1f, 0xa9,
0x67, 0x75, 0xeb, 0x2d, 0xd5, 0xdd, 0x1f, 0xa1, 0x85, 0x9b, 0xf3, 0xf9, 0xff, 0xed, 0xb2, 0x03,
0x30, 0x67, 0x6a, 0x87, 0xbc, 0xc9, 0x76, 0x6f, 0x08, 0xcf, 0x03, 0x52, 0xda, 0xd1, 0xde, 0xeb,
0xeb, 0x7d, 0xed, 0x8f, 0xeb, 0x7d, 0xed, 0xaf, 0xeb, 0x7d, 0xed, 0x07, 0x53, 0xb1, 0xe3, 0x8b,
0x0b, 0x13, 0xff, 0x60, 0x3e, 0xfb, 0x27, 0x00, 0x00, 0xff, 0xff, 0x36, 0xd7, 0x1e, 0xb4, 0xf2,
0x08, 0x00, 0x00,
// 1081 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x56, 0xdb, 0x6e, 0xdb, 0x46,
0x13, 0x36, 0x49, 0x89, 0x12, 0x47, 0x87, 0xd0, 0x0b, 0x27, 0x3f, 0xff, 0xa0, 0x71, 0x54, 0x02,
0x69, 0x85, 0xa2, 0x90, 0x91, 0xb4, 0x17, 0x0d, 0x1a, 0x14, 0xb0, 0x5d, 0xf9, 0x80, 0x46, 0x12,
0xb2, 0x92, 0xd1, 0xa6, 0x37, 0xc2, 0x5a, 0x5a, 0x4b, 0x44, 0x78, 0x2a, 0x77, 0x15, 0x58, 0x7d,
0x8f, 0xde, 0xf5, 0x25, 0x7a, 0xdf, 0x07, 0x08, 0xd0, 0x9b, 0x3e, 0x41, 0x51, 0xf8, 0xaa, 0x8f,
0x51, 0xec, 0x90, 0x14, 0xa9, 0x38, 0x05, 0x9a, 0xde, 0xed, 0x7c, 0xf3, 0xcd, 0xec, 0xc7, 0xdd,
0x99, 0x59, 0x42, 0x43, 0xae, 0x63, 0x2e, 0x7a, 0x71, 0x12, 0xc9, 0x88, 0x40, 0x9c, 0x44, 0x01,
0x97, 0x4b, 0xbe, 0x12, 0xf7, 0xf7, 0x16, 0xd1, 0x22, 0x42, 0xf8, 0x40, 0xad, 0x52, 0x86, 0xfb,
0xb3, 0x0e, 0xed, 0x01, 0x97, 0x89, 0x37, 0x1b, 0x70, 0xc9, 0xe6, 0x4c, 0x32, 0xf2, 0x14, 0x2a,
0x2a, 0x87, 0xa3, 0x75, 0xb4, 0x6e, 0xfb, 0xc9, 0xa3, 0x5e, 0x91, 0xa3, 0xb7, 0xcd, 0xcc, 0xcc,
0xc9, 0x3a, 0xe6, 0x14, 0x43, 0xc8, 0xa7, 0x40, 0x02, 0xc4, 0xa6, 0x57, 0x2c, 0xf0, 0xfc, 0xf5,
0x34, 0x64, 0x01, 0x77, 0xf4, 0x8e, 0xd6, 0xb5, 0xa8, 0x9d, 0x7a, 0x4e, 0xd0, 0x31, 0x64, 0x01,
0x27, 0x04, 0x2a, 0x4b, 0xee, 0xc7, 0x4e, 0x05, 0xfd, 0xb8, 0x56, 0xd8, 0x2a, 0xf4, 0xa4, 0x53,
0x4d, 0x31, 0xb5, 0x76, 0xd7, 0x00, 0xc5, 0x4e, 0xa4, 0x01, 0xb5, 0x8b, 0xe1, 0x37, 0xc3, 0xd1,
0xb7, 0x43, 0x7b, 0x47, 0x19, 0xc7, 0xa3, 0x8b, 0xe1, 0xa4, 0x4f, 0x6d, 0x8d, 0x58, 0x50, 0x3d,
0x3d, 0xbc, 0x38, 0xed, 0xdb, 0x3a, 0x69, 0x81, 0x75, 0x76, 0x3e, 0x9e, 0x8c, 0x4e, 0xe9, 0xe1,
0xc0, 0x36, 0x08, 0x81, 0x36, 0x7a, 0x0a, 0xac, 0xa2, 0x42, 0xc7, 0x17, 0x83, 0xc1, 0x21, 0x7d,
0x69, 0x57, 0x49, 0x1d, 0x2a, 0xe7, 0xc3, 0x93, 0x91, 0x6d, 0x92, 0x26, 0xd4, 0xc7, 0x93, 0xc3,
0x49, 0x7f, 0xdc, 0x9f, 0xd8, 0x35, 0xf7, 0x19, 0x98, 0x63, 0x16, 0xc4, 0x3e, 0x27, 0x7b, 0x50,
0x7d, 0xcd, 0xfc, 0x55, 0x7a, 0x2c, 0x1a, 0x4d, 0x0d, 0xf2, 0x01, 0x58, 0xd2, 0x0b, 0xb8, 0x90,
0x2c, 0x88, 0xf1, 0x3b, 0x0d, 0x5a, 0x00, 0x6e, 0x04, 0xf5, 0xfe, 0x35, 0x0f, 0x62, 0x9f, 0x25,
0xe4, 0x00, 0x4c, 0x9f, 0x5d, 0x72, 0x5f, 0x38, 0x5a, 0xc7, 0xe8, 0x36, 0x9e, 0xec, 0x96, 0xcf,
0xf5, 0xb9, 0xf2, 0x1c, 0x55, 0xde, 0xfc, 0xf1, 0x70, 0x87, 0x66, 0xb4, 0x62, 0x43, 0xfd, 0x1f,
0x37, 0x34, 0xde, 0xde, 0xf0, 0xb7, 0x2a, 0x58, 0x67, 0x9e, 0x90, 0xd1, 0x22, 0x61, 0x01, 0x79,
0x00, 0xd6, 0x2c, 0x5a, 0x85, 0x72, 0xea, 0x85, 0x12, 0x65, 0x57, 0xce, 0x76, 0x68, 0x1d, 0xa1,
0xf3, 0x50, 0x92, 0x0f, 0xa1, 0x91, 0xba, 0xaf, 0xfc, 0x88, 0xc9, 0x74, 0x9b, 0xb3, 0x1d, 0x0a,
0x08, 0x9e, 0x28, 0x8c, 0xd8, 0x60, 0x88, 0x55, 0x80, 0xfb, 0x68, 0x54, 0x2d, 0xc9, 0x3d, 0x30,
0xc5, 0x6c, 0xc9, 0x03, 0x86, 0xb7, 0xb6, 0x4b, 0x33, 0x8b, 0x3c, 0x82, 0xf6, 0x8f, 0x3c, 0x89,
0xa6, 0x72, 0x99, 0x70, 0xb1, 0x8c, 0xfc, 0x39, 0xde, 0xa0, 0x46, 0x5b, 0x0a, 0x9d, 0xe4, 0x20,
0xf9, 0x28, 0xa3, 0x15, 0xba, 0x4c, 0xd4, 0xa5, 0xd1, 0xa6, 0xc2, 0x8f, 0x73, 0x6d, 0x9f, 0x80,
0x5d, 0xe2, 0xa5, 0x02, 0x6b, 0x28, 0x50, 0xa3, 0xed, 0x0d, 0x33, 0x15, 0x79, 0x0c, 0xed, 0x90,
0x2f, 0x98, 0xf4, 0x5e, 0xf3, 0xa9, 0x88, 0x59, 0x28, 0x9c, 0x3a, 0x9e, 0xf0, 0xbd, 0xf2, 0x09,
0x1f, 0xad, 0x66, 0xaf, 0xb8, 0x1c, 0xc7, 0x2c, 0xcc, 0x8e, 0xb9, 0x95, 0xc7, 0x28, 0x4c, 0x90,
0x8f, 0xe1, 0xce, 0x26, 0xc9, 0x9c, 0xfb, 0x92, 0x09, 0xc7, 0xea, 0x18, 0x5d, 0x42, 0x37, 0xb9,
0xbf, 0x46, 0x74, 0x8b, 0x88, 0xea, 0x84, 0x03, 0x1d, 0xa3, 0xab, 0x15, 0x44, 0x94, 0x26, 0x94,
0xac, 0x38, 0x12, 0x5e, 0x49, 0x56, 0xe3, 0xdf, 0xc8, 0xca, 0x63, 0x36, 0xb2, 0x36, 0x49, 0x32,
0x59, 0xcd, 0x54, 0x56, 0x0e, 0x17, 0xb2, 0x36, 0xc4, 0x4c, 0x56, 0x2b, 0x95, 0x95, 0xc3, 0x99,
0xac, 0xaf, 0x00, 0x12, 0x2e, 0xb8, 0x9c, 0x2e, 0xd5, 0xe9, 0xb7, 0xb1, 0xc7, 0x1f, 0x96, 0x25,
0x6d, 0xea, 0xa7, 0x47, 0x15, 0xef, 0xcc, 0x0b, 0x25, 0xb5, 0x92, 0x7c, 0xb9, 0x5d, 0x80, 0x77,
0xde, 0x2e, 0xc0, 0xcf, 0xc1, 0xda, 0x44, 0x6d, 0x77, 0x6a, 0x0d, 0x8c, 0x97, 0xfd, 0xb1, 0xad,
0x11, 0x13, 0xf4, 0xe1, 0xc8, 0xd6, 0x8b, 0x6e, 0x35, 0x8e, 0x6a, 0x50, 0x45, 0xcd, 0x47, 0x4d,
0x80, 0xe2, 0xda, 0xdd, 0x67, 0x00, 0xc5, 0xf9, 0xa8, 0xca, 0x8b, 0xae, 0xae, 0x04, 0x4f, 0x4b,
0x79, 0x97, 0x66, 0x96, 0xc2, 0x7d, 0x1e, 0x2e, 0xe4, 0x12, 0x2b, 0xb8, 0x45, 0x33, 0xcb, 0xfd,
0x4b, 0x03, 0x98, 0x78, 0x01, 0x1f, 0xf3, 0xc4, 0xe3, 0xe2, 0xfd, 0xfb, 0xef, 0x09, 0xd4, 0x04,
0xb6, 0xbe, 0x70, 0x74, 0x8c, 0x20, 0xe5, 0x88, 0x74, 0x2a, 0x64, 0x21, 0x39, 0x91, 0x7c, 0x01,
0x16, 0xcf, 0x1a, 0x5e, 0x38, 0x06, 0x46, 0xed, 0x95, 0xa3, 0xf2, 0x69, 0x90, 0xc5, 0x15, 0x64,
0xf2, 0x25, 0xc0, 0x32, 0x3f, 0x78, 0xe1, 0x54, 0x30, 0xf4, 0xee, 0x3b, 0xaf, 0x25, 0x8b, 0x2d,
0xd1, 0xdd, 0xc7, 0x50, 0xc5, 0x2f, 0x50, 0xd3, 0x13, 0x27, 0xae, 0x96, 0x4e, 0x4f, 0xb5, 0xde,
0x9e, 0x23, 0x56, 0x36, 0x47, 0xdc, 0xa7, 0x60, 0x3e, 0x4f, 0xbf, 0xf3, 0x7d, 0x0f, 0xc6, 0xfd,
0x49, 0x83, 0x26, 0xe2, 0x03, 0x26, 0x67, 0x4b, 0x9e, 0x90, 0xc7, 0x5b, 0x0f, 0xc6, 0x83, 0x5b,
0xf1, 0x19, 0xaf, 0x57, 0x7a, 0x28, 0x72, 0xa1, 0xfa, 0xbb, 0x84, 0x1a, 0x65, 0xa1, 0x5d, 0xa8,
0xe0, 0xd8, 0x37, 0x41, 0xef, 0xbf, 0x48, 0xeb, 0x68, 0xd8, 0x7f, 0x91, 0xd6, 0x11, 0x55, 0xa3,
0x5e, 0x01, 0xb4, 0x6f, 0x1b, 0xee, 0x2f, 0x9a, 0x2a, 0x3e, 0x36, 0x57, 0xb5, 0x27, 0xc8, 0xff,
0xa0, 0x26, 0x24, 0x8f, 0xa7, 0x81, 0x40, 0x5d, 0x06, 0x35, 0x95, 0x39, 0x10, 0x6a, 0xeb, 0xab,
0x55, 0x38, 0xcb, 0xb7, 0x56, 0x6b, 0xf2, 0x7f, 0xa8, 0x0b, 0xc9, 0x12, 0xa9, 0xd8, 0xe9, 0x50,
0xad, 0xa1, 0x3d, 0x10, 0xe4, 0x2e, 0x98, 0x3c, 0x9c, 0x4f, 0xf1, 0x52, 0x94, 0xa3, 0xca, 0xc3,
0xf9, 0x40, 0x90, 0xfb, 0x50, 0x5f, 0x24, 0xd1, 0x2a, 0xf6, 0xc2, 0x85, 0x53, 0xed, 0x18, 0x5d,
0x8b, 0x6e, 0x6c, 0xd2, 0x06, 0xfd, 0x72, 0x8d, 0x83, 0xad, 0x4e, 0xf5, 0xcb, 0xb5, 0xca, 0x9e,
0xb0, 0x70, 0xc1, 0x55, 0x92, 0x5a, 0x9a, 0x1d, 0xed, 0x81, 0x70, 0x7f, 0xd5, 0xa0, 0x7a, 0xbc,
0x5c, 0x85, 0xaf, 0xc8, 0x3e, 0x34, 0x02, 0x2f, 0x9c, 0xaa, 0x56, 0x2a, 0x34, 0x5b, 0x81, 0x17,
0xaa, 0x1a, 0x1e, 0x08, 0xf4, 0xb3, 0xeb, 0x8d, 0x3f, 0x7b, 0x6b, 0x02, 0x76, 0x9d, 0xf9, 0x7b,
0xd9, 0x25, 0x18, 0x78, 0x09, 0xf7, 0xcb, 0x97, 0x80, 0x1b, 0xf4, 0xfa, 0xe1, 0x2c, 0x9a, 0x7b,
0xe1, 0xa2, 0xb8, 0x01, 0xf5, 0x86, 0xe3, 0x57, 0x35, 0x29, 0xae, 0xdd, 0x03, 0xa8, 0xe7, 0xac,
0x5b, 0xcd, 0xfb, 0xdd, 0x48, 0x3d, 0xb1, 0x5b, 0xef, 0xaa, 0xee, 0xfe, 0x00, 0x2d, 0x4c, 0xce,
0xe7, 0xff, 0xb5, 0xcb, 0x0e, 0xc0, 0x9c, 0xa9, 0x0c, 0x79, 0x93, 0xed, 0xde, 0x12, 0x9e, 0x07,
0xa4, 0xb4, 0xa3, 0xbd, 0x37, 0x37, 0xfb, 0xda, 0xef, 0x37, 0xfb, 0xda, 0x9f, 0x37, 0xfb, 0xda,
0xf7, 0xa6, 0x62, 0xc7, 0x97, 0x97, 0x26, 0xfe, 0xcd, 0x7c, 0xf6, 0x77, 0x00, 0x00, 0x00, 0xff,
0xff, 0x53, 0x09, 0xe5, 0x37, 0xfe, 0x08, 0x00, 0x00,
}

func (m *MetricMetadata) Marshal() (dAtA []byte, err error) {

@@ -2903,7 +2903,7 @@ func (m *Histogram) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.NegativeSpans = append(m.NegativeSpans, &BucketSpan{})
m.NegativeSpans = append(m.NegativeSpans, BucketSpan{})
if err := m.NegativeSpans[len(m.NegativeSpans)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -3069,7 +3069,7 @@ func (m *Histogram) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.PositiveSpans = append(m.PositiveSpans, &BucketSpan{})
m.PositiveSpans = append(m.PositiveSpans, BucketSpan{})
if err := m.PositiveSpans[len(m.PositiveSpans)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}

@@ -86,9 +86,9 @@ message Histogram {
uint64 zero_count_int = 6;
double zero_count_float = 7;
}

// Negative Buckets.
repeated BucketSpan negative_spans = 8;
repeated BucketSpan negative_spans = 8 [(gogoproto.nullable) = false];
// Use either "negative_deltas" or "negative_counts", the former for
// regular histograms with integer counts, the latter for float
// histograms.

@@ -96,7 +96,7 @@ message Histogram {
repeated double negative_counts = 10; // Absolute count of each bucket.

// Positive Buckets.
repeated BucketSpan positive_spans = 11;
repeated BucketSpan positive_spans = 11 [(gogoproto.nullable) = false];
// Use either "positive_deltas" or "positive_counts", the former for
// regular histograms with integer counts, the latter for float
// histograms.

@@ -107,7 +107,7 @@ message Histogram {
// timestamp is in ms format, see model/timestamp/timestamp.go for
// conversion from time.Time to Prometheus timestamp.
int64 timestamp = 15;
}
}

// A BucketSpan defines a number of consecutive buckets with their
// offset. Logically, it would be more straightforward to include the

@@ -25,7 +25,7 @@ import (

"github.com/go-kit/log"

"github.com/prometheus/prometheus/util/stats"
"github.com/prometheus/prometheus/tsdb/tsdbutil"

"github.com/stretchr/testify/require"
"go.uber.org/goleak"

@@ -35,7 +35,7 @@ import (
"github.com/prometheus/prometheus/model/timestamp"
"github.com/prometheus/prometheus/promql/parser"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/util/stats"
)

func TestMain(m *testing.M) {

@@ -3162,7 +3162,7 @@ func TestNativeHistogramRate(t *testing.T) {
lbls := labels.FromStrings("__name__", seriesName)

app := test.Storage().Appender(context.TODO())
for i, h := range tsdb.GenerateTestHistograms(100) {
for i, h := range tsdbutil.GenerateTestHistograms(100) {
_, err := app.AppendHistogram(0, lbls, int64(i)*int64(15*time.Second/time.Millisecond), h, nil)
require.NoError(t, err)
}

@@ -3207,7 +3207,7 @@ func TestNativeFloatHistogramRate(t *testing.T) {
lbls := labels.FromStrings("__name__", seriesName)

app := test.Storage().Appender(context.TODO())
for i, fh := range tsdb.GenerateTestFloatHistograms(100) {
for i, fh := range tsdbutil.GenerateTestFloatHistograms(100) {
_, err := app.AppendHistogram(0, lbls, int64(i)*int64(15*time.Second/time.Millisecond), nil, fh)
require.NoError(t, err)
}

@@ -82,7 +82,7 @@ func (s Series) String() string {
func (s Series) MarshalJSON() ([]byte, error) {
// Note that this is rather inefficient because it re-creates the whole
// series, just separated by Histogram Points and Value Points. For API
// purposes, there is a more efficcient jsoniter implementation in
// purposes, there is a more efficient jsoniter implementation in
// web/api/v1/api.go.
series := struct {
M labels.Labels `json:"metric"`

@@ -322,6 +322,8 @@ const resolvedRetention = 15 * time.Minute
// Eval evaluates the rule expression and then creates pending alerts and fires
// or removes previously pending alerts accordingly.
func (r *AlertingRule) Eval(ctx context.Context, evalDelay time.Duration, ts time.Time, query QueryFunc, externalURL *url.URL, limit int) (promql.Vector, error) {
ctx = NewOriginContext(ctx, NewRuleDetail(r))

res, err := query(ctx, r.vector.String(), ts.Add(-evalDelay))
if err != nil {
return nil, err

@@ -895,3 +895,41 @@ func TestPendingAndKeepFiringFor(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, len(res))
}

// TestAlertingEvalWithOrigin checks that the alerting rule details are passed through the context.
func TestAlertingEvalWithOrigin(t *testing.T) {
ctx := context.Background()
now := time.Now()

const (
name = "my-recording-rule"
query = `count(metric{foo="bar"}) > 0`
)
var (
detail RuleDetail
lbs = labels.FromStrings("test", "test")
)

expr, err := parser.ParseExpr(query)
require.NoError(t, err)

rule := NewAlertingRule(
name,
expr,
time.Second,
time.Minute,
lbs,
labels.EmptyLabels(),
labels.EmptyLabels(),
"",
true, log.NewNopLogger(),
)

_, err = rule.Eval(ctx, 0, now, func(ctx context.Context, qs string, _ time.Time) (promql.Vector, error) {
detail = FromOriginContext(ctx)
return nil, nil
}, nil, 0)

require.NoError(t, err)
require.Equal(t, detail, NewRuleDetail(rule))
}

@@ -40,8 +40,8 @@ import (
"github.com/prometheus/prometheus/promql"
"github.com/prometheus/prometheus/promql/parser"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/tsdb/tsdbutil"
"github.com/prometheus/prometheus/util/teststorage"
)

@@ -1815,7 +1815,7 @@ func TestNativeHistogramsInRecordingRules(t *testing.T) {

// Add some histograms.
db := suite.TSDB()
hists := tsdb.GenerateTestHistograms(5)
hists := tsdbutil.GenerateTestHistograms(5)
ts := time.Now()
app := db.Appender(context.Background())
for i, h := range hists {

69
rules/origin.go
Normal file

@@ -0,0 +1,69 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
"context"
"fmt"

"github.com/prometheus/prometheus/model/labels"
)

type ruleOrigin struct{}

// RuleDetail contains information about the rule that is being evaluated.
type RuleDetail struct {
Name string
Query string
Labels labels.Labels
Kind string
}

const (
KindAlerting = "alerting"
KindRecording = "recording"
)

// NewRuleDetail creates a RuleDetail from a given Rule.
func NewRuleDetail(r Rule) RuleDetail {
var kind string
switch r.(type) {
case *AlertingRule:
kind = KindAlerting
case *RecordingRule:
kind = KindRecording
default:
panic(fmt.Sprintf(`unknown rule type "%T"`, r))
}

return RuleDetail{
Name: r.Name(),
Query: r.Query().String(),
Labels: r.Labels(),
Kind: kind,
}
}

// NewOriginContext returns a new context with data about the origin attached.
func NewOriginContext(ctx context.Context, rule RuleDetail) context.Context {
return context.WithValue(ctx, ruleOrigin{}, rule)
}

// FromOriginContext returns the RuleDetail origin data from the context.
func FromOriginContext(ctx context.Context) RuleDetail {
if rule, ok := ctx.Value(ruleOrigin{}).(RuleDetail); ok {
return rule
}
return RuleDetail{}
}

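The new origin.go lets downstream code discover, from the context alone, which rule triggered a query; the Eval methods above attach the detail, and the tests below read it back. A hypothetical wrapper around a QueryFunc shows the intended consumption pattern (illustrative only; instrumentedQueryFunc is not part of this commit, though QueryFunc, log, and level are real types used by this package):

// instrumentedQueryFunc logs which rule, if any, originated each query.
func instrumentedQueryFunc(logger log.Logger, inner QueryFunc) QueryFunc {
	return func(ctx context.Context, qs string, ts time.Time) (promql.Vector, error) {
		if detail := FromOriginContext(ctx); detail.Name != "" {
			level.Debug(logger).Log("msg", "evaluating query", "kind", detail.Kind, "rule", detail.Name)
		}
		return inner(ctx, qs, ts)
	}
}
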
51
rules/origin_test.go
Normal file
51
rules/origin_test.go
Normal file
|
@ -0,0 +1,51 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"context"
	"net/url"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql"
	"github.com/prometheus/prometheus/promql/parser"
)

type unknownRule struct{}

func (u unknownRule) Name() string          { return "" }
func (u unknownRule) Labels() labels.Labels { return nil }
func (u unknownRule) Eval(ctx context.Context, evalDelay time.Duration, time time.Time, queryFunc QueryFunc, url *url.URL, i int) (promql.Vector, error) {
	return nil, nil
}
func (u unknownRule) String() string                               { return "" }
func (u unknownRule) Query() parser.Expr                           { return nil }
func (u unknownRule) SetLastError(err error)                       {}
func (u unknownRule) LastError() error                             { return nil }
func (u unknownRule) SetHealth(health RuleHealth)                  {}
func (u unknownRule) Health() RuleHealth                           { return "" }
func (u unknownRule) SetEvaluationDuration(duration time.Duration) {}
func (u unknownRule) GetEvaluationDuration() time.Duration         { return 0 }
func (u unknownRule) SetEvaluationTimestamp(time time.Time)        {}
func (u unknownRule) GetEvaluationTimestamp() time.Time            { return time.Time{} }

func TestNewRuleDetailPanics(t *testing.T) {
	require.PanicsWithValue(t, `unknown rule type "rules.unknownRule"`, func() {
		NewRuleDetail(unknownRule{})
	})
}
rules/recording.go
@@ -73,6 +73,8 @@ func (rule *RecordingRule) Labels() labels.Labels {

// Eval evaluates the rule and then overrides the metric names and labels accordingly.
func (rule *RecordingRule) Eval(ctx context.Context, evalDelay time.Duration, ts time.Time, query QueryFunc, _ *url.URL, limit int) (promql.Vector, error) {
+	ctx = NewOriginContext(ctx, NewRuleDetail(rule))
+
	vector, err := query(ctx, rule.vector.String(), ts.Add(-evalDelay))
	if err != nil {
		return nil, err
rules/recording_test.go
@@ -156,3 +156,31 @@ func TestRecordingRuleLimit(t *testing.T) {
		}
	}
}
+
+// TestRecordingEvalWithOrigin checks that the recording rule details are passed through the context.
+func TestRecordingEvalWithOrigin(t *testing.T) {
+	ctx := context.Background()
+	now := time.Now()
+
+	const (
+		name  = "my-recording-rule"
+		query = `count(metric{foo="bar"})`
+	)
+
+	var (
+		detail RuleDetail
+		lbs    = labels.FromStrings("foo", "bar")
+	)
+
+	expr, err := parser.ParseExpr(query)
+	require.NoError(t, err)
+
+	rule := NewRecordingRule(name, expr, lbs)
+	_, err = rule.Eval(ctx, 0, now, func(ctx context.Context, qs string, _ time.Time) (promql.Vector, error) {
+		detail = FromOriginContext(ctx)
+		return nil, nil
+	}, nil, 0)
+
+	require.NoError(t, err)
+	require.Equal(t, detail, NewRuleDetail(rule))
+}
scrape/manager.go
@@ -270,8 +270,13 @@ func (m *Manager) ApplyConfig(cfg *config.Config) error {
	m.mtxScrape.Lock()
	defer m.mtxScrape.Unlock()

+	scfgs, err := cfg.GetScrapeConfigs()
+	if err != nil {
+		return err
+	}
+
	c := make(map[string]*config.ScrapeConfig)
-	for _, scfg := range cfg.ScrapeConfigs {
+	for _, scfg := range scfgs {
		c[scfg.JobName] = scfg
	}
	m.scrapeConfigs = c
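ApplyConfig now goes through cfg.GetScrapeConfigs(), which validates and expands the scrape configuration and can fail, so a bad job definition surfaces as a reload error instead of being silently accepted. A minimal sketch, assuming the config.Load signature of this release (the function name and YAML input are illustrative):

import (
	"fmt"

	"github.com/go-kit/log"
	"github.com/prometheus/prometheus/config"
)

// validateScrapeConfigs is a hypothetical helper mirroring the new check.
func validateScrapeConfigs(yamlText string) error {
	cfg, err := config.Load(yamlText, false, log.NewNopLogger())
	if err != nil {
		return err
	}
	scfgs, err := cfg.GetScrapeConfigs() // validates and expands scrape configs
	if err != nil {
		return err // a bad job definition now fails the reload explicitly
	}
	for _, sc := range scfgs {
		fmt.Println("job:", sc.JobName)
	}
	return nil
}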
scrape/scrape.go
@@ -328,7 +328,7 @@ func newScrapePool(cfg *config.ScrapeConfig, app storage.Appendable, jitterSeed
			options.PassMetadataInContext,
		)
	}

	targetScrapePoolTargetLimit.WithLabelValues(sp.config.JobName).Set(float64(sp.config.TargetLimit))
	return sp, nil
}
@@ -75,12 +75,17 @@ function bumpVersion() {
	if [[ "${version}" == v* ]]; then
		version="${version:1}"
	fi
-	# increase the version on all packages
-	npm version "${version}" --workspaces
	# upgrade the @prometheus-io/* dependencies on all packages
	for workspace in ${workspaces}; do
-		sed -E -i "s|(\"@prometheus-io/.+\": )\".+\"|\1\"\^${version}\"|" "${workspace}"/package.json
+		# sed -i syntax is different on mac and linux
+		if [[ "$OSTYPE" == "darwin"* ]]; then
+			sed -E -i "" "s|(\"@prometheus-io/.+\": )\".+\"|\1\"${version}\"|" "${workspace}"/package.json
+		else
+			sed -E -i "s|(\"@prometheus-io/.+\": )\".+\"|\1\"${version}\"|" "${workspace}"/package.json
+		fi
	done
+	# increase the version on all packages
+	npm version "${version}" --workspaces
}

if [[ "$1" == "--copy" ]]; then
storage/merge_test.go
@@ -23,7 +23,6 @@ import (

	"github.com/stretchr/testify/require"

-	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/tsdbutil"

@@ -387,18 +386,11 @@ func TestCompactingChunkSeriesMerger(t *testing.T) {

	// histogramSample returns a histogram that is unique to the ts.
	histogramSample := func(ts int64) sample {
-		idx := ts + 1
-		return sample{t: ts, h: &histogram.Histogram{
-			Schema:          2,
-			ZeroThreshold:   0.001,
-			ZeroCount:       2 * uint64(idx),
-			Count:           5 * uint64(idx),
-			Sum:             12.34 * float64(idx),
-			PositiveSpans:   []histogram.Span{{Offset: 1, Length: 2}, {Offset: 2, Length: 1}},
-			NegativeSpans:   []histogram.Span{{Offset: 2, Length: 1}, {Offset: 1, Length: 2}},
-			PositiveBuckets: []int64{1 * idx, -1 * idx, 3 * idx},
-			NegativeBuckets: []int64{1 * idx, 2 * idx, 3 * idx},
-		}}
+		return sample{t: ts, h: tsdbutil.GenerateTestHistogram(int(ts + 1))}
	}

+	floatHistogramSample := func(ts int64) sample {
+		return sample{t: ts, fh: tsdbutil.GenerateTestFloatHistogram(int(ts + 1))}
+	}
+
	for _, tc := range []struct {
@@ -529,6 +521,46 @@ func TestCompactingChunkSeriesMerger(t *testing.T) {
				[]tsdbutil.Sample{histogramSample(15)},
			),
		},
+		{
+			name: "float histogram chunks overlapping",
+			input: []ChunkSeries{
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{floatHistogramSample(0), floatHistogramSample(5)}, []tsdbutil.Sample{floatHistogramSample(10), floatHistogramSample(15)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{floatHistogramSample(2), floatHistogramSample(20)}, []tsdbutil.Sample{floatHistogramSample(25), floatHistogramSample(30)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{floatHistogramSample(18), floatHistogramSample(26)}, []tsdbutil.Sample{floatHistogramSample(31), floatHistogramSample(35)}),
+			},
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
+				[]tsdbutil.Sample{floatHistogramSample(0), floatHistogramSample(2), floatHistogramSample(5), floatHistogramSample(10), floatHistogramSample(15), floatHistogramSample(18), floatHistogramSample(20), floatHistogramSample(25), floatHistogramSample(26), floatHistogramSample(30)},
+				[]tsdbutil.Sample{floatHistogramSample(31), floatHistogramSample(35)},
+			),
+		},
+		{
+			name: "float histogram chunks overlapping with float chunks",
+			input: []ChunkSeries{
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{floatHistogramSample(0), floatHistogramSample(5)}, []tsdbutil.Sample{floatHistogramSample(10), floatHistogramSample(15)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{12, 12, nil, nil}}, []tsdbutil.Sample{sample{14, 14, nil, nil}}),
+			},
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
+				[]tsdbutil.Sample{floatHistogramSample(0)},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}},
+				[]tsdbutil.Sample{floatHistogramSample(5), floatHistogramSample(10)},
+				[]tsdbutil.Sample{sample{12, 12, nil, nil}, sample{14, 14, nil, nil}},
+				[]tsdbutil.Sample{floatHistogramSample(15)},
+			),
+		},
+		{
+			name: "float histogram chunks overlapping with histogram chunks",
+			input: []ChunkSeries{
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{floatHistogramSample(0), floatHistogramSample(5)}, []tsdbutil.Sample{floatHistogramSample(10), floatHistogramSample(15)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{histogramSample(1), histogramSample(12)}, []tsdbutil.Sample{histogramSample(14)}),
+			},
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
+				[]tsdbutil.Sample{floatHistogramSample(0)},
+				[]tsdbutil.Sample{histogramSample(1)},
+				[]tsdbutil.Sample{floatHistogramSample(5), floatHistogramSample(10)},
+				[]tsdbutil.Sample{histogramSample(12), histogramSample(14)},
+				[]tsdbutil.Sample{floatHistogramSample(15)},
+			),
+		},
	} {
		t.Run(tc.name, func(t *testing.T) {
			merged := m(tc.input...)
storage/remote/client_test.go
@@ -87,7 +87,7 @@ func TestStoreHTTPErrorHandling(t *testing.T) {
func TestClientRetryAfter(t *testing.T) {
	server := httptest.NewServer(
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			http.Error(w, longErrMessage, 429)
+			http.Error(w, longErrMessage, http.StatusTooManyRequests)
		}),
	)
	defer server.Close()
storage/remote/codec.go
@@ -559,7 +559,7 @@ func HistogramProtoToFloatHistogram(hp prompb.Histogram) *histogram.FloatHistogram
	}
}

-func spansProtoToSpans(s []*prompb.BucketSpan) []histogram.Span {
+func spansProtoToSpans(s []prompb.BucketSpan) []histogram.Span {
	spans := make([]histogram.Span, len(s))
	for i := 0; i < len(s); i++ {
		spans[i] = histogram.Span{Offset: s[i].Offset, Length: s[i].Length}

@@ -600,10 +600,10 @@ func FloatHistogramToHistogramProto(timestamp int64, fh *histogram.FloatHistogram
	}
}

-func spansToSpansProto(s []histogram.Span) []*prompb.BucketSpan {
-	spans := make([]*prompb.BucketSpan, len(s))
+func spansToSpansProto(s []histogram.Span) []prompb.BucketSpan {
+	spans := make([]prompb.BucketSpan, len(s))
	for i := 0; i < len(s); i++ {
-		spans[i] = &prompb.BucketSpan{Offset: s[i].Offset, Length: s[i].Length}
+		spans[i] = prompb.BucketSpan{Offset: s[i].Offset, Length: s[i].Length}
	}

	return spans
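The switch from []*prompb.BucketSpan to []prompb.BucketSpan follows the gogoproto nullable=false pattern: one contiguous backing array instead of a heap allocation and pointer indirection per span, and callers can index spans without nil checks. A small sketch under that assumption (the helper name is illustrative):

import (
	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/prompb"
)

// toProtoSpans mirrors the new value-slice conversion: plain value writes,
// no per-span allocations.
func toProtoSpans(src []histogram.Span) []prompb.BucketSpan {
	out := make([]prompb.BucketSpan, len(src))
	for i, s := range src {
		out[i] = prompb.BucketSpan{Offset: s.Offset, Length: s.Length}
	}
	return out
}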
@@ -399,7 +399,7 @@ func exampleHistogramProto() prompb.Histogram {
		Schema:        0,
		ZeroThreshold: 0,
		ZeroCount:     &prompb.Histogram_ZeroCountInt{ZeroCountInt: 0},
-		NegativeSpans: []*prompb.BucketSpan{
+		NegativeSpans: []prompb.BucketSpan{
			{
				Offset: 0,
				Length: 5,

@@ -414,7 +414,7 @@ func exampleHistogramProto() prompb.Histogram {
			},
		},
		NegativeDeltas: []int64{1, 2, -2, 1, -1, 0},
-		PositiveSpans: []*prompb.BucketSpan{
+		PositiveSpans: []prompb.BucketSpan{
			{
				Offset: 0,
				Length: 4,

@@ -497,7 +497,7 @@ func exampleFloatHistogramProto() prompb.Histogram {
		Schema:        0,
		ZeroThreshold: 0,
		ZeroCount:     &prompb.Histogram_ZeroCountFloat{ZeroCountFloat: 0},
-		NegativeSpans: []*prompb.BucketSpan{
+		NegativeSpans: []prompb.BucketSpan{
			{
				Offset: 0,
				Length: 5,

@@ -512,7 +512,7 @@ func exampleFloatHistogramProto() prompb.Histogram {
			},
		},
		NegativeCounts: []float64{1, 2, -2, 1, -1, 0},
-		PositiveSpans: []*prompb.BucketSpan{
+		PositiveSpans: []prompb.BucketSpan{
			{
				Offset: 0,
				Length: 4,
storage/remote/queue_manager.go
@@ -894,6 +894,10 @@ func (t *QueueManager) releaseLabels(ls labels.Labels) {
// processExternalLabels merges externalLabels into ls. If ls contains
// a label in externalLabels, the value in ls wins.
func processExternalLabels(ls labels.Labels, externalLabels []labels.Label) labels.Labels {
+	if len(externalLabels) == 0 {
+		return ls
+	}
+
	b := labels.NewScratchBuilder(ls.Len() + len(externalLabels))
	j := 0
	ls.Range(func(l labels.Label) {
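The added guard skips building a ScratchBuilder entirely when no external labels are configured, which is the common case on many remote-write paths. An in-package test sketch (hypothetical, not part of this diff) illustrating the documented merge semantics and the new fast path:

func TestProcessExternalLabelsFastPath(t *testing.T) {
	ls := labels.FromStrings("cluster", "a", "job", "node")
	// No external labels: the input comes back untouched, without allocating.
	require.Equal(t, ls, processExternalLabels(ls, nil))

	// On conflict the series label wins; new external labels are merged in.
	ext := []labels.Label{{Name: "cluster", Value: "b"}, {Name: "region", Value: "eu"}}
	require.Equal(t,
		labels.FromStrings("cluster", "a", "job", "node", "region", "eu"),
		processExternalLabels(ls, ext))
}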
tsdb/agent/db_test.go
@@ -54,10 +54,10 @@ func TestDB_InvalidSeries(t *testing.T) {
	})

	t.Run("Histograms", func(t *testing.T) {
-		_, err := app.AppendHistogram(0, labels.Labels{}, 0, tsdb.GenerateTestHistograms(1)[0], nil)
+		_, err := app.AppendHistogram(0, labels.Labels{}, 0, tsdbutil.GenerateTestHistograms(1)[0], nil)
		require.ErrorIs(t, err, tsdb.ErrInvalidSample, "should reject empty labels")

-		_, err = app.AppendHistogram(0, labels.FromStrings("a", "1", "a", "2"), 0, tsdb.GenerateTestHistograms(1)[0], nil)
+		_, err = app.AppendHistogram(0, labels.FromStrings("a", "1", "a", "2"), 0, tsdbutil.GenerateTestHistograms(1)[0], nil)
		require.ErrorIs(t, err, tsdb.ErrInvalidSample, "should reject duplicate labels")
	})

@@ -151,7 +151,7 @@ func TestCommit(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numHistograms)
+		histograms := tsdbutil.GenerateTestHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(i), histograms[i], nil)

@@ -163,7 +163,7 @@ func TestCommit(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numHistograms)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(i), nil, floatHistograms[i])

@@ -257,7 +257,7 @@ func TestRollback(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numHistograms)
+		histograms := tsdbutil.GenerateTestHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(i), histograms[i], nil)

@@ -269,7 +269,7 @@ func TestRollback(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numHistograms)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(i), nil, floatHistograms[i])

@@ -374,7 +374,7 @@ func TestFullTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numHistograms)
+		histograms := tsdbutil.GenerateTestHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(lastTs), histograms[i], nil)

@@ -387,7 +387,7 @@ func TestFullTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numHistograms)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, int64(lastTs), nil, floatHistograms[i])

@@ -436,7 +436,7 @@ func TestPartialTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numDatapoints)
+		histograms := tsdbutil.GenerateTestHistograms(numDatapoints)

		for i := 0; i < numDatapoints; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, histograms[i], nil)

@@ -449,7 +449,7 @@ func TestPartialTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numDatapoints)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numDatapoints)

		for i := 0; i < numDatapoints; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, nil, floatHistograms[i])

@@ -475,7 +475,7 @@ func TestPartialTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numDatapoints)
+		histograms := tsdbutil.GenerateTestHistograms(numDatapoints)

		for i := 0; i < numDatapoints; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, histograms[i], nil)

@@ -488,7 +488,7 @@ func TestPartialTruncateWAL(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numDatapoints)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numDatapoints)

		for i := 0; i < numDatapoints; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, nil, floatHistograms[i])

@@ -529,7 +529,7 @@ func TestWALReplay(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		histograms := tsdb.GenerateTestHistograms(numHistograms)
+		histograms := tsdbutil.GenerateTestHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, histograms[i], nil)

@@ -541,7 +541,7 @@ func TestWALReplay(t *testing.T) {
	for _, l := range lbls {
		lset := labels.New(l...)

-		floatHistograms := tsdb.GenerateTestFloatHistograms(numHistograms)
+		floatHistograms := tsdbutil.GenerateTestFloatHistograms(numHistograms)

		for i := 0; i < numHistograms; i++ {
			_, err := app.AppendHistogram(0, lset, lastTs, nil, floatHistograms[i])

@@ -622,7 +622,7 @@ func Test_ExistingWAL_NextRef(t *testing.T) {
	}

	histogramCount := 10
-	histograms := tsdb.GenerateTestHistograms(histogramCount)
+	histograms := tsdbutil.GenerateTestHistograms(histogramCount)
	// Append <histogramCount> series
	for i := 0; i < histogramCount; i++ {
		lset := labels.FromStrings(model.MetricNameLabel, fmt.Sprintf("histogram_%d", i))
tsdb/db.go
@@ -131,6 +131,10 @@ type Options struct {
	// WALCompression will turn on Snappy compression for records on the WAL.
	WALCompression bool

+	// WALReplayConcurrency is the maximum number of CPUs that can simultaneously process WAL replay.
+	// If it is <= 0, then GOMAXPROCS is used.
+	WALReplayConcurrency int
+
	// StripeSize is the size in entries of the series hash map. Reducing the size will save memory but impact performance.
	StripeSize int

@@ -813,6 +817,9 @@ func open(dir string, l log.Logger, r prometheus.Registerer, opts *Options, rngs
	headOpts.PostingsForMatchersCacheTTL = opts.HeadPostingsForMatchersCacheTTL
	headOpts.PostingsForMatchersCacheSize = opts.HeadPostingsForMatchersCacheSize
	headOpts.PostingsForMatchersCacheForce = opts.HeadPostingsForMatchersCacheForce
+	if opts.WALReplayConcurrency > 0 {
+		headOpts.WALReplayConcurrency = opts.WALReplayConcurrency
+	}
	if opts.IsolationDisabled {
		// We only override this flag if isolation is disabled at DB level. We use the default otherwise.
		headOpts.IsolationDisabled = opts.IsolationDisabled
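Usage note: embedders of the TSDB can now bound replay parallelism, which is useful when many tenants replay WALs on one machine. A minimal sketch, assuming the tsdb.Open signature of this release (the helper name is illustrative):

import (
	"github.com/go-kit/log"
	"github.com/prometheus/prometheus/tsdb"
)

// openWithBoundedReplay opens a TSDB whose WAL replay uses at most 2 CPUs.
func openWithBoundedReplay(dir string) (*tsdb.DB, error) {
	opts := tsdb.DefaultOptions()
	opts.WALReplayConcurrency = 2 // <= 0 falls back to GOMAXPROCS
	return tsdb.Open(dir, log.NewNopLogger(), nil, opts, nil)
}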
164
tsdb/head.go
@@ -17,8 +17,8 @@ import (
	"fmt"
	"io"
	"math"
-	"math/rand"
	"path/filepath"
+	"runtime"
	"sync"
	"time"

@@ -59,6 +59,8 @@ var (

	// defaultIsolationDisabled is true if isolation is disabled by default.
	defaultIsolationDisabled = false
+
+	defaultWALReplayConcurrency = runtime.GOMAXPROCS(0)
)

// chunkDiskMapper is a temporary interface while we transition from

@@ -176,6 +178,11 @@ type HeadOptions struct {
	PostingsForMatchersCacheTTL   time.Duration
	PostingsForMatchersCacheSize  int
	PostingsForMatchersCacheForce bool
+
+	// WALReplayConcurrency is the maximum number of CPUs that can simultaneously process WAL replay.
+	// The default value is GOMAXPROCS.
+	// If it is set to a negative value or zero, the default value is used.
+	WALReplayConcurrency int
}

const (

@@ -197,6 +204,7 @@ func DefaultHeadOptions() *HeadOptions {
		PostingsForMatchersCacheTTL:   defaultPostingsForMatchersCacheTTL,
		PostingsForMatchersCacheSize:  defaultPostingsForMatchersCacheSize,
		PostingsForMatchersCacheForce: false,
+		WALReplayConcurrency:          defaultWALReplayConcurrency,
	}
	ho.OutOfOrderCapMax.Store(DefaultOutOfOrderCapMax)
	return ho

@@ -274,6 +282,10 @@ func NewHead(r prometheus.Registerer, l log.Logger, wal, wbl *wlog.WL, opts *HeadOptions
		opts.ChunkPool = chunkenc.NewPool()
	}

+	if opts.WALReplayConcurrency <= 0 {
+		opts.WALReplayConcurrency = defaultWALReplayConcurrency
+	}
+
	h.chunkDiskMapper, err = chunks.NewChunkDiskMapper(
		r,
		mmappedChunksDir(opts.ChunkDirRoot),

@@ -530,6 +542,17 @@ func newHeadMetrics(h *Head, r prometheus.Registerer) *headMetrics {
			}, func() float64 {
				return float64(h.iso.lastAppendID())
			}),
+			prometheus.NewGaugeFunc(prometheus.GaugeOpts{
+				Name: "prometheus_tsdb_head_chunks_storage_size_bytes",
+				Help: "Size of the chunks_head directory.",
+			}, func() float64 {
+				val, err := h.chunkDiskMapper.Size()
+				if err != nil {
+					level.Error(h.logger).Log("msg", "Failed to calculate size of \"chunks_head\" dir",
+						"err", err.Error())
+				}
+				return float64(val)
+			}),
		)
	}
	return m

@@ -596,20 +619,47 @@ func (h *Head) Init(minValidTime int64) error {

	if h.opts.EnableMemorySnapshotOnShutdown {
		level.Info(h.logger).Log("msg", "Chunk snapshot is enabled, replaying from the snapshot")
-		var err error
-		snapIdx, snapOffset, refSeries, err = h.loadChunkSnapshot()
-		if err != nil {
-			snapIdx, snapOffset = -1, 0
-			refSeries = make(map[chunks.HeadSeriesRef]*memSeries)
-
-			h.metrics.snapshotReplayErrorTotal.Inc()
-			level.Error(h.logger).Log("msg", "Failed to load chunk snapshot", "err", err)
-			// We clear the partially loaded data to replay fresh from the WAL.
-			if err := h.resetInMemoryState(); err != nil {
-				return err
-			}
-		}
-		level.Info(h.logger).Log("msg", "Chunk snapshot loading time", "duration", time.Since(start).String())
+		// If there are any WAL files, there should be at least one WAL file with an index that is current or newer
+		// than the snapshot index. If the WAL index is behind the snapshot index somehow, the snapshot is assumed
+		// to be outdated.
+		loadSnapshot := true
+		if h.wal != nil {
+			_, endAt, err := wlog.Segments(h.wal.Dir())
+			if err != nil {
+				return errors.Wrap(err, "finding WAL segments")
+			}
+
+			_, idx, _, err := LastChunkSnapshot(h.opts.ChunkDirRoot)
+			if err != nil && err != record.ErrNotFound {
+				level.Error(h.logger).Log("msg", "Could not find last snapshot", "err", err)
+			}
+
+			if err == nil && endAt < idx {
+				loadSnapshot = false
+				level.Warn(h.logger).Log("msg", "Last WAL file is behind snapshot, removing snapshots")
+				if err := DeleteChunkSnapshots(h.opts.ChunkDirRoot, math.MaxInt, math.MaxInt); err != nil {
+					level.Error(h.logger).Log("msg", "Error while deleting snapshot directories", "err", err)
+				}
+			}
+		}
+		if loadSnapshot {
+			var err error
+			snapIdx, snapOffset, refSeries, err = h.loadChunkSnapshot()
+			if err == nil {
+				level.Info(h.logger).Log("msg", "Chunk snapshot loading time", "duration", time.Since(start).String())
+			}
+			if err != nil {
+				snapIdx, snapOffset = -1, 0
+				refSeries = make(map[chunks.HeadSeriesRef]*memSeries)
+
+				h.metrics.snapshotReplayErrorTotal.Inc()
+				level.Error(h.logger).Log("msg", "Failed to load chunk snapshot", "err", err)
+				// We clear the partially loaded data to replay fresh from the WAL.
+				if err := h.resetInMemoryState(); err != nil {
+					return err
+				}
+			}
+		}
	}

	mmapChunkReplayStart := time.Now()
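The new guard compares the last WAL segment index against the snapshot index before trusting the snapshot. A condensed sketch of that decision, assuming wlog.Segments and LastChunkSnapshot behave as used above (the helper name is illustrative, and a real implementation would distinguish record.ErrNotFound from other errors):

// usableSnapshot reports whether the on-disk snapshot may be replayed:
// some WAL segment must be at least as new as the snapshot index.
func usableSnapshot(walDir, chunkDir string) (bool, error) {
	_, endAt, err := wlog.Segments(walDir)
	if err != nil {
		return false, err
	}
	_, snapIdx, _, err := tsdb.LastChunkSnapshot(chunkDir)
	if err != nil {
		// Covers record.ErrNotFound: there is simply no snapshot to load.
		return false, nil
	}
	return endAt >= snapIdx, nil
}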
@@ -2073,93 +2123,3 @@ func (h *Head) updateWALReplayStatusRead(current int) {

	h.stats.WALReplayStatus.Current = current
}
-
-func GenerateTestHistograms(n int) (r []*histogram.Histogram) {
-	for i := 0; i < n; i++ {
-		h := GenerateTestHistogram(i)
-		if i > 0 {
-			h.CounterResetHint = histogram.NotCounterReset
-		}
-		r = append(r, h)
-	}
-	return r
-}
-
-// Generates a test histogram, it is up to the user to set any known counter reset hint.
-func GenerateTestHistogram(i int) *histogram.Histogram {
-	return &histogram.Histogram{
-		Count:         10 + uint64(i*8),
-		ZeroCount:     2 + uint64(i),
-		ZeroThreshold: 0.001,
-		Sum:           18.4 * float64(i+1),
-		Schema:        1,
-		PositiveSpans: []histogram.Span{
-			{Offset: 0, Length: 2},
-			{Offset: 1, Length: 2},
-		},
-		PositiveBuckets: []int64{int64(i + 1), 1, -1, 0},
-		NegativeSpans: []histogram.Span{
-			{Offset: 0, Length: 2},
-			{Offset: 1, Length: 2},
-		},
-		NegativeBuckets: []int64{int64(i + 1), 1, -1, 0},
-	}
-}
-
-func GenerateTestGaugeHistograms(n int) (r []*histogram.Histogram) {
-	for x := 0; x < n; x++ {
-		r = append(r, GenerateTestGaugeHistogram(rand.Intn(n)))
-	}
-	return r
-}
-
-func GenerateTestGaugeHistogram(i int) *histogram.Histogram {
-	h := GenerateTestHistogram(i)
-	h.CounterResetHint = histogram.GaugeType
-	return h
-}
-
-func GenerateTestFloatHistograms(n int) (r []*histogram.FloatHistogram) {
-	for i := 0; i < n; i++ {
-		h := GenerateTestFloatHistogram(i)
-		if i > 0 {
-			h.CounterResetHint = histogram.NotCounterReset
-		}
-		r = append(r, h)
-	}
-	return r
-}
-
-// Generates a test float histogram, it is up to the user to set any known counter reset hint.
-func GenerateTestFloatHistogram(i int) *histogram.FloatHistogram {
-	return &histogram.FloatHistogram{
-		Count:         10 + float64(i*8),
-		ZeroCount:     2 + float64(i),
-		ZeroThreshold: 0.001,
-		Sum:           18.4 * float64(i+1),
-		Schema:        1,
-		PositiveSpans: []histogram.Span{
-			{Offset: 0, Length: 2},
-			{Offset: 1, Length: 2},
-		},
-		PositiveBuckets: []float64{float64(i + 1), float64(i + 2), float64(i + 1), float64(i + 1)},
-		NegativeSpans: []histogram.Span{
-			{Offset: 0, Length: 2},
-			{Offset: 1, Length: 2},
-		},
-		NegativeBuckets: []float64{float64(i + 1), float64(i + 2), float64(i + 1), float64(i + 1)},
-	}
-}
tsdb/head_read.go
@@ -333,12 +333,10 @@ func (h *headChunkReader) Chunk(meta chunks.Meta) (chunkenc.Chunk, error) {
	s.Unlock()

	return &safeChunk{
-		Chunk:           c.chunk,
-		s:               s,
-		cid:             cid,
-		isoState:        h.isoState,
-		chunkDiskMapper: h.head.chunkDiskMapper,
-		memChunkPool:    &h.head.memChunkPool,
+		Chunk:    c.chunk,
+		s:        s,
+		cid:      cid,
+		isoState: h.isoState,
	}, nil
}
@@ -629,43 +627,24 @@ func (b boundedIterator) Seek(t int64) chunkenc.ValueType {
// safeChunk makes sure that the chunk can be accessed without a race condition
type safeChunk struct {
	chunkenc.Chunk
-	s               *memSeries
-	cid             chunks.HeadChunkID
-	isoState        *isolationState
-	chunkDiskMapper chunkDiskMapper
-	memChunkPool    *sync.Pool
+	s        *memSeries
+	cid      chunks.HeadChunkID
+	isoState *isolationState
}

func (c *safeChunk) Iterator(reuseIter chunkenc.Iterator) chunkenc.Iterator {
	c.s.Lock()
-	it := c.s.iterator(c.cid, c.isoState, c.chunkDiskMapper, c.memChunkPool, reuseIter)
+	it := c.s.iterator(c.cid, c.Chunk, c.isoState, reuseIter)
	c.s.Unlock()
	return it
}

// iterator returns a chunk iterator for the requested chunkID, or a NopIterator if the requested ID is out of range.
// It is unsafe to call this concurrently with s.append(...) without holding the series lock.
-func (s *memSeries) iterator(id chunks.HeadChunkID, isoState *isolationState, chunkDiskMapper chunkDiskMapper, memChunkPool *sync.Pool, it chunkenc.Iterator) chunkenc.Iterator {
-	c, garbageCollect, err := s.chunk(id, chunkDiskMapper, memChunkPool)
-	// TODO(fabxc): Work around! An error will be returns when a querier have retrieved a pointer to a
-	// series's chunk, which got then garbage collected before it got
-	// accessed. We must ensure to not garbage collect as long as any
-	// readers still hold a reference.
-	if err != nil {
-		return chunkenc.NewNopIterator()
-	}
-	defer func() {
-		if garbageCollect {
-			// Set this to nil so that Go GC can collect it after it has been used.
-			// This should be done always at the end.
-			c.chunk = nil
-			memChunkPool.Put(c)
-		}
-	}()
-
+func (s *memSeries) iterator(id chunks.HeadChunkID, c chunkenc.Chunk, isoState *isolationState, it chunkenc.Iterator) chunkenc.Iterator {
	ix := int(id) - int(s.firstChunkID)

-	numSamples := c.chunk.NumSamples()
+	numSamples := c.NumSamples()
	stopAfter := numSamples

	if isoState != nil && !isoState.IsolationDisabled() {

@@ -710,9 +689,9 @@ func (s *memSeries) iterator(id chunks.HeadChunkID, c chunkenc.Chunk, isoState *isolationState, it chunkenc.Iterator) chunkenc.Iterator {
		return chunkenc.NewNopIterator()
	}
	if stopAfter == numSamples {
-		return c.chunk.Iterator(it)
+		return c.Iterator(it)
	}
-	return makeStopIterator(c.chunk, it, stopAfter)
+	return makeStopIterator(c, it, stopAfter)
}

// stopIterator wraps an Iterator, but only returns the first
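After this refactor, safeChunk no longer resolves the chunk itself: Chunk() does the lookup (and garbage-collection bookkeeping) once, and the iterator works on a plain chunkenc.Chunk. For reference, iterating any chunkenc.Chunk follows this pattern; a sketch with an illustrative helper name:

// countSamples walks a chunk with the generic iterator API.
func countSamples(c chunkenc.Chunk) int {
	n := 0
	it := c.Iterator(nil)
	for vt := it.Next(); vt != chunkenc.ValNone; vt = it.Next() {
		// Use it.At / it.AtHistogram / it.AtFloatHistogram depending on vt.
		n++
	}
	return n
}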
tsdb/head_test.go
@@ -538,11 +538,18 @@ func TestHead_ReadWAL(t *testing.T) {
				require.NoError(t, c.Err())
				return x
			}
-			require.Equal(t, []sample{{100, 2, nil, nil}, {101, 5, nil, nil}}, expandChunk(s10.iterator(0, nil, head.chunkDiskMapper, nil, nil)))
-			require.Equal(t, []sample{{101, 6, nil, nil}}, expandChunk(s50.iterator(0, nil, head.chunkDiskMapper, nil, nil)))
+
+			c, _, err := s10.chunk(0, head.chunkDiskMapper, &head.memChunkPool)
+			require.NoError(t, err)
+			require.Equal(t, []sample{{100, 2, nil, nil}, {101, 5, nil, nil}}, expandChunk(c.chunk.Iterator(nil)))
+			c, _, err = s50.chunk(0, head.chunkDiskMapper, &head.memChunkPool)
+			require.NoError(t, err)
+			require.Equal(t, []sample{{101, 6, nil, nil}}, expandChunk(c.chunk.Iterator(nil)))
			// The samples before the new series record should be discarded since a duplicate record
			// is only possible when old samples were compacted.
-			require.Equal(t, []sample{{101, 7, nil, nil}}, expandChunk(s100.iterator(0, nil, head.chunkDiskMapper, nil, nil)))
+			c, _, err = s100.chunk(0, head.chunkDiskMapper, &head.memChunkPool)
+			require.NoError(t, err)
+			require.Equal(t, []sample{{101, 7, nil, nil}}, expandChunk(c.chunk.Iterator(nil)))

			q, err := head.ExemplarQuerier(context.Background())
			require.NoError(t, err)

@@ -1347,7 +1354,7 @@ func TestMemSeries_appendHistogram(t *testing.T) {
	lbls := labels.Labels{}
	s := newMemSeries(lbls, 1, labels.StableHash(lbls), 0, defaultIsolationDisabled)

-	histograms := GenerateTestHistograms(4)
+	histograms := tsdbutil.GenerateTestHistograms(4)
	histogramWithOneMoreBucket := histograms[3].Copy()
	histogramWithOneMoreBucket.Count++
	histogramWithOneMoreBucket.Sum += 1.23

@@ -2636,7 +2643,13 @@ func TestIteratorSeekIntoBuffer(t *testing.T) {
		require.True(t, ok, "sample append failed")
	}

-	it := s.iterator(s.headChunkID(len(s.mmappedChunks)), nil, chunkDiskMapper, nil, nil)
+	c, _, err := s.chunk(0, chunkDiskMapper, &sync.Pool{
+		New: func() interface{} {
+			return &memChunk{}
+		},
+	})
+	require.NoError(t, err)
+	it := c.chunk.Iterator(nil)

	// First point.
	require.Equal(t, chunkenc.ValFloat, it.Seek(0))

@@ -2906,7 +2919,7 @@ func TestAppendHistogram(t *testing.T) {
			expHistograms := make([]tsdbutil.Sample, 0, 2*numHistograms)

			// Counter integer histograms.
-			for _, h := range GenerateTestHistograms(numHistograms) {
+			for _, h := range tsdbutil.GenerateTestHistograms(numHistograms) {
				_, err := app.AppendHistogram(0, l, ingestTs, h, nil)
				require.NoError(t, err)
				expHistograms = append(expHistograms, sample{t: ingestTs, h: h})

@@ -2918,7 +2931,7 @@ func TestAppendHistogram(t *testing.T) {
			}

			// Gauge integer histograms.
-			for _, h := range GenerateTestGaugeHistograms(numHistograms) {
+			for _, h := range tsdbutil.GenerateTestGaugeHistograms(numHistograms) {
				_, err := app.AppendHistogram(0, l, ingestTs, h, nil)
				require.NoError(t, err)
				expHistograms = append(expHistograms, sample{t: ingestTs, h: h})

@@ -2932,7 +2945,7 @@ func TestAppendHistogram(t *testing.T) {
			expFloatHistograms := make([]tsdbutil.Sample, 0, 2*numHistograms)

			// Counter float histograms.
-			for _, fh := range GenerateTestFloatHistograms(numHistograms) {
+			for _, fh := range tsdbutil.GenerateTestFloatHistograms(numHistograms) {
				_, err := app.AppendHistogram(0, l, ingestTs, nil, fh)
				require.NoError(t, err)
				expFloatHistograms = append(expFloatHistograms, sample{t: ingestTs, fh: fh})

@@ -2944,7 +2957,7 @@ func TestAppendHistogram(t *testing.T) {
			}

			// Gauge float histograms.
-			for _, fh := range GenerateTestGaugeFloatHistograms(numHistograms) {
+			for _, fh := range tsdbutil.GenerateTestGaugeFloatHistograms(numHistograms) {
				_, err := app.AppendHistogram(0, l, ingestTs, nil, fh)
				require.NoError(t, err)
				expFloatHistograms = append(expFloatHistograms, sample{t: ingestTs, fh: fh})

@@ -3014,9 +3027,9 @@ func TestHistogramInWALAndMmapChunk(t *testing.T) {
	app = head.Appender(context.Background())
	var hists []*histogram.Histogram
	if gauge {
-		hists = GenerateTestGaugeHistograms(numHistograms)
+		hists = tsdbutil.GenerateTestGaugeHistograms(numHistograms)
	} else {
-		hists = GenerateTestHistograms(numHistograms)
+		hists = tsdbutil.GenerateTestHistograms(numHistograms)
	}
	for _, h := range hists {
		h.Count = h.Count * 2

@@ -3037,9 +3050,9 @@ func TestHistogramInWALAndMmapChunk(t *testing.T) {
	app = head.Appender(context.Background())
	var hists []*histogram.FloatHistogram
	if gauge {
-		hists = GenerateTestGaugeFloatHistograms(numHistograms)
+		hists = tsdbutil.GenerateTestGaugeFloatHistograms(numHistograms)
	} else {
-		hists = GenerateTestFloatHistograms(numHistograms)
+		hists = tsdbutil.GenerateTestFloatHistograms(numHistograms)
	}
	for _, h := range hists {
		h.Count = h.Count * 2

@@ -3077,9 +3090,9 @@ func TestHistogramInWALAndMmapChunk(t *testing.T) {
	app = head.Appender(context.Background())
	var hists []*histogram.Histogram
	if gauge {
-		hists = GenerateTestGaugeHistograms(100)
+		hists = tsdbutil.GenerateTestGaugeHistograms(100)
	} else {
-		hists = GenerateTestHistograms(100)
+		hists = tsdbutil.GenerateTestHistograms(100)
	}
	for _, h := range hists {
		ts++

@@ -3114,9 +3127,9 @@ func TestHistogramInWALAndMmapChunk(t *testing.T) {
	app = head.Appender(context.Background())
	var hists []*histogram.FloatHistogram
	if gauge {
-		hists = GenerateTestGaugeFloatHistograms(100)
+		hists = tsdbutil.GenerateTestGaugeFloatHistograms(100)
	} else {
-		hists = GenerateTestFloatHistograms(100)
+		hists = tsdbutil.GenerateTestFloatHistograms(100)
	}
	for _, h := range hists {
		ts++

@@ -3492,14 +3505,14 @@ func TestHistogramMetrics(t *testing.T) {
	for x := 0; x < 5; x++ {
		expHSeries++
		l := labels.FromStrings("a", fmt.Sprintf("b%d", x))
-		for i, h := range GenerateTestHistograms(numHistograms) {
+		for i, h := range tsdbutil.GenerateTestHistograms(numHistograms) {
			app := head.Appender(context.Background())
			_, err := app.AppendHistogram(0, l, int64(i), h, nil)
			require.NoError(t, err)
			require.NoError(t, app.Commit())
			expHSamples++
		}
-		for i, fh := range GenerateTestFloatHistograms(numHistograms) {
+		for i, fh := range tsdbutil.GenerateTestFloatHistograms(numHistograms) {
			app := head.Appender(context.Background())
			_, err := app.AppendHistogram(0, l, int64(numHistograms+i), nil, fh)
			require.NoError(t, err)

@@ -3614,7 +3627,7 @@ func testHistogramStaleSampleHelper(t *testing.T, floatHistogram bool) {

	// Adding stale in the same appender.
	app := head.Appender(context.Background())
-	for _, h := range GenerateTestHistograms(numHistograms) {
+	for _, h := range tsdbutil.GenerateTestHistograms(numHistograms) {
		var err error
		if floatHistogram {
			_, err = app.AppendHistogram(0, l, 100*int64(len(expHistograms)), nil, h.ToFloat())

@@ -3643,7 +3656,7 @@ func testHistogramStaleSampleHelper(t *testing.T, floatHistogram bool) {

	// Adding stale in different appender and continuing series after a stale sample.
	app = head.Appender(context.Background())
-	for _, h := range GenerateTestHistograms(2 * numHistograms)[numHistograms:] {
+	for _, h := range tsdbutil.GenerateTestHistograms(2 * numHistograms)[numHistograms:] {
		var err error
		if floatHistogram {
			_, err = app.AppendHistogram(0, l, 100*int64(len(expHistograms)), nil, h.ToFloat())

@@ -3722,7 +3735,7 @@ func TestHistogramCounterResetHeader(t *testing.T) {
		}
	}

-	h := GenerateTestHistograms(1)[0]
+	h := tsdbutil.GenerateTestHistograms(1)[0]
	h.PositiveBuckets = []int64{100, 1, 1, 1}
	h.NegativeBuckets = []int64{100, 1, 1, 1}
	h.Count = 1000

@@ -3799,8 +3812,8 @@ func TestAppendingDifferentEncodingToSameSeries(t *testing.T) {
	})
	db.DisableCompactions()

-	hists := GenerateTestHistograms(10)
-	floatHists := GenerateTestFloatHistograms(10)
+	hists := tsdbutil.GenerateTestHistograms(10)
+	floatHists := tsdbutil.GenerateTestFloatHistograms(10)
	lbls := labels.FromStrings("a", "b")

	var expResult []tsdbutil.Sample

@@ -4428,7 +4441,7 @@ func TestHistogramValidation(t *testing.T) {
		errMsgFloat string // To be considered for float histogram only if it is non-empty.
	}{
		"valid histogram": {
-			h: GenerateTestHistograms(1)[0],
+			h: tsdbutil.GenerateTestHistograms(1)[0],
		},
		"rejects histogram who has too few negative buckets": {
			h: &histogram.Histogram{

@@ -4711,7 +4724,7 @@ func TestGaugeHistogramWALAndChunkHeader(t *testing.T) {
		require.NoError(t, app.Commit())
	}

-	hists := GenerateTestGaugeHistograms(5)
+	hists := tsdbutil.GenerateTestGaugeHistograms(5)
	hists[0].CounterResetHint = histogram.UnknownCounterReset
	appendHistogram(hists[0])
	appendHistogram(hists[1])

@@ -4786,7 +4799,7 @@ func TestGaugeFloatHistogramWALAndChunkHeader(t *testing.T) {
		require.NoError(t, app.Commit())
	}

-	hists := GenerateTestGaugeFloatHistograms(5)
+	hists := tsdbutil.GenerateTestGaugeFloatHistograms(5)
	hists[0].CounterResetHint = histogram.UnknownCounterReset
	appendHistogram(hists[0])
	appendHistogram(hists[1])

@@ -4843,3 +4856,58 @@ func TestGaugeFloatHistogramWALAndChunkHeader(t *testing.T) {

	checkHeaders()
}
+
+func TestSnapshotAheadOfWALError(t *testing.T) {
+	head, _ := newTestHead(t, 120*4, false, false)
+	head.opts.EnableMemorySnapshotOnShutdown = true
+	// Add a sample to fill WAL.
+	app := head.Appender(context.Background())
+	_, err := app.Append(0, labels.FromStrings("foo", "bar"), 10, 10)
+	require.NoError(t, err)
+	require.NoError(t, app.Commit())
+
+	// Increment snapshot index to create sufficiently large difference.
+	for i := 0; i < 2; i++ {
+		_, err = head.wal.NextSegment()
+		require.NoError(t, err)
+	}
+	require.NoError(t, head.Close()) // This will create a snapshot.
+
+	_, idx, _, err := LastChunkSnapshot(head.opts.ChunkDirRoot)
+	require.NoError(t, err)
+	require.Equal(t, 2, idx)
+
+	// Restart the WAL while keeping the old snapshot. The new head is created manually in this case in order
+	// to keep using the same snapshot directory instead of a random one.
+	require.NoError(t, os.RemoveAll(head.wal.Dir()))
+	head.opts.EnableMemorySnapshotOnShutdown = false
+	w, _ := wlog.NewSize(nil, nil, head.wal.Dir(), 32768, false)
+	head, err = NewHead(nil, nil, w, nil, head.opts, nil)
+	require.NoError(t, err)
+	// Add a sample to fill WAL.
+	app = head.Appender(context.Background())
+	_, err = app.Append(0, labels.FromStrings("foo", "bar"), 10, 10)
+	require.NoError(t, err)
+	require.NoError(t, app.Commit())
+	lastSegment, _, _ := w.LastSegmentAndOffset()
+	require.Equal(t, 0, lastSegment)
+	require.NoError(t, head.Close())
+
+	// New WAL is saved, but old snapshot still exists.
+	_, idx, _, err = LastChunkSnapshot(head.opts.ChunkDirRoot)
+	require.NoError(t, err)
+	require.Equal(t, 2, idx)
+
+	// Create new Head which should detect the incorrect index and delete the snapshot.
+	head.opts.EnableMemorySnapshotOnShutdown = true
+	w, _ = wlog.NewSize(nil, nil, head.wal.Dir(), 32768, false)
+	head, err = NewHead(nil, nil, w, nil, head.opts, nil)
+	require.NoError(t, err)
+	require.NoError(t, head.Init(math.MinInt64))
+
+	// Verify that snapshot directory does not exist anymore.
+	_, _, _, err = LastChunkSnapshot(head.opts.ChunkDirRoot)
+	require.Equal(t, record.ErrNotFound, err)
+
+	require.NoError(t, head.Close())
+}
tsdb/head_wal.go
@@ -18,7 +18,6 @@ import (
	"math"
	"os"
	"path/filepath"
-	"runtime"
	"strconv"
	"strings"
	"sync"
@@ -65,13 +64,13 @@ func (h *Head) loadWAL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
	// Start workers that each process samples for a partition of the series ID space.
	var (
		wg              sync.WaitGroup
-		n               = runtime.GOMAXPROCS(0)
-		processors      = make([]walSubsetProcessor, n)
+		concurrency     = h.opts.WALReplayConcurrency
+		processors      = make([]walSubsetProcessor, concurrency)
		exemplarsInput  chan record.RefExemplar

		dec             record.Decoder
-		shards          = make([][]record.RefSample, n)
-		histogramShards = make([][]histogramRecord, n)
+		shards          = make([][]record.RefSample, concurrency)
+		histogramShards = make([][]histogramRecord, concurrency)

		decoded                      = make(chan interface{}, 10)
		decodeErr, seriesCreationErr error

@@ -116,7 +115,7 @@ func (h *Head) loadWAL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
		// For CorruptionErr ensure to terminate all workers before exiting.
		_, ok := err.(*wlog.CorruptionErr)
		if ok || seriesCreationErr != nil {
-			for i := 0; i < n; i++ {
+			for i := 0; i < concurrency; i++ {
				processors[i].closeAndDrain()
			}
			close(exemplarsInput)

@@ -124,8 +123,8 @@ func (h *Head) loadWAL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
		}
	}()

-	wg.Add(n)
-	for i := 0; i < n; i++ {
+	wg.Add(concurrency)
+	for i := 0; i < concurrency; i++ {
		processors[i].setup()

		go func(wp *walSubsetProcessor) {

@@ -276,7 +275,7 @@ Outer:
					multiRef[walSeries.Ref] = mSeries.ref
				}

-				idx := uint64(mSeries.ref) % uint64(n)
+				idx := uint64(mSeries.ref) % uint64(concurrency)
				processors[idx].input <- walSubsetProcessorInputItem{walSeriesRef: walSeries.Ref, existingSeries: mSeries}
			}
			//nolint:staticcheck // Ignore SA6002 relax staticcheck verification.

@@ -293,7 +292,7 @@ Outer:
				if len(samples) < m {
					m = len(samples)
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if shards[i] == nil {
						shards[i] = processors[i].reuseBuf()
					}

@@ -305,10 +304,10 @@ Outer:
					if r, ok := multiRef[sam.Ref]; ok {
						sam.Ref = r
					}
-					mod := uint64(sam.Ref) % uint64(n)
+					mod := uint64(sam.Ref) % uint64(concurrency)
					shards[mod] = append(shards[mod], sam)
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if len(shards[i]) > 0 {
						processors[i].input <- walSubsetProcessorInputItem{samples: shards[i]}
						shards[i] = nil

@@ -351,7 +350,7 @@ Outer:
				if len(samples) < m {
					m = len(samples)
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if histogramShards[i] == nil {
						histogramShards[i] = processors[i].reuseHistogramBuf()
					}

@@ -363,10 +362,10 @@ Outer:
					if r, ok := multiRef[sam.Ref]; ok {
						sam.Ref = r
					}
-					mod := uint64(sam.Ref) % uint64(n)
+					mod := uint64(sam.Ref) % uint64(concurrency)
					histogramShards[mod] = append(histogramShards[mod], histogramRecord{ref: sam.Ref, t: sam.T, h: sam.H})
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if len(histogramShards[i]) > 0 {
						processors[i].input <- walSubsetProcessorInputItem{histogramSamples: histogramShards[i]}
						histogramShards[i] = nil

@@ -388,7 +387,7 @@ Outer:
				if len(samples) < m {
					m = len(samples)
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if histogramShards[i] == nil {
						histogramShards[i] = processors[i].reuseHistogramBuf()
					}

@@ -400,10 +399,10 @@ Outer:
					if r, ok := multiRef[sam.Ref]; ok {
						sam.Ref = r
					}
-					mod := uint64(sam.Ref) % uint64(n)
+					mod := uint64(sam.Ref) % uint64(concurrency)
					histogramShards[mod] = append(histogramShards[mod], histogramRecord{ref: sam.Ref, t: sam.T, fh: sam.FH})
				}
-				for i := 0; i < n; i++ {
+				for i := 0; i < concurrency; i++ {
					if len(histogramShards[i]) > 0 {
						processors[i].input <- walSubsetProcessorInputItem{histogramSamples: histogramShards[i]}
						histogramShards[i] = nil

@@ -444,7 +443,7 @@ Outer:
	}

	// Signal termination to each worker and wait for it to close its output channel.
-	for i := 0; i < n; i++ {
+	for i := 0; i < concurrency; i++ {
		processors[i].closeAndDrain()
	}
	close(exemplarsInput)
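Throughout the replay loop, work is sharded by series reference modulo the worker count, so all samples of one series always land on the same worker and per-series ordering is preserved. A generic sketch of the pattern (names are illustrative):

// shardByRef deterministically partitions per-series work across replay workers.
func shardByRef(refs []uint64, concurrency int) [][]uint64 {
	shards := make([][]uint64, concurrency)
	for _, ref := range refs {
		i := ref % uint64(concurrency)
		shards[i] = append(shards[i], ref)
	}
	return shards
}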
@@ -687,12 +686,12 @@ func (h *Head) loadWBL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
	lastSeq, lastOff := lastMmapRef.Unpack()
	// Start workers that each process samples for a partition of the series ID space.
	var (
-		wg         sync.WaitGroup
-		n          = runtime.GOMAXPROCS(0)
-		processors = make([]wblSubsetProcessor, n)
+		wg          sync.WaitGroup
+		concurrency = h.opts.WALReplayConcurrency
+		processors  = make([]wblSubsetProcessor, concurrency)

		dec    record.Decoder
-		shards = make([][]record.RefSample, n)
+		shards = make([][]record.RefSample, concurrency)

		decodedCh = make(chan interface{}, 10)
		decodeErr error

@@ -714,15 +713,15 @@ func (h *Head) loadWBL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
		_, ok := err.(*wlog.CorruptionErr)
		if ok {
			err = &errLoadWbl{err: err}
-			for i := 0; i < n; i++ {
+			for i := 0; i < concurrency; i++ {
				processors[i].closeAndDrain()
			}
			wg.Wait()
		}
	}()

-	wg.Add(n)
-	for i := 0; i < n; i++ {
+	wg.Add(concurrency)
+	for i := 0; i < concurrency; i++ {
		processors[i].setup()

		go func(wp *wblSubsetProcessor) {

@@ -781,17 +780,17 @@ func (h *Head) loadWBL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
			if len(samples) < m {
				m = len(samples)
			}
-			for i := 0; i < n; i++ {
+			for i := 0; i < concurrency; i++ {
				shards[i] = processors[i].reuseBuf()
			}
			for _, sam := range samples[:m] {
				if r, ok := multiRef[sam.Ref]; ok {
					sam.Ref = r
				}
-				mod := uint64(sam.Ref) % uint64(n)
+				mod := uint64(sam.Ref) % uint64(concurrency)
				shards[mod] = append(shards[mod], sam)
			}
-			for i := 0; i < n; i++ {
+			for i := 0; i < concurrency; i++ {
				processors[i].input <- shards[i]
			}
			samples = samples[m:]

@@ -818,7 +817,7 @@ func (h *Head) loadWBL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
				mmapMarkerUnknownRefs.Inc()
				continue
			}
-			idx := uint64(ms.ref) % uint64(n)
+			idx := uint64(ms.ref) % uint64(concurrency)
			// It is possible that some old sample is being processed in processWALSamples that
			// could cause race below. So we wait for the goroutine to empty input the buffer and finish
			// processing all old samples after emptying the buffer.

@@ -847,7 +846,7 @@ func (h *Head) loadWBL(r *wlog.Reader, multiRef map[chunks.HeadSeriesRef]chunks.HeadSeriesRef
	}

	// Signal termination to each worker and wait for it to close its output channel.
-	for i := 0; i < n; i++ {
+	for i := 0; i < concurrency; i++ {
		processors[i].closeAndDrain()
	}
	wg.Wait()
@@ -1383,18 +1382,18 @@ func (h *Head) loadChunkSnapshot() (int, int, map[chunks.HeadSeriesRef]*memSeries
	var (
		numSeries        = 0
		unknownRefs      = int64(0)
-		n                = runtime.GOMAXPROCS(0)
+		concurrency      = h.opts.WALReplayConcurrency
		wg               sync.WaitGroup
-		recordChan       = make(chan chunkSnapshotRecord, 5*n)
-		shardedRefSeries = make([]map[chunks.HeadSeriesRef]*memSeries, n)
-		errChan          = make(chan error, n)
+		recordChan       = make(chan chunkSnapshotRecord, 5*concurrency)
+		shardedRefSeries = make([]map[chunks.HeadSeriesRef]*memSeries, concurrency)
+		errChan          = make(chan error, concurrency)
		refSeries        map[chunks.HeadSeriesRef]*memSeries
		exemplarBuf      []record.RefExemplar
		dec              record.Decoder
	)

-	wg.Add(n)
-	for i := 0; i < n; i++ {
+	wg.Add(concurrency)
+	for i := 0; i < concurrency; i++ {
		go func(idx int, rc <-chan chunkSnapshotRecord) {
			defer wg.Done()
			defer func() {
0
tsdb/test.txt
Normal file
@@ -1,58 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import "testing"

func BenchmarkMapConversion(b *testing.B) {
	type key string
	type val string

	m := map[key]val{
		"job":       "node",
		"instance":  "123.123.1.211:9090",
		"path":      "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":    "GET",
		"namespace": "system",
		"status":    "500",
	}

	var sm map[string]string

	for i := 0; i < b.N; i++ {
		sm = make(map[string]string, len(m))
		for k, v := range m {
			sm[string(k)] = string(v)
		}
	}
}

func BenchmarkListIter(b *testing.B) {
	var list []uint32
	for i := 0; i < 1e4; i++ {
		list = append(list, uint32(i))
	}

	b.ResetTimer()

	total := uint32(0)

	for j := 0; j < b.N; j++ {
		sum := uint32(0)
		for _, k := range list {
			sum += k
		}
		total += sum
	}
}
@@ -1,123 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"crypto/rand"
	"fmt"
	"hash/crc32"
	"testing"

	"github.com/cespare/xxhash/v2"
	sip13 "github.com/dgryski/go-sip13"
)

type pair struct {
	name, value string
}

var testInput = []pair{
	{"job", "node"},
	{"instance", "123.123.1.211:9090"},
	{"path", "/api/v1/namespaces/<namespace>/deployments/<name>"},
	{"method", "GET"},
	{"namespace", "system"},
	{"status", "500"},
}

func BenchmarkHash(b *testing.B) {
	input := []byte{}
	for _, v := range testInput {
		input = append(input, v.name...)
		input = append(input, '\xff')
		input = append(input, v.value...)
		input = append(input, '\xff')
	}

	var total uint64

	var k0 uint64 = 0x0706050403020100
	var k1 uint64 = 0x0f0e0d0c0b0a0908

	for name, f := range map[string]func(b []byte) uint64{
		"xxhash": xxhash.Sum64,
		"fnv64":  fnv64a,
		"sip13":  func(b []byte) uint64 { return sip13.Sum64(k0, k1, b) },
	} {
		b.Run(name, func(b *testing.B) {
			b.SetBytes(int64(len(input)))
			total = 0
			for i := 0; i < b.N; i++ {
				total += f(input)
			}
		})
	}
}

// hashAdd adds a string to a fnv64a hash value, returning the updated hash.
func fnv64a(b []byte) uint64 {
	const (
		offset64 = 14695981039346656037
		prime64  = 1099511628211
	)

	h := uint64(offset64)
	for x := range b {
		h ^= uint64(x)
		h *= prime64
	}
	return h
}

func BenchmarkCRC32_diff(b *testing.B) {
	data := [][]byte{}

	for i := 0; i < 1000; i++ {
		b := make([]byte, 512)
		rand.Read(b)
		data = append(data, b)
	}

	ctab := crc32.MakeTable(crc32.Castagnoli)
	total := uint32(0)

	b.Run("direct", func(b *testing.B) {
		b.ReportAllocs()

		for i := 0; i < b.N; i++ {
			total += crc32.Checksum(data[i%1000], ctab)
		}
	})
	b.Run("hash-reuse", func(b *testing.B) {
		b.ReportAllocs()
		h := crc32.New(ctab)

		for i := 0; i < b.N; i++ {
			h.Reset()
			h.Write(data[i%1000])
			total += h.Sum32()
		}
	})
	b.Run("hash-new", func(b *testing.B) {
		b.ReportAllocs()

		for i := 0; i < b.N; i++ {
			h := crc32.New(ctab)
			h.Write(data[i%1000])
			total += h.Sum32()
		}
	})

	fmt.Println(total)
}
|
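The three sub-benchmarks compare driving hash/crc32 through the one-shot Checksum function, a reused hash.Hash32, and a freshly allocated hash per iteration; only the "hash-new" variant allocates on every pass. A minimal usage sketch of the one-shot form (the table should be built once and shared):

package main

import (
	"fmt"
	"hash/crc32"
)

// Building the Castagnoli table once makes each crc32.Checksum call allocation-free.
var castagnoli = crc32.MakeTable(crc32.Castagnoli)

func main() {
	fmt.Printf("%08x\n", crc32.Checksum([]byte("hello"), castagnoli))
}
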
@@ -1,214 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package test

import (
	"bytes"
	"crypto/rand"
	"testing"

	"github.com/prometheus/prometheus/model/labels"
)

func BenchmarkMapClone(b *testing.B) {
	m := map[string]string{
		"job":        "node",
		"instance":   "123.123.1.211:9090",
		"path":       "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":     "GET",
		"namespace":  "system",
		"status":     "500",
		"prometheus": "prometheus-core-1",
		"datacenter": "eu-west-1",
		"pod_name":   "abcdef-99999-defee",
	}

	for i := 0; i < b.N; i++ {
		res := make(map[string]string, len(m))
		for k, v := range m {
			res[k] = v
		}
		m = res
	}
}

func BenchmarkLabelsClone(b *testing.B) {
	m := map[string]string{
		"job":        "node",
		"instance":   "123.123.1.211:9090",
		"path":       "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":     "GET",
		"namespace":  "system",
		"status":     "500",
		"prometheus": "prometheus-core-1",
		"datacenter": "eu-west-1",
		"pod_name":   "abcdef-99999-defee",
	}
	l := labels.FromMap(m)

	for i := 0; i < b.N; i++ {
		l = l.Copy()
	}
}

func BenchmarkLabelMapAccess(b *testing.B) {
	m := map[string]string{
		"job":        "node",
		"instance":   "123.123.1.211:9090",
		"path":       "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":     "GET",
		"namespace":  "system",
		"status":     "500",
		"prometheus": "prometheus-core-1",
		"datacenter": "eu-west-1",
		"pod_name":   "abcdef-99999-defee",
	}

	var v string

	for k := range m {
		b.Run(k, func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				v = m[k]
			}
		})
	}

	_ = v
}

func BenchmarkLabelSetAccess(b *testing.B) {
	m := map[string]string{
		"job":        "node",
		"instance":   "123.123.1.211:9090",
		"path":       "/api/v1/namespaces/<namespace>/deployments/<name>",
		"method":     "GET",
		"namespace":  "system",
		"status":     "500",
		"prometheus": "prometheus-core-1",
		"datacenter": "eu-west-1",
		"pod_name":   "abcdef-99999-defee",
	}
	ls := labels.FromMap(m)

	var v string

	ls.Range(func(l labels.Label) {
		b.Run(l.Name, func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				v = ls.Get(l.Name)
			}
		})
	})

	_ = v
}

func BenchmarkStringBytesEquals(b *testing.B) {
	randBytes := func(n int) ([]byte, []byte) {
		buf1 := make([]byte, n)
		if _, err := rand.Read(buf1); err != nil {
			b.Fatal(err)
		}
		buf2 := make([]byte, n)
		copy(buf2, buf1)

		return buf1, buf2
	}

	cases := []struct {
		name string
		f    func() ([]byte, []byte)
	}{
		{
			name: "equal",
			f: func() ([]byte, []byte) {
				return randBytes(60)
			},
		},
		{
			name: "1-flip-end",
			f: func() ([]byte, []byte) {
				b1, b2 := randBytes(60)
				b2[59] ^= 0xff
				return b1, b2
			},
		},
		{
			name: "1-flip-middle",
			f: func() ([]byte, []byte) {
				b1, b2 := randBytes(60)
				b2[29] ^= 0xff
				return b1, b2
			},
		},
		{
			name: "1-flip-start",
			f: func() ([]byte, []byte) {
				b1, b2 := randBytes(60)
				b2[0] ^= 0xff
				return b1, b2
			},
		},
		{
			name: "different-length",
			f: func() ([]byte, []byte) {
				b1, b2 := randBytes(60)
				return b1, b2[:59]
			},
		},
	}

	for _, c := range cases {
		b.Run(c.name+"-strings", func(b *testing.B) {
			ab, bb := c.f()
			as, bs := string(ab), string(bb)
			b.SetBytes(int64(len(as)))

			var r bool

			for i := 0; i < b.N; i++ {
				r = as == bs
			}
			_ = r
		})

		b.Run(c.name+"-bytes", func(b *testing.B) {
			ab, bb := c.f()
			b.SetBytes(int64(len(ab)))

			var r bool

			for i := 0; i < b.N; i++ {
				r = bytes.Equal(ab, bb)
			}
			_ = r
		})

		b.Run(c.name+"-bytes-length-check", func(b *testing.B) {
			ab, bb := c.f()
			b.SetBytes(int64(len(ab)))

			var r bool

			for i := 0; i < b.N; i++ {
				if len(ab) != len(bb) {
					continue
				}
				r = bytes.Equal(ab, bb)
			}
			_ = r
		})
	}
}

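bytes.Equal reports false immediately when the two slices differ in length, so the extra manual length check in the last variant mostly measures one additional branch. A standalone sketch (not part of the diff) of the two comparison styles being benchmarked:

package main

import (
	"bytes"
	"fmt"
)

func main() {
	a := []byte("abc")
	b := []byte("abd")
	fmt.Println(bytes.Equal(a, b))      // false
	fmt.Println(string(a) == string(b)) // false; the compiler can usually
	// perform this comparison without materializing the converted strings
}
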
@@ -72,6 +72,8 @@ func ChunkFromSamplesGeneric(s Samples) chunks.Meta {
			ca.Append(s.Get(i).T(), s.Get(i).V())
		case chunkenc.ValHistogram:
			ca.AppendHistogram(s.Get(i).T(), s.Get(i).H())
		case chunkenc.ValFloatHistogram:
			ca.AppendFloatHistogram(s.Get(i).T(), s.Get(i).FH())
		default:
			panic(fmt.Sprintf("unknown sample type %s", sampleType.String()))
		}

@@ -128,12 +130,18 @@ func PopulatedChunk(numSamples int, minTime int64) chunks.Meta {

// GenerateSamples starting at start and counting up numSamples.
func GenerateSamples(start, numSamples int) []Sample {
	samples := make([]Sample, 0, numSamples)
	for i := start; i < start+numSamples; i++ {
		samples = append(samples, sample{
	return generateSamples(start, numSamples, func(i int) Sample {
		return sample{
			t: int64(i),
			v: float64(i),
		})
	}
	})
}

func generateSamples(start, numSamples int, gen func(int) Sample) []Sample {
	samples := make([]Sample, 0, numSamples)
	for i := start; i < start+numSamples; i++ {
		samples = append(samples, gen(i))
	}
	return samples
}

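The refactor above funnels sample generation through a gen callback, presumably so other generators can reuse the same loop. A standalone sketch of the pattern with a simplified, exported Sample type (the real sample type in tsdbutil is unexported):

package main

import "fmt"

type Sample struct {
	T int64
	V float64
}

// generateSamples builds numSamples samples, starting at start, using gen.
func generateSamples(start, numSamples int, gen func(int) Sample) []Sample {
	samples := make([]Sample, 0, numSamples)
	for i := start; i < start+numSamples; i++ {
		samples = append(samples, gen(i))
	}
	return samples
}

func main() {
	fmt.Println(generateSamples(3, 2, func(i int) Sample {
		return Sample{T: int64(i), V: float64(i)}
	}))
	// Output: [{3 3} {4 4}]
}
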
110
tsdb/tsdbutil/histogram.go
Normal file

@@ -0,0 +1,110 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package tsdbutil

import (
	"math/rand"

	"github.com/prometheus/prometheus/model/histogram"
)

func GenerateTestHistograms(n int) (r []*histogram.Histogram) {
	for i := 0; i < n; i++ {
		h := GenerateTestHistogram(i)
		if i > 0 {
			h.CounterResetHint = histogram.NotCounterReset
		}
		r = append(r, h)
	}
	return r
}

// GenerateTestHistogram generates a test histogram; it is up to the caller to
// set any known counter reset hint.
func GenerateTestHistogram(i int) *histogram.Histogram {
	return &histogram.Histogram{
		Count:         10 + uint64(i*8),
		ZeroCount:     2 + uint64(i),
		ZeroThreshold: 0.001,
		Sum:           18.4 * float64(i+1),
		Schema:        1,
		PositiveSpans: []histogram.Span{
			{Offset: 0, Length: 2},
			{Offset: 1, Length: 2},
		},
		PositiveBuckets: []int64{int64(i + 1), 1, -1, 0},
		NegativeSpans: []histogram.Span{
			{Offset: 0, Length: 2},
			{Offset: 1, Length: 2},
		},
		NegativeBuckets: []int64{int64(i + 1), 1, -1, 0},
	}
}

func GenerateTestGaugeHistograms(n int) (r []*histogram.Histogram) {
	for x := 0; x < n; x++ {
		r = append(r, GenerateTestGaugeHistogram(rand.Intn(n)))
	}
	return r
}

func GenerateTestGaugeHistogram(i int) *histogram.Histogram {
	h := GenerateTestHistogram(i)
	h.CounterResetHint = histogram.GaugeType
	return h
}

func GenerateTestFloatHistograms(n int) (r []*histogram.FloatHistogram) {
	for i := 0; i < n; i++ {
		h := GenerateTestFloatHistogram(i)
		if i > 0 {
			h.CounterResetHint = histogram.NotCounterReset
		}
		r = append(r, h)
	}
	return r
}

// GenerateTestFloatHistogram generates a test float histogram; it is up to the
// caller to set any known counter reset hint.
func GenerateTestFloatHistogram(i int) *histogram.FloatHistogram {
	return &histogram.FloatHistogram{
		Count:         10 + float64(i*8),
		ZeroCount:     2 + float64(i),
		ZeroThreshold: 0.001,
		Sum:           18.4 * float64(i+1),
		Schema:        1,
		PositiveSpans: []histogram.Span{
			{Offset: 0, Length: 2},
			{Offset: 1, Length: 2},
		},
		PositiveBuckets: []float64{float64(i + 1), float64(i + 2), float64(i + 1), float64(i + 1)},
		NegativeSpans: []histogram.Span{
			{Offset: 0, Length: 2},
			{Offset: 1, Length: 2},
		},
		NegativeBuckets: []float64{float64(i + 1), float64(i + 2), float64(i + 1), float64(i + 1)},
	}
}

func GenerateTestGaugeFloatHistograms(n int) (r []*histogram.FloatHistogram) {
	for x := 0; x < n; x++ {
		r = append(r, GenerateTestGaugeFloatHistogram(rand.Intn(n)))
	}
	return r
}

func GenerateTestGaugeFloatHistogram(i int) *histogram.FloatHistogram {
	h := GenerateTestFloatHistogram(i)
	h.CounterResetHint = histogram.GaugeType
	return h
}

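The integer Histogram stores its bucket counts delta-encoded: each entry in PositiveBuckets/NegativeBuckets is the difference from the previous bucket's absolute count, while the FloatHistogram stores absolute counts directly. A standalone sketch (not part of the diff) decoding the deltas used above:

package main

import "fmt"

// decodeDeltas turns delta-encoded bucket counts into absolute counts.
func decodeDeltas(deltas []int64) []uint64 {
	counts := make([]uint64, 0, len(deltas))
	var cur int64
	for _, d := range deltas {
		cur += d
		counts = append(counts, uint64(cur))
	}
	return counts
}

func main() {
	// GenerateTestHistogram(0) uses deltas {1, 1, -1, 0}, i.e. absolute
	// counts {1, 2, 1, 1} -- exactly what the float variant stores directly.
	fmt.Println(decodeDeltas([]int64{1, 1, -1, 0})) // [1 2 1 1]
}
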
@@ -199,9 +199,10 @@ type wlMetrics struct {
	truncateTotal  prometheus.Counter
	currentSegment prometheus.Gauge
	writesFailed   prometheus.Counter
	walFileSize    prometheus.GaugeFunc
}

func newWLMetrics(r prometheus.Registerer) *wlMetrics {
func newWLMetrics(w *WL, r prometheus.Registerer) *wlMetrics {
	m := &wlMetrics{}

	m.fsyncDuration = prometheus.NewSummary(prometheus.SummaryOpts{

@@ -233,6 +234,17 @@ func newWLMetrics(r prometheus.Registerer) *wlMetrics {
		Name: "writes_failed_total",
		Help: "Total number of write log writes that failed.",
	})
	m.walFileSize = prometheus.NewGaugeFunc(prometheus.GaugeOpts{
		Name: "storage_size_bytes",
		Help: "Size of the write log directory.",
	}, func() float64 {
		val, err := w.Size()
		if err != nil {
			level.Error(w.logger).Log("msg", "Failed to calculate size of \"wal\" dir",
				"err", err.Error())
		}
		return float64(val)
	})

	if r != nil {
		r.MustRegister(

@@ -243,6 +255,7 @@ func newWLMetrics(r prometheus.Registerer) *wlMetrics {
			m.truncateTotal,
			m.currentSegment,
			m.writesFailed,
			m.walFileSize,
		)
	}

@@ -279,7 +292,7 @@ func NewSize(logger log.Logger, reg prometheus.Registerer, dir string, segmentSize
	if filepath.Base(dir) == WblDirName {
		prefix = "prometheus_tsdb_out_of_order_wbl_"
	}
	w.metrics = newWLMetrics(prometheus.WrapRegistererWithPrefix(prefix, reg))
	w.metrics = newWLMetrics(w, prometheus.WrapRegistererWithPrefix(prefix, reg))

	_, last, err := Segments(w.Dir())
	if err != nil {

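prometheus.NewGaugeFunc registers a gauge whose value is computed by a callback on every collection rather than stored, which is why newWLMetrics now needs the *WL handle. A standalone sketch of the same pattern (dirSize here is a stand-in for (*WL).Size; in a real binary the registry would be exposed via promhttp):

package main

import (
	"os"
	"path/filepath"

	"github.com/prometheus/client_golang/prometheus"
)

// dirSize sums the sizes of all regular files under dir.
func dirSize(dir string) (int64, error) {
	var total int64
	err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(prometheus.NewGaugeFunc(prometheus.GaugeOpts{
		Name: "example_storage_size_bytes",
		Help: "Size of a directory, computed at scrape time.",
	}, func() float64 {
		n, err := dirSize(os.TempDir())
		if err != nil {
			return 0 // the real code logs the error instead
		}
		return float64(n)
	}))
}
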
@@ -16,6 +16,7 @@ package strutil

import (
	"fmt"
	"net/url"
	"strings"

	"github.com/grafana/regexp"
)

@@ -38,6 +39,26 @@ func GraphLinkForExpression(expr string) string {

// SanitizeLabelName replaces anything that doesn't match
// client_label.LabelNameRE with an underscore.
// Note: this does not handle all Prometheus label name restrictions (such as
// not starting with a digit 0-9), and hence should only be used if the label
// name is prefixed with a known valid string.
func SanitizeLabelName(name string) string {
	return invalidLabelCharRE.ReplaceAllString(name, "_")
}

// SanitizeFullLabelName replaces any invalid character with an underscore, and
// if given an empty string, returns a string containing a single underscore.
func SanitizeFullLabelName(name string) string {
	if len(name) == 0 {
		return "_"
	}
	var validSb strings.Builder
	for i, b := range name {
		if !((b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_' || (b >= '0' && b <= '9' && i > 0)) {
			validSb.WriteRune('_')
		} else {
			validSb.WriteRune(b)
		}
	}
	return validSb.String()
}

@@ -59,3 +59,21 @@ func TestSanitizeLabelName(t *testing.T) {
	expected = "barClient_LABEL____"
	require.Equal(t, expected, actual, "SanitizeLabelName failed for label (%s)", expected)
}

func TestSanitizeFullLabelName(t *testing.T) {
	actual := SanitizeFullLabelName("fooClientLABEL")
	expected := "fooClientLABEL"
	require.Equal(t, expected, actual, "SanitizeFullLabelName failed for label (%s)", expected)

	actual = SanitizeFullLabelName("barClient.LABEL$$##")
	expected = "barClient_LABEL____"
	require.Equal(t, expected, actual, "SanitizeFullLabelName failed for label (%s)", expected)

	actual = SanitizeFullLabelName("0zerothClient1LABEL")
	expected = "_zerothClient1LABEL"
	require.Equal(t, expected, actual, "SanitizeFullLabelName failed for label (%s)", expected)

	actual = SanitizeFullLabelName("")
	expected = "_"
	require.Equal(t, expected, actual, "SanitizeFullLabelName failed for the empty label")
}

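One subtlety in SanitizeFullLabelName: ranging over a string yields runes, so a multi-byte character such as 'è' collapses to a single underscore rather than one per UTF-8 byte, and i is a byte offset, which is only ever 0 for the first character, so the leading-digit rule works as intended. A standalone sketch (mirroring the function above rather than importing it):

package main

import (
	"fmt"
	"strings"
)

// sanitizeFullLabelName mirrors the strutil function added in this diff.
func sanitizeFullLabelName(name string) string {
	if len(name) == 0 {
		return "_"
	}
	var validSb strings.Builder
	for i, b := range name {
		if !((b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_' || (b >= '0' && b <= '9' && i > 0)) {
			validSb.WriteRune('_')
		} else {
			validSb.WriteRune(b)
		}
	}
	return validSb.String()
}

func main() {
	fmt.Println(sanitizeFullLabelName("0zerothClient1LABEL")) // _zerothClient1LABEL
	fmt.Println(sanitizeFullLabelName("crème"))               // cr_me: 'è' is one rune, one underscore
	fmt.Println(sanitizeFullLabelName(""))                    // _
}
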
@@ -421,7 +421,10 @@ func (api *API) query(r *http.Request) (result apiFuncResult) {
		defer cancel()
	}

	opts := extractQueryOpts(r)
	opts, err := extractQueryOpts(r)
	if err != nil {
		return apiFuncResult{nil, &apiError{errorBadData, err}, nil, nil}
	}
	qry, err := api.QueryEngine.NewInstantQuery(api.Queryable, opts, r.FormValue("query"), ts)
	if err != nil {
		return invalidParamError(err, "query")

@@ -466,10 +469,18 @@ func (api *API) formatQuery(r *http.Request) (result apiFuncResult) {
	return apiFuncResult{expr.Pretty(0), nil, nil, nil}
}

func extractQueryOpts(r *http.Request) *promql.QueryOpts {
	return &promql.QueryOpts{
func extractQueryOpts(r *http.Request) (*promql.QueryOpts, error) {
	opts := &promql.QueryOpts{
		EnablePerStepStats: r.FormValue("stats") == "all",
	}
	if strDuration := r.FormValue("lookback_delta"); strDuration != "" {
		duration, err := parseDuration(strDuration)
		if err != nil {
			return nil, fmt.Errorf("error parsing lookback delta duration: %w", err)
		}
		opts.LookbackDelta = duration
	}
	return opts, nil
}

func (api *API) queryRange(r *http.Request) (result apiFuncResult) {

@@ -513,7 +524,10 @@ func (api *API) queryRange(r *http.Request) (result apiFuncResult) {
		defer cancel()
	}

	opts := extractQueryOpts(r)
	opts, err := extractQueryOpts(r)
	if err != nil {
		return apiFuncResult{nil, &apiError{errorBadData, err}, nil, nil}
	}
	qry, err := api.QueryEngine.NewRangeQuery(api.Queryable, opts, r.FormValue("query"), start, end, step)
	if err != nil {
		return apiFuncResult{nil, &apiError{errorBadData, err}, nil, nil}

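With extractQueryOpts now parsing it, the instant- and range-query endpoints accept an optional lookback_delta form parameter alongside stats. A minimal client-side sketch (assumes a Prometheus server listening on localhost:9090):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("query", "up")
	q.Set("lookback_delta", "30s") // parsed by extractQueryOpts; an invalid value yields a bad_data error
	resp, err := http.Get("http://localhost:9090/api/v1/query?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
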
@@ -3352,3 +3352,66 @@ func (t *testCodec) CanEncode(_ *Response) bool {
func (t *testCodec) Encode(_ *Response) ([]byte, error) {
	return []byte(fmt.Sprintf("response from %v codec", t.contentType)), nil
}

func TestExtractQueryOpts(t *testing.T) {
	tests := []struct {
		name   string
		form   url.Values
		expect *promql.QueryOpts
		err    error
	}{
		{
			name: "with stats all",
			form: url.Values{
				"stats": []string{"all"},
			},
			expect: &promql.QueryOpts{
				EnablePerStepStats: true,
			},
			err: nil,
		},
		{
			name: "with stats none",
			form: url.Values{
				"stats": []string{"none"},
			},
			expect: &promql.QueryOpts{
				EnablePerStepStats: false,
			},
			err: nil,
		},
		{
			name: "with lookback delta",
			form: url.Values{
				"stats":          []string{"all"},
				"lookback_delta": []string{"30s"},
			},
			expect: &promql.QueryOpts{
				EnablePerStepStats: true,
				LookbackDelta:      30 * time.Second,
			},
			err: nil,
		},
		{
			name: "with invalid lookback delta",
			form: url.Values{
				"lookback_delta": []string{"invalid"},
			},
			expect: nil,
			err:    errors.New(`error parsing lookback delta duration: cannot parse "invalid" to a valid duration`),
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			req := &http.Request{Form: test.form}
			opts, err := extractQueryOpts(req)
			require.Equal(t, test.expect, opts)
			if test.err == nil {
				require.NoError(t, err)
			} else {
				require.Equal(t, test.err.Error(), err.Error())
			}
		})
	}
}

@@ -1,6 +1,6 @@
{
  "name": "@prometheus-io/codemirror-promql",
  "version": "0.41.0",
  "version": "0.42.0",
  "description": "a CodeMirror mode for the PromQL language",
  "types": "dist/esm/index.d.ts",
  "module": "dist/esm/index.js",

@@ -29,7 +29,7 @@
  },
  "homepage": "https://github.com/prometheus/prometheus/blob/main/web/ui/module/codemirror-promql/README.md",
  "dependencies": {
    "@prometheus-io/lezer-promql": "^0.41.0",
    "@prometheus-io/lezer-promql": "^0.42.0",
    "lru-cache": "^6.0.0"
  },
  "devDependencies": {

@@ -231,7 +231,19 @@ describe('analyzeCompletion test', () => {
			title: 'starting to autocomplete labelName in aggregate modifier',
			expr: 'sum by ()',
			pos: 8, // cursor is between the brackets
			expectedContext: [{ kind: ContextKind.LabelName }],
			expectedContext: [{ kind: ContextKind.LabelName, metricName: '' }],
		},
		{
			title: 'starting to autocomplete labelName in aggregate modifier with metric name',
			expr: 'sum(up) by ()',
			pos: 12, // cursor is between ()
			expectedContext: [{ kind: ContextKind.LabelName, metricName: 'up' }],
		},
		{
			title: 'starting to autocomplete labelName in aggregate modifier with metric name in front',
			expr: 'sum by ()(up)',
			pos: 8, // cursor is between ()
			expectedContext: [{ kind: ContextKind.LabelName, metricName: 'up' }],
		},
		{
			title: 'continue to autocomplete labelName in aggregate modifier',

@@ -243,7 +255,7 @@ describe('analyzeCompletion test', () => {
			title: 'autocomplete labelName in a list',
			expr: 'sum by (myLabel1,)',
			pos: 17, // cursor is inside the brackets, after 'myLabel1,'
			expectedContext: [{ kind: ContextKind.LabelName }],
			expectedContext: [{ kind: ContextKind.LabelName, metricName: '' }],
		},
		{
			title: 'autocomplete labelName in a list 2',

@@ -110,6 +110,26 @@ export interface Context {
	matchers?: Matcher[];
}

function getMetricNameInGroupBy(tree: SyntaxNode, state: EditorState): string {
	// There should be an AggregateExpr as parent of the GroupingLabels.
	// Then we should find the VectorSelector child to be able to find the metric name.
	const currentNode: SyntaxNode | null = walkBackward(tree, AggregateExpr);
	if (!currentNode) {
		return '';
	}
	let metricName = '';
	currentNode.cursor().iterate((node) => {
		// Continue until we find the VectorSelector, then look up the metric name.
		if (node.type.id === VectorSelector) {
			metricName = getMetricNameInVectorSelector(node.node, state);
			if (metricName) {
				return false;
			}
		}
	});
	return metricName;
}

function getMetricNameInVectorSelector(tree: SyntaxNode, state: EditorState): string {
	// Find if there is a defined metric name. Should be used to autocomplete a labelValue or a labelName.
	// First find the parent "VectorSelector" to be able to then find the subChild "Identifier" if it exists.

@@ -344,9 +364,9 @@ export function analyzeCompletion(state: EditorState, node: SyntaxNode): Context[] {
			break;
		case GroupingLabels:
			// In this case we are in the given situation:
			// sum by ()
			// So we have to autocomplete any labelName
			result.push({ kind: ContextKind.LabelName });
			// sum by () or sum(metric_name) by ()
			// so we have to autocomplete either any labelName or only the labelNames associated with the metric.
			result.push({ kind: ContextKind.LabelName, metricName: getMetricNameInGroupBy(node, state) });
			break;
		case LabelMatchers:
			// In that case we are in the given situation:

@@ -1,6 +1,6 @@
{
  "name": "@prometheus-io/lezer-promql",
  "version": "0.41.0",
  "version": "0.42.0",
  "description": "lezer-based PromQL grammar",
  "main": "dist/index.cjs",
  "type": "module",

14
web/ui/package-lock.json
generated

@@ -28,10 +28,10 @@
    },
    "module/codemirror-promql": {
      "name": "@prometheus-io/codemirror-promql",
      "version": "0.41.0",
      "version": "0.42.0",
      "license": "Apache-2.0",
      "dependencies": {
        "@prometheus-io/lezer-promql": "^0.41.0",
        "@prometheus-io/lezer-promql": "^0.42.0",
        "lru-cache": "^6.0.0"
      },
      "devDependencies": {

@@ -61,7 +61,7 @@
    },
    "module/lezer-promql": {
      "name": "@prometheus-io/lezer-promql",
      "version": "0.41.0",
      "version": "0.42.0",
      "license": "Apache-2.0",
      "devDependencies": {
        "@lezer/generator": "^1.2.2",

@@ -20763,7 +20763,7 @@
    },
    "react-app": {
      "name": "@prometheus-io/app",
      "version": "0.41.0",
      "version": "0.42.0",
      "dependencies": {
        "@codemirror/autocomplete": "^6.4.0",
        "@codemirror/commands": "^6.2.0",

@@ -20781,7 +20781,7 @@
        "@lezer/lr": "^1.3.1",
        "@nexucis/fuzzy": "^0.4.1",
        "@nexucis/kvsearch": "^0.8.1",
        "@prometheus-io/codemirror-promql": "^0.41.0",
        "@prometheus-io/codemirror-promql": "^0.42.0",
        "bootstrap": "^4.6.2",
        "css.escape": "^1.5.1",
        "downshift": "^7.2.0",

@@ -23417,7 +23417,7 @@
        "@lezer/lr": "^1.3.1",
        "@nexucis/fuzzy": "^0.4.1",
        "@nexucis/kvsearch": "^0.8.1",
        "@prometheus-io/codemirror-promql": "^0.41.0",
        "@prometheus-io/codemirror-promql": "^0.42.0",
        "@testing-library/react-hooks": "^7.0.2",
        "@types/enzyme": "^3.10.12",
        "@types/flot": "0.0.32",

@@ -23468,7 +23468,7 @@
        "@lezer/common": "^1.0.2",
        "@lezer/highlight": "^1.1.3",
        "@lezer/lr": "^1.3.1",
        "@prometheus-io/lezer-promql": "^0.41.0",
        "@prometheus-io/lezer-promql": "^0.42.0",
        "@types/lru-cache": "^5.1.1",
        "isomorphic-fetch": "^3.0.0",
        "lru-cache": "^6.0.0",

@@ -1,6 +1,6 @@
{
  "name": "@prometheus-io/app",
  "version": "0.41.0",
  "version": "0.42.0",
  "private": true,
  "dependencies": {
    "@codemirror/autocomplete": "^6.4.0",

@@ -19,7 +19,7 @@
    "@lezer/common": "^1.0.2",
    "@nexucis/fuzzy": "^0.4.1",
    "@nexucis/kvsearch": "^0.8.1",
    "@prometheus-io/codemirror-promql": "^0.41.0",
    "@prometheus-io/codemirror-promql": "^0.42.0",
    "bootstrap": "^4.6.2",
    "css.escape": "^1.5.1",
    "downshift": "^7.2.0",

@@ -6,6 +6,7 @@ import { BrowserRouter as Router, Redirect, Route, Switch } from 'react-router-dom';
import { PathPrefixContext } from './contexts/PathPrefixContext';
import { ThemeContext, themeName, themeSetting } from './contexts/ThemeContext';
import { ReadyContext } from './contexts/ReadyContext';
import { AnimateLogoContext } from './contexts/AnimateLogoContext';
import { useLocalStorage } from './hooks/useLocalStorage';
import useMedia from './hooks/useMedia';
import {

@@ -60,6 +61,7 @@ const App: FC<AppProps> = ({ consolesLink, agentMode, ready }) => {
  const [userTheme, setUserTheme] = useLocalStorage<themeSetting>(themeLocalStorageKey, 'auto');
  const browserHasThemes = useMedia('(prefers-color-scheme)');
  const browserWantsDarkTheme = useMedia('(prefers-color-scheme: dark)');
  const [animateLogo, setAnimateLogo] = useLocalStorage<boolean>('animateLogo', false);

  let theme: themeName;
  if (userTheme !== 'auto') {

@@ -76,46 +78,48 @@ const App: FC<AppProps> = ({ consolesLink, agentMode, ready }) => {
    <PathPrefixContext.Provider value={basePath}>
      <ReadyContext.Provider value={ready}>
        <Router basename={basePath}>
          <Navigation consolesLink={consolesLink} agentMode={agentMode} />
          <Container fluid style={{ paddingTop: 70 }}>
            <Switch>
              <Redirect exact from="/" to={agentMode ? '/agent' : '/graph'} />
              {/*
          <AnimateLogoContext.Provider value={animateLogo}>
            <Navigation consolesLink={consolesLink} agentMode={agentMode} animateLogo={animateLogo} />
            <Container fluid style={{ paddingTop: 70 }}>
              <Switch>
                <Redirect exact from="/" to={agentMode ? '/agent' : '/graph'} />
                {/*
                NOTE: Any route added here needs to also be added to the list of
                React-handled router paths ("reactRouterPaths") in /web/web.go.
              */}
              <Route path="/agent">
                <AgentPage />
              </Route>
              <Route path="/graph">
                <PanelListPage />
              </Route>
              <Route path="/alerts">
                <AlertsPage />
              </Route>
              <Route path="/config">
                <ConfigPage />
              </Route>
              <Route path="/flags">
                <FlagsPage />
              </Route>
              <Route path="/rules">
                <RulesPage />
              </Route>
              <Route path="/service-discovery">
                <ServiceDiscoveryPage />
              </Route>
              <Route path="/status">
                <StatusPage agentMode={agentMode} />
              </Route>
              <Route path="/tsdb-status">
                <TSDBStatusPage />
              </Route>
              <Route path="/targets">
                <TargetsPage />
              </Route>
            </Switch>
          </Container>
                <Route path="/agent">
                  <AgentPage />
                </Route>
                <Route path="/graph">
                  <PanelListPage />
                </Route>
                <Route path="/alerts">
                  <AlertsPage />
                </Route>
                <Route path="/config">
                  <ConfigPage />
                </Route>
                <Route path="/flags">
                  <FlagsPage />
                </Route>
                <Route path="/rules">
                  <RulesPage />
                </Route>
                <Route path="/service-discovery">
                  <ServiceDiscoveryPage />
                </Route>
                <Route path="/status">
                  <StatusPage agentMode={agentMode} setAnimateLogo={setAnimateLogo} />
                </Route>
                <Route path="/tsdb-status">
                  <TSDBStatusPage />
                </Route>
                <Route path="/targets">
                  <TargetsPage />
                </Route>
              </Switch>
            </Container>
          </AnimateLogoContext.Provider>
        </Router>
      </ReadyContext.Provider>
    </PathPrefixContext.Provider>

@@ -13,21 +13,22 @@ import {
  DropdownItem,
} from 'reactstrap';
import { ThemeToggle } from './Theme';
import logo from './images/prometheus_logo_grey.svg';
import { ReactComponent as PromLogo } from './images/prometheus_logo_grey.svg';

interface NavbarProps {
  consolesLink: string | null;
  agentMode: boolean;
  animateLogo?: boolean;
}

const Navigation: FC<NavbarProps> = ({ consolesLink, agentMode }) => {
const Navigation: FC<NavbarProps> = ({ consolesLink, agentMode, animateLogo }) => {
  const [isOpen, setIsOpen] = useState(false);
  const toggle = () => setIsOpen(!isOpen);
  return (
    <Navbar className="mb-3" dark color="dark" expand="md" fixed="top">
      <NavbarToggler onClick={toggle} className="mr-2" />
      <Link className="pt-0 navbar-brand" to={agentMode ? '/agent' : '/graph'}>
        <img src={logo} className="d-inline-block align-top" alt="Prometheus logo" title="Prometheus" />
        <PromLogo className={`d-inline-block align-top${animateLogo ? ' animate' : ''}`} title="Prometheus" />
        Prometheus{agentMode && ' Agent'}
      </Link>
      <Collapse isOpen={isOpen} navbar style={{ justifyContent: 'space-between' }}>

3
web/ui/react-app/src/contexts/AnimateLogoContext.tsx
Normal file

@@ -0,0 +1,3 @@
import { createContext } from 'react';

export const AnimateLogoContext = createContext<boolean>(false);

@@ -1,4 +1,4 @@
import React, { Fragment, FC } from 'react';
import React, { Fragment, FC, useState, useEffect } from 'react';
import { Table } from 'reactstrap';
import { withStatusIndicator } from '../../components/withStatusIndicator';
import { useFetch } from '../../hooks/useFetch';

@@ -97,7 +97,46 @@ const StatusResult: FC<{ fetchPath: string; title: string }> = ({ fetchPath, title }) => {
  );
};

const Status: FC<{ agentMode: boolean }> = ({ agentMode }) => {
interface StatusProps {
  agentMode?: boolean;
  setAnimateLogo?: (animateLogo: boolean) => void;
}

const Status: FC<StatusProps> = ({ agentMode, setAnimateLogo }) => {
  /*   _
   *  /' \
   * |    |
   *  \__/  */

  const [inputText, setInputText] = useState('');

  useEffect(() => {
    const handleKeyPress = (event: KeyboardEvent) => {
      const keyPressed = event.key.toUpperCase();
      setInputText((prevInputText) => {
        const newInputText = prevInputText.slice(-3) + String.fromCharCode(((keyPressed.charCodeAt(0) - 64) % 26) + 65);
        return newInputText;
      });
    };

    document.addEventListener('keypress', handleKeyPress);

    return () => {
      document.removeEventListener('keypress', handleKeyPress);
    };
  }, []);

  useEffect(() => {
    if (setAnimateLogo && inputText !== '') {
      setAnimateLogo(inputText.toUpperCase() === 'QSPN');
    }
  }, [inputText, setAnimateLogo]);

  /*   _
   *  /' \
   * |    |
   *  \__/  */

  const pathPrefix = usePathPrefix();
  const path = `${pathPrefix}/${API_PATH}`;

@@ -104,9 +104,14 @@ button.execute-btn {
  margin-bottom: 20px;
}

.navbar-brand img {
.navbar-brand svg {
  padding-right: 1rem;
  height: 1.9rem;
  width: 2.9rem;
}

.navbar-brand svg.animate path {
  animation: flamecolor 4s ease-in 1 forwards, flame 1s ease-in infinite;
}

.navbar-brand {

@@ -357,3 +362,16 @@ input[type='checkbox']:checked + label {
  display: block;
  font-family: monospace;
}

@keyframes flamecolor {
  100% {
    fill: #e95224;
  }
}
@keyframes flame {
  0% {
    d: path("M 56.667,0.667 C 25.372,0.667 0,26.036 0,57.332 c 0,31.295 25.372,56.666 56.667,56.666 31.295,0 56.666,-25.371 56.666,-56.666 0,-31.296 -25.372,-56.665 -56.666,-56.665 z m 0,106.055 c -8.904,0 -16.123,-5.948 -16.123,-13.283 H 72.79 c 0,7.334 -7.219,13.283 -16.123,13.283 z M 83.297,89.04 H 30.034 V 79.382 H 83.298 V 89.04 Z M 83.106,74.411 H 30.186 C 30.01,74.208 29.83,74.008 29.66,73.802 24.208,67.182 22.924,63.726 21.677,60.204 c -0.021,-0.116 6.611,1.355 11.314,2.413 0,0 2.42,0.56 5.958,1.205 -3.397,-3.982 -5.414,-9.044 -5.414,-14.218 0,-11.359 8.712,-21.285 5.569,-29.308 3.059,0.249 6.331,6.456 6.552,16.161 3.252,-4.494 4.613,-12.701 4.613,-17.733 0,-5.21 3.433,-11.262 6.867,-11.469 -3.061,5.045 0.793,9.37 4.219,20.099 1.285,4.03 1.121,10.812 2.113,15.113 C 63.797,33.534 65.333,20.5 71,16 c -2.5,5.667 0.37,12.758 2.333,16.167 3.167,5.5 5.087,9.667 5.087,17.548 0,5.284 -1.951,10.259 -5.242,14.148 3.742,-0.702 6.326,-1.335 6.326,-1.335 l 12.152,-2.371 c 10e-4,-10e-4 -1.765,7.261 -8.55,14.254 z");
  }
  50% {
    d: path("M 56.667,0.667 C 25.372,0.667 0,26.036 0,57.332 c 0,31.295 25.372,56.666 56.667,56.666 31.295,0 56.666,-25.371 56.666,-56.666 0,-31.296 -25.372,-56.665 -56.666,-56.665 z m 0,106.055 c -8.904,0 -16.123,-5.948 -16.123,-13.283 H 72.79 c 0,7.334 -7.219,13.283 -16.123,13.283 z M 83.297,89.04 H 30.034 V 79.382 H 83.298 V 89.04 Z M 83.106,74.411 H 30.186 C 30.01,74.208 29.83,74.008 29.66,73.802 24.208,67.182 22.924,63.726 21.677,60.204 c -0.021,-0.116 6.611,1.355 11.314,2.413 0,0 2.42,0.56 5.958,1.205 -3.397,-3.982 -5.414,-9.044 -5.414,-14.218 0,-11.359 1.640181,-23.047128 7.294982,-29.291475 C 39.391377,29.509803 45.435,26.752 45.656,36.457 c 3.252,-4.494 7.100362,-8.366957 7.100362,-13.398957 0,-5.21 0.137393,-8.650513 -3.479689,-15.0672265 7.834063,1.6180944 8.448052,4.2381285 11.874052,14.9671285 1.285,4.03 1.325275,15.208055 2.317275,19.509055 0.329,-8.933 6.441001,-14.01461 5.163951,-21.391003 5.755224,5.771457 4.934508,10.495521 7.126537,14.288218 3.167,5.5 2.382625,7.496239 2.382625,15.377239 0,5.284 -1.672113,9.232546 -4.963113,13.121546 3.742,-0.702 6.326,-1.335 6.326,-1.335 l 12.152,-2.371 c 10e-4,-10e-4 -1.765,7.261 -8.55,14.254 z");
  }
  100% {
    d: path("M 56.667,0.667 C 25.372,0.667 0,26.036 0,57.332 c 0,31.295 25.372,56.666 56.667,56.666 31.295,0 56.666,-25.371 56.666,-56.666 0,-31.296 -25.372,-56.665 -56.666,-56.665 z m 0,106.055 c -8.904,0 -16.123,-5.948 -16.123,-13.283 H 72.79 c 0,7.334 -7.219,13.283 -16.123,13.283 z M 83.297,89.04 H 30.034 V 79.382 H 83.298 V 89.04 Z M 83.106,74.411 H 30.186 C 30.01,74.208 29.83,74.008 29.66,73.802 24.208,67.182 22.924,63.726 21.677,60.204 c -0.021,-0.116 6.611,1.355 11.314,2.413 0,0 2.42,0.56 5.958,1.205 -3.397,-3.982 -5.414,-9.044 -5.414,-14.218 0,-11.359 8.712,-21.285 5.569,-29.308 3.059,0.249 6.331,6.456 6.552,16.161 3.252,-4.494 4.613,-12.701 4.613,-17.733 0,-5.21 3.433,-11.262 6.867,-11.469 -3.061,5.045 0.793,9.37 4.219,20.099 1.285,4.03 1.121,10.812 2.113,15.113 C 63.797,33.534 65.333,20.5 71,16 c -2.5,5.667 0.37,12.758 2.333,16.167 3.167,5.5 5.087,9.667 5.087,17.548 0,5.284 -1.951,10.259 -5.242,14.148 3.742,-0.702 6.326,-1.335 6.326,-1.335 l 12.152,-2.371 c 10e-4,-10e-4 -1.765,7.261 -8.55,14.254 z");
  }
}