Mirror of https://github.com/prometheus/prometheus.git, synced 2025-03-05 20:59:13 -08:00

Merge pull request #355 from grafana/sync-prom-native-hist

Sync with upstream for native histograms

This commit is contained in commit 14ea52695c.
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 26 lines removed)

@@ -1,26 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project.
-title: ''
-labels: ''
-assignees: ''
----
-
-<!--
-
-	Please do *NOT* ask support questions in Github issues.
-
-	If your issue is not a feature request or bug report use our
-	community support.
-
-		https://prometheus.io/community/
-
-	There is also commercial support available.
-
-		https://prometheus.io/support-training/
-
--->
-
-## Proposal
-**Use case. Why is this important?**
-*“Nice to have” is not a good use case. :)*
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 23 lines)

@@ -0,0 +1,23 @@
+---
+name: Feature request
+description: Suggest an idea for this project.
+body:
+  - type: markdown
+    attributes:
+      value: >-
+        Please do *NOT* ask support questions in Github issues.
+
+        If your issue is not a feature request or bug report use
+        our [community support](https://prometheus.io/community/).
+
+        There is also [commercial
+        support](https://prometheus.io/support-training/) available.
+  - type: textarea
+    attributes:
+      label: Proposal
+      description: Use case. Why is this important?
+      placeholder: “Nice to have” is not a good use case. :)
+    validations:
+      required: true
CHANGELOG.md (30 lines added)

@@ -1,5 +1,35 @@
 # Changelog
 
+## 2.40.2 / 2022-11-16
+
+* [BUGFIX] UI: Fix black-on-black metric name color in dark mode. #11572
+
+## 2.40.1 / 2022-11-09
+
+* [BUGFIX] TSDB: Fix alignment for atomic int64 for 32 bit architecture. #11547
+* [BUGFIX] Scrape: Fix accept headers. #11552
+
+## 2.40.0 / 2022-11-08
+
+This release introduces an experimental, native way of representing and storing histograms.
+
+It can be enabled in Prometheus via `--enable-feature=native-histograms` to accept native histograms.
+Enabling native histograms will also switch the preferred exposition format to protobuf.
+
+To instrument your application with native histograms, use the `main` branch of `client_golang` (this will change for the final release when v1.14.0 of client_golang is out), and set the `NativeHistogramBucketFactor` in your `HistogramOpts` (`1.1` is a good starting point).
+Your existing histograms won't switch to native histograms until `NativeHistogramBucketFactor` is set.
+
+* [FEATURE] Add **experimental** support for native histograms. Enable with the flag `--enable-feature=native-histograms`. #11447
+* [FEATURE] SD: Add service discovery for OVHcloud. #10802
+* [ENHANCEMENT] Kubernetes SD: Use protobuf encoding. #11353
+* [ENHANCEMENT] TSDB: Use golang.org/x/exp/slices for improved sorting speed. #11054 #11318 #11380
+* [ENHANCEMENT] Consul SD: Add enterprise admin partitions. Adds `__meta_consul_partition` label. Adds `partition` config in `consul_sd_config`. #11482
+* [BUGFIX] API: Fix API error codes for `/api/v1/labels` and `/api/v1/series`. #11356
+
+## 2.39.2 / 2022-11-09
+
+* [BUGFIX] TSDB: Fix alignment for atomic int64 for 32 bit architecture. #11547
+
 ## 2.39.1 / 2022-10-07
 
 * [BUGFIX] Rules: Fix notifier relabel changing the labels on active alerts. #11427
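The native-histogram scheme described in the 2.40.0 notes stores observations in exponentially growing buckets; `NativeHistogramBucketFactor: 1.1` means each bucket's upper bound is 1.1 times the previous one. A minimal, self-contained sketch of that bucketing rule (the function name and formula here are illustrative, not the `client_golang` API):

```go
package main

import (
	"fmt"
	"math"
)

// nativeBucket returns the index i of the exponential bucket
// (factor^(i-1), factor^i] that contains v (for v > 0). Native
// histograms use buckets of this shape; the client library picks the
// actual schema for you once NativeHistogramBucketFactor is set.
func nativeBucket(v, factor float64) int {
	return int(math.Ceil(math.Log(v) / math.Log(factor)))
}

func main() {
	for _, v := range []float64{0.5, 1.0, 2.0, 10.0} {
		fmt.Printf("value %g -> bucket %d\n", v, nativeBucket(v, 1.1))
	}
}
```

With factor 1.1, neighboring bucket bounds differ by 10%, so any observation is within roughly 5% of its bucket's geometric midpoint, which is why the changelog suggests 1.1 as a starting point.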
README.md

@@ -7,7 +7,7 @@ examples and guides.</p>
 
 <div align="center">
 
-[][circleci]
+[](https://github.com/prometheus/prometheus/actions/workflows/ci.yml)
 [][quay]
 [][hub]
 [](https://goreportcard.com/report/github.com/prometheus/prometheus)

@@ -178,7 +178,6 @@ For more information on building, running, and developing on the React-based UI,
 ## More information
 
 * Godoc documentation is available via [pkg.go.dev](https://pkg.go.dev/github.com/prometheus/prometheus). Due to peculiarities of Go Modules, v2.x.y will be displayed as v0.x.y.
-* You will find a CircleCI configuration in [`.circleci/config.yml`](.circleci/config.yml).
 * See the [Community page](https://prometheus.io/community) for how to reach the Prometheus developers and users on various communication channels.
 
 ## Contributing

@@ -190,5 +189,4 @@ Refer to [CONTRIBUTING.md](https://github.com/prometheus/prometheus/blob/main/CO
 Apache License 2.0, see [LICENSE](https://github.com/prometheus/prometheus/blob/main/LICENSE).
 
 [hub]: https://hub.docker.com/r/prom/prometheus/
-[circleci]: https://circleci.com/gh/prometheus/prometheus
 [quay]: https://quay.io/repository/prometheus/prometheus
RELEASE.md

@@ -44,7 +44,7 @@ Release cadence of first pre-releases being cut is 6 weeks.
 | v2.37 LTS | 2022-06-29 | Julien Pivotto (GitHub: @roidelapluie) |
 | v2.38 | 2022-08-10 | Julius Volz (GitHub: @juliusv) |
 | v2.39 | 2022-09-21 | Ganesh Vernekar (GitHub: @codesome) |
-| v2.40 | 2022-11-02 | **searching for volunteer** |
+| v2.40 | 2022-11-02 | Ganesh Vernekar (GitHub: @codesome) |
 | v2.41 | 2022-12-14 | **searching for volunteer** |
 
 If you are interested in volunteering please create a pull request against the [prometheus/prometheus](https://github.com/prometheus/prometheus) repository and propose yourself for the release series of your choice.
cmd/prometheus/main.go

@@ -45,7 +45,6 @@ import (
 	promlogflag "github.com/prometheus/common/promlog/flag"
 	"github.com/prometheus/common/version"
 	toolkit_web "github.com/prometheus/exporter-toolkit/web"
-	toolkit_webflag "github.com/prometheus/exporter-toolkit/web/kingpinflag"
 	"go.uber.org/atomic"
 	"go.uber.org/automaxprocs/maxprocs"
 	"gopkg.in/alecthomas/kingpin.v2"

@@ -57,6 +56,7 @@ import (
 	"github.com/prometheus/prometheus/discovery/legacymanager"
 	"github.com/prometheus/prometheus/discovery/targetgroup"
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/metadata"
 	"github.com/prometheus/prometheus/model/relabel"

@@ -194,6 +194,10 @@ func (c *flagConfig) setFeatureListOptions(logger log.Logger) error {
 		case "no-default-scrape-port":
 			c.scrape.NoDefaultPort = true
 			level.Info(logger).Log("msg", "No default port will be appended to scrape targets' addresses.")
+		case "native-histograms":
+			c.tsdb.EnableNativeHistograms = true
+			c.scrape.EnableProtobufNegotiation = true
+			level.Info(logger).Log("msg", "Experimental native histogram support enabled.")
 		case "":
 			continue
 		case "promql-at-modifier", "promql-negative-offset":

@@ -203,6 +207,12 @@ func (c *flagConfig) setFeatureListOptions(logger log.Logger) error {
 		}
 	}
 
+	if c.tsdb.EnableNativeHistograms && c.tsdb.EnableMemorySnapshotOnShutdown {
+		c.tsdb.EnableMemorySnapshotOnShutdown = false
+		level.Warn(logger).Log("msg", "memory-snapshot-on-shutdown has been disabled automatically because memory-snapshot-on-shutdown and native-histograms cannot be enabled at the same time.")
+	}
+
 	return nil
 }

@@ -240,7 +250,10 @@ func main() {
 	a.Flag("web.listen-address", "Address to listen on for UI, API, and telemetry.").
 		Default("0.0.0.0:9090").StringVar(&cfg.web.ListenAddress)
 
-	webConfig := toolkit_webflag.AddFlags(a)
+	webConfig := a.Flag(
+		"web.config.file",
+		"[EXPERIMENTAL] Path to configuration file that can enable TLS or authentication.",
+	).Default("").String()
 
 	a.Flag("web.read-timeout",
 		"Maximum duration before timing out read of the request, and closing idle connections.").

@@ -395,7 +408,7 @@ func main() {
 	a.Flag("scrape.discovery-reload-interval", "Interval used by scrape manager to throttle target groups updates.").
 		Hidden().Default("5s").SetValue(&cfg.scrape.DiscoveryReloadInterval)
 
-	a.Flag("enable-feature", "Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details.").
+	a.Flag("enable-feature", "Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port, native-histograms. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details.").
 		Default("").StringsVar(&cfg.featureList)
 
 	promlogflag.AddFlags(a, &cfg.promlogConfig)

@@ -1380,6 +1393,10 @@ func (n notReadyAppender) AppendExemplar(ref storage.SeriesRef, l labels.Labels,
 	return 0, tsdb.ErrNotReady
 }
 
+func (n notReadyAppender) AppendHistogram(ref storage.SeriesRef, l labels.Labels, t int64, h *histogram.Histogram) (storage.SeriesRef, error) {
+	return 0, tsdb.ErrNotReady
+}
+
 func (n notReadyAppender) UpdateMetadata(ref storage.SeriesRef, l labels.Labels, m metadata.Metadata) (storage.SeriesRef, error) {
 	return 0, tsdb.ErrNotReady
 }

@@ -1510,6 +1527,7 @@ type tsdbOptions struct {
 	EnableExemplarStorage          bool
 	MaxExemplars                   int64
 	EnableMemorySnapshotOnShutdown bool
+	EnableNativeHistograms         bool
 }
 
 func (opts tsdbOptions) ToTSDBOptions() tsdb.Options {

@@ -1528,6 +1546,7 @@ func (opts tsdbOptions) ToTSDBOptions() tsdb.Options {
 		EnableExemplarStorage:          opts.EnableExemplarStorage,
 		MaxExemplars:                   opts.MaxExemplars,
 		EnableMemorySnapshotOnShutdown: opts.EnableMemorySnapshotOnShutdown,
+		EnableNativeHistograms:         opts.EnableNativeHistograms,
 		OutOfOrderTimeWindow:           opts.OutOfOrderTimeWindow,
 	}
 }
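The flag handling above follows a pattern worth noting: each feature name toggles one or more options, and incompatible combinations are reconciled after parsing (native histograms force protobuf negotiation and disable memory snapshots). A stdlib-only sketch of that pattern, with hypothetical field names standing in for the real `flagConfig`:

```go
package main

import (
	"fmt"
	"strings"
)

// featureConfig mirrors the shape of the options the diff touches;
// the names are illustrative, not the real Prometheus types.
type featureConfig struct {
	EnableNativeHistograms         bool
	EnableProtobufNegotiation      bool
	EnableMemorySnapshotOnShutdown bool
}

// setFeatures applies a comma-separated feature list the way
// setFeatureListOptions does: each name flips options, then conflicting
// combinations are resolved with a warning after the loop.
func (c *featureConfig) setFeatures(list string) []string {
	var warnings []string
	for _, f := range strings.Split(list, ",") {
		switch strings.TrimSpace(f) {
		case "native-histograms":
			c.EnableNativeHistograms = true
			// Native histograms need the protobuf exposition format.
			c.EnableProtobufNegotiation = true
		case "memory-snapshot-on-shutdown":
			c.EnableMemorySnapshotOnShutdown = true
		case "":
			continue
		default:
			warnings = append(warnings, "unknown feature: "+f)
		}
	}
	// As in the diff: the two features cannot be enabled together, so
	// the snapshot option loses.
	if c.EnableNativeHistograms && c.EnableMemorySnapshotOnShutdown {
		c.EnableMemorySnapshotOnShutdown = false
		warnings = append(warnings, "memory-snapshot-on-shutdown disabled: incompatible with native-histograms")
	}
	return warnings
}

func main() {
	var c featureConfig
	warns := c.setFeatures("native-histograms,memory-snapshot-on-shutdown")
	fmt.Println(c.EnableNativeHistograms, c.EnableMemorySnapshotOnShutdown, warns)
}
```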
cmd/promtool/backfill_test.go

@@ -25,6 +25,7 @@ import (
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/storage"
 	"github.com/prometheus/prometheus/tsdb"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )
 
 type backfillSample struct {

@@ -50,7 +51,7 @@ func queryAllSeries(t testing.TB, q storage.Querier, expectedMinTime, expectedMa
 		series := ss.At()
 		it := series.Iterator()
 		require.NoError(t, it.Err())
-		for it.Next() {
+		for it.Next() == chunkenc.ValFloat {
 			ts, v := it.At()
 			samples = append(samples, backfillSample{Timestamp: ts, Value: v, Labels: series.Labels()})
 		}
cmd/promtool/rules_test.go

@@ -28,6 +28,7 @@ import (
 
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/tsdb"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )
 
 type mockQueryRangeAPI struct {

@@ -139,7 +140,7 @@ func TestBackfillRuleIntegration(t *testing.T) {
 			require.Equal(t, 3, len(series.Labels()))
 		}
 		it := series.Iterator()
-		for it.Next() {
+		for it.Next() == chunkenc.ValFloat {
 			samplesCount++
 			ts, v := it.At()
 			if v == testValue {
cmd/promtool/tsdb.go

@@ -31,6 +31,7 @@ import (
 	"time"
 
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/tsdb/index"
 
 	"github.com/alecthomas/units"

@@ -644,7 +645,7 @@ func dumpSamples(path string, mint, maxt int64) (err error) {
 		series := ss.At()
 		lbs := series.Labels()
 		it := series.Iterator()
-		for it.Next() {
+		for it.Next() == chunkenc.ValFloat {
 			ts, val := it.At()
 			fmt.Printf("%s %g %d\n", lbs, val, ts)
 		}
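The repeated `for it.Next() == chunkenc.ValFloat` changes above reflect an interface change that lands with native histograms: the chunk iterator's `Next` no longer returns a bool but reports what kind of sample the iterator is positioned on, so float-only loops must compare against `chunkenc.ValFloat` and stop at anything else. A toy, stdlib-only model of that pattern (the types and names are illustrative, not the real `chunkenc` package):

```go
package main

import "fmt"

// ValueType mimics the role of chunkenc.ValueType: it says what kind of
// sample (if any) the iterator is currently on.
type ValueType int

const (
	ValNone ValueType = iota // iteration exhausted
	ValFloat
	ValHistogram
)

type sample struct {
	typ ValueType
	t   int64
	v   float64
}

type iterator struct {
	samples []sample
	i       int
}

// Next advances the iterator and returns the type of the next sample,
// or ValNone when there are no samples left.
func (it *iterator) Next() ValueType {
	it.i++
	if it.i >= len(it.samples) {
		return ValNone
	}
	return it.samples[it.i].typ
}

// At returns the timestamp and float value of the current sample.
func (it *iterator) At() (int64, float64) {
	s := it.samples[it.i]
	return s.t, s.v
}

func main() {
	it := &iterator{i: -1, samples: []sample{
		{ValFloat, 1000, 1.5},
		{ValHistogram, 2000, 0}, // a float-only loop stops here
		{ValFloat, 3000, 2.5},
	}}
	// The pattern the diff switches to: iterate while the next sample
	// is a float. In the promtool tests all samples are floats, so the
	// behavior there is unchanged.
	for it.Next() == ValFloat {
		ts, v := it.At()
		fmt.Println(ts, v)
	}
}
```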
cmd/promtool/unittest.go

@@ -447,7 +447,7 @@ func query(ctx context.Context, qs string, t time.Time, engine *promql.Engine, q
 		return v, nil
 	case promql.Scalar:
 		return promql.Vector{promql.Sample{
-			Point:  promql.Point(v),
+			Point:  promql.Point{T: v.T, V: v.V},
 			Metric: labels.Labels{},
 		}}, nil
 	default:
config/config.go

@@ -776,12 +776,13 @@ func CheckTargetAddress(address model.LabelValue) error {
 
 // RemoteWriteConfig is the configuration for writing to remote storage.
 type RemoteWriteConfig struct {
 	URL                  *config.URL       `yaml:"url"`
 	RemoteTimeout        model.Duration    `yaml:"remote_timeout,omitempty"`
 	Headers              map[string]string `yaml:"headers,omitempty"`
 	WriteRelabelConfigs  []*relabel.Config `yaml:"write_relabel_configs,omitempty"`
 	Name                 string            `yaml:"name,omitempty"`
 	SendExemplars        bool              `yaml:"send_exemplars,omitempty"`
+	SendNativeHistograms bool              `yaml:"send_native_histograms,omitempty"`
 
 	// We cannot do proper Go type embedding below as the parser will then parse
 	// values arbitrarily into the overflow maps of further-down types.
config/config_test.go

@@ -47,6 +47,7 @@ import (
 	"github.com/prometheus/prometheus/discovery/moby"
 	"github.com/prometheus/prometheus/discovery/nomad"
 	"github.com/prometheus/prometheus/discovery/openstack"
+	"github.com/prometheus/prometheus/discovery/ovhcloud"
 	"github.com/prometheus/prometheus/discovery/puppetdb"
 	"github.com/prometheus/prometheus/discovery/scaleway"
 	"github.com/prometheus/prometheus/discovery/targetgroup"

@@ -216,26 +217,45 @@ var expectedConf = &Config{
 				Regex:        relabel.MustNewRegexp("(.*)some-[regex]"),
 				Replacement:  "foo-${1}",
 				Action:       relabel.Replace,
-			}, {
+			},
+			{
 				SourceLabels: model.LabelNames{"abc"},
 				TargetLabel:  "cde",
 				Separator:    ";",
 				Regex:        relabel.DefaultRelabelConfig.Regex,
 				Replacement:  relabel.DefaultRelabelConfig.Replacement,
 				Action:       relabel.Replace,
-			}, {
+			},
+			{
 				TargetLabel: "abc",
 				Separator:   ";",
 				Regex:       relabel.DefaultRelabelConfig.Regex,
 				Replacement: "static",
 				Action:      relabel.Replace,
-			}, {
+			},
+			{
 				TargetLabel: "abc",
 				Separator:   ";",
 				Regex:       relabel.MustNewRegexp(""),
 				Replacement: "static",
 				Action:      relabel.Replace,
 			},
+			{
+				SourceLabels: model.LabelNames{"foo"},
+				TargetLabel:  "abc",
+				Action:       relabel.KeepEqual,
+				Regex:        relabel.DefaultRelabelConfig.Regex,
+				Replacement:  relabel.DefaultRelabelConfig.Replacement,
+				Separator:    relabel.DefaultRelabelConfig.Separator,
+			},
+			{
+				SourceLabels: model.LabelNames{"foo"},
+				TargetLabel:  "abc",
+				Action:       relabel.DropEqual,
+				Regex:        relabel.DefaultRelabelConfig.Regex,
+				Replacement:  relabel.DefaultRelabelConfig.Replacement,
+				Separator:    relabel.DefaultRelabelConfig.Separator,
+			},
 		},
 	},
 	{

@@ -940,6 +960,35 @@ var expectedConf = &Config{
 			},
 		},
 	},
+	{
+		JobName: "ovhcloud",
+
+		HonorTimestamps:  true,
+		ScrapeInterval:   model.Duration(15 * time.Second),
+		ScrapeTimeout:    DefaultGlobalConfig.ScrapeTimeout,
+		HTTPClientConfig: config.DefaultHTTPClientConfig,
+		MetricsPath:      DefaultScrapeConfig.MetricsPath,
+		Scheme:           DefaultScrapeConfig.Scheme,
+
+		ServiceDiscoveryConfigs: discovery.Configs{
+			&ovhcloud.SDConfig{
+				Endpoint:          "ovh-eu",
+				ApplicationKey:    "testAppKey",
+				ApplicationSecret: "testAppSecret",
+				ConsumerKey:       "testConsumerKey",
+				RefreshInterval:   model.Duration(60 * time.Second),
+				Service:           "vps",
+			},
+			&ovhcloud.SDConfig{
+				Endpoint:          "ovh-eu",
+				ApplicationKey:    "testAppKey",
+				ApplicationSecret: "testAppSecret",
+				ConsumerKey:       "testConsumerKey",
+				RefreshInterval:   model.Duration(60 * time.Second),
+				Service:           "dedicated_server",
+			},
+		},
+	},
 	{
 		JobName: "scaleway",

@@ -1175,7 +1224,7 @@ func TestElideSecrets(t *testing.T) {
 	yamlConfig := string(config)
 
 	matches := secretRe.FindAllStringIndex(yamlConfig, -1)
-	require.Equal(t, 18, len(matches), "wrong number of secret matches found")
+	require.Equal(t, 22, len(matches), "wrong number of secret matches found")
 	require.NotContains(t, yamlConfig, "mysecret",
 		"yaml marshal reveals authentication credentials.")
 }

@@ -1286,6 +1335,22 @@ var expectedErrors = []struct {
 		filename: "labeldrop5.bad.yml",
 		errMsg:   "labeldrop action requires only 'regex', and no other fields",
 	},
+	{
+		filename: "dropequal.bad.yml",
+		errMsg:   "relabel configuration for dropequal action requires 'target_label' value",
+	},
+	{
+		filename: "dropequal1.bad.yml",
+		errMsg:   "dropequal action requires only 'source_labels' and `target_label`, and no other fields",
+	},
+	{
+		filename: "keepequal.bad.yml",
+		errMsg:   "relabel configuration for keepequal action requires 'target_label' value",
+	},
+	{
+		filename: "keepequal1.bad.yml",
+		errMsg:   "keepequal action requires only 'source_labels' and `target_label`, and no other fields",
+	},
 	{
 		filename: "labelmap.bad.yml",
 		errMsg:   "\"l-$1\" is invalid 'replacement' for labelmap action",

@@ -1618,6 +1683,14 @@ var expectedErrors = []struct {
 		filename: "ionos_datacenter.bad.yml",
 		errMsg:   "datacenter id can't be empty",
 	},
+	{
+		filename: "ovhcloud_no_secret.bad.yml",
+		errMsg:   "application secret can not be empty",
+	},
+	{
+		filename: "ovhcloud_bad_service.bad.yml",
+		errMsg:   "unknown service: fakeservice",
+	},
 }
 
 func TestBadConfigs(t *testing.T) {
config/testdata/conf.good.yml (vendored, 21 lines added)

@@ -87,6 +87,12 @@ scrape_configs:
       - regex:
         replacement: static
        target_label: abc
+      - source_labels: [foo]
+        target_label: abc
+        action: keepequal
+      - source_labels: [foo]
+        target_label: abc
+        action: dropequal
 
     authorization:
       credentials_file: valid_token_file

@@ -349,6 +355,21 @@ scrape_configs:
     eureka_sd_configs:
       - server: "http://eureka.example.com:8761/eureka"
 
+  - job_name: ovhcloud
+    ovhcloud_sd_configs:
+      - service: vps
+        endpoint: ovh-eu
+        application_key: testAppKey
+        application_secret: testAppSecret
+        consumer_key: testConsumerKey
+        refresh_interval: 1m
+      - service: dedicated_server
+        endpoint: ovh-eu
+        application_key: testAppKey
+        application_secret: testAppSecret
+        consumer_key: testConsumerKey
+        refresh_interval: 1m
+
   - job_name: scaleway
     scaleway_sd_configs:
       - role: instance
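The `keepequal`/`dropequal` entries above exercise the two new relabel actions. As the config shape suggests, they compare the joined `source_labels` values against the current `target_label` value and keep (or drop) the target on equality. A rough stdlib-only sketch of that comparison, as an illustration of the semantics rather than the real `relabel` package:

```go
package main

import (
	"fmt"
	"strings"
)

// relabelKeepDrop reports whether a target survives a keepequal or
// dropequal relabel step. The source label values are joined with the
// separator and compared to the target label's value; keepequal keeps
// the target on a match, dropequal drops it on a match. Any other
// action leaves the target untouched.
func relabelKeepDrop(labels map[string]string, sourceLabels []string, targetLabel, separator, action string) bool {
	vals := make([]string, 0, len(sourceLabels))
	for _, l := range sourceLabels {
		vals = append(vals, labels[l])
	}
	equal := strings.Join(vals, separator) == labels[targetLabel]
	switch action {
	case "keepequal":
		return equal
	case "dropequal":
		return !equal
	}
	return true
}

func main() {
	target := map[string]string{"foo": "bar", "abc": "bar"}
	// foo and abc match, so keepequal keeps and dropequal drops.
	fmt.Println(relabelKeepDrop(target, []string{"foo"}, "abc", ";", "keepequal"))
	fmt.Println(relabelKeepDrop(target, []string{"foo"}, "abc", ";", "dropequal"))
}
```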
config/testdata/dropequal.bad.yml (vendored, new file, 5 lines)

@@ -0,0 +1,5 @@
+scrape_configs:
+  - job_name: prometheus
+    relabel_configs:
+      - source_labels: [abcdef]
+        action: dropequal

config/testdata/dropequal1.bad.yml (vendored, new file, 7 lines)

@@ -0,0 +1,7 @@
+scrape_configs:
+  - job_name: prometheus
+    relabel_configs:
+      - source_labels: [abcdef]
+        action: dropequal
+        regex: foo
+        target_label: bar

config/testdata/keepequal.bad.yml (vendored, new file, 5 lines)

@@ -0,0 +1,5 @@
+scrape_configs:
+  - job_name: prometheus
+    relabel_configs:
+      - source_labels: [abcdef]
+        action: keepequal

config/testdata/keepequal1.bad.yml (vendored, new file, 7 lines)

@@ -0,0 +1,7 @@
+scrape_configs:
+  - job_name: prometheus
+    relabel_configs:
+      - source_labels: [abcdef]
+        action: keepequal
+        regex: foo
+        target_label: bar

config/testdata/ovhcloud_bad_service.bad.yml (vendored, new file, 8 lines)

@@ -0,0 +1,8 @@
+scrape_configs:
+  - ovhcloud_sd_configs:
+      - service: fakeservice
+        endpoint: ovh-eu
+        application_key: testAppKey
+        application_secret: testAppSecret
+        consumer_key: testConsumerKey
+        refresh_interval: 1m

config/testdata/ovhcloud_no_secret.bad.yml (vendored, new file, 7 lines)

@@ -0,0 +1,7 @@
+scrape_configs:
+  - ovhcloud_sd_configs:
+      - service: dedicated_server
+        endpoint: ovh-eu
+        application_key: testAppKey
+        consumer_key: testConsumerKey
+        refresh_interval: 1m
@ -60,6 +60,8 @@ const (
|
||||||
datacenterLabel = model.MetaLabelPrefix + "consul_dc"
|
datacenterLabel = model.MetaLabelPrefix + "consul_dc"
|
||||||
// namespaceLabel is the name of the label containing the namespace (Consul Enterprise only).
|
// namespaceLabel is the name of the label containing the namespace (Consul Enterprise only).
|
||||||
namespaceLabel = model.MetaLabelPrefix + "consul_namespace"
|
namespaceLabel = model.MetaLabelPrefix + "consul_namespace"
|
||||||
|
// partitionLabel is the name of the label containing the Admin Partition (Consul Enterprise only).
|
||||||
|
partitionLabel = model.MetaLabelPrefix + "consul_partition"
|
||||||
// taggedAddressesLabel is the prefix for the labels mapping to a target's tagged addresses.
|
// taggedAddressesLabel is the prefix for the labels mapping to a target's tagged addresses.
|
||||||
taggedAddressesLabel = model.MetaLabelPrefix + "consul_tagged_address_"
|
taggedAddressesLabel = model.MetaLabelPrefix + "consul_tagged_address_"
|
||||||
// serviceIDLabel is the name of the label containing the service ID.
|
// serviceIDLabel is the name of the label containing the service ID.
|
||||||
|
@ -112,6 +114,7 @@ type SDConfig struct {
|
||||||
Token config.Secret `yaml:"token,omitempty"`
|
Token config.Secret `yaml:"token,omitempty"`
|
||||||
Datacenter string `yaml:"datacenter,omitempty"`
|
Datacenter string `yaml:"datacenter,omitempty"`
|
||||||
Namespace string `yaml:"namespace,omitempty"`
|
Namespace string `yaml:"namespace,omitempty"`
|
||||||
|
Partition string `yaml:"partition,omitempty"`
|
||||||
TagSeparator string `yaml:"tag_separator,omitempty"`
|
TagSeparator string `yaml:"tag_separator,omitempty"`
|
||||||
Scheme string `yaml:"scheme,omitempty"`
|
Scheme string `yaml:"scheme,omitempty"`
|
||||||
Username string `yaml:"username,omitempty"`
|
Username string `yaml:"username,omitempty"`
|
||||||
|
@@ -183,6 +186,7 @@ type Discovery struct {
 	client           *consul.Client
 	clientDatacenter string
 	clientNamespace  string
+	clientPartition  string
 	tagSeparator     string
 	watchedServices  []string // Set of services which will be discovered.
 	watchedTags      []string // Tags used to filter instances of a service.
@@ -210,6 +214,7 @@ func NewDiscovery(conf *SDConfig, logger log.Logger) (*Discovery, error) {
 		Scheme:     conf.Scheme,
 		Datacenter: conf.Datacenter,
 		Namespace:  conf.Namespace,
+		Partition:  conf.Partition,
 		Token:      string(conf.Token),
 		HttpClient: wrapper,
 	}
@@ -227,6 +232,7 @@ func NewDiscovery(conf *SDConfig, logger log.Logger) (*Discovery, error) {
 		refreshInterval:  time.Duration(conf.RefreshInterval),
 		clientDatacenter: conf.Datacenter,
 		clientNamespace:  conf.Namespace,
+		clientPartition:  conf.Partition,
 		finalizer:        wrapper.CloseIdleConnections,
 		logger:           logger,
 	}
@@ -547,6 +553,7 @@ func (srv *consulService) watch(ctx context.Context, ch chan<- []*targetgroup.Gr
 		addressLabel:        model.LabelValue(serviceNode.Node.Address),
 		nodeLabel:           model.LabelValue(serviceNode.Node.Node),
 		namespaceLabel:      model.LabelValue(serviceNode.Service.Namespace),
+		partitionLabel:      model.LabelValue(serviceNode.Service.Partition),
 		tagsLabel:           model.LabelValue(tags),
 		serviceAddressLabel: model.LabelValue(serviceNode.Service.Address),
 		servicePortLabel:    model.LabelValue(strconv.Itoa(serviceNode.Service.Port)),
@@ -39,18 +39,23 @@ import (
 )
 
 var (
+	fileSDReadErrorsCount = prometheus.NewCounter(
+		prometheus.CounterOpts{
+			Name: "prometheus_sd_file_read_errors_total",
+			Help: "The number of File-SD read errors.",
+		})
 	fileSDScanDuration = prometheus.NewSummary(
 		prometheus.SummaryOpts{
 			Name:       "prometheus_sd_file_scan_duration_seconds",
 			Help:       "The duration of the File-SD scan in seconds.",
 			Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
 		})
-	fileSDReadErrorsCount = prometheus.NewCounter(
+	fileSDTimeStamp = NewTimestampCollector()
+
+	fileWatcherErrorsCount = prometheus.NewCounter(
 		prometheus.CounterOpts{
-			Name: "prometheus_sd_file_read_errors_total",
-			Help: "The number of File-SD read errors.",
+			Name: "prometheus_sd_file_watcher_errors_total",
+			Help: "The number of File-SD errors caused by filesystem watch failures.",
 		})
-	fileSDTimeStamp = NewTimestampCollector()
 
 	patFileSDName = regexp.MustCompile(`^[^*]*(\*[^/]*)?\.(json|yml|yaml|JSON|YML|YAML)$`)
 
@@ -62,7 +67,7 @@ var (
 
 func init() {
 	discovery.RegisterConfig(&SDConfig{})
-	prometheus.MustRegister(fileSDScanDuration, fileSDReadErrorsCount, fileSDTimeStamp)
+	prometheus.MustRegister(fileSDReadErrorsCount, fileSDScanDuration, fileSDTimeStamp, fileWatcherErrorsCount)
 }
 
 // SDConfig is the configuration for file based discovery.
@@ -237,6 +242,7 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
 	watcher, err := fsnotify.NewWatcher()
 	if err != nil {
 		level.Error(d.logger).Log("msg", "Error adding file watcher", "err", err)
+		fileWatcherErrorsCount.Inc()
 		return
 	}
 	d.watcher = watcher
@@ -33,6 +33,7 @@ import (
 	_ "github.com/prometheus/prometheus/discovery/moby"      // register moby
 	_ "github.com/prometheus/prometheus/discovery/nomad"     // register nomad
 	_ "github.com/prometheus/prometheus/discovery/openstack" // register openstack
+	_ "github.com/prometheus/prometheus/discovery/ovhcloud"  // register ovhcloud
 	_ "github.com/prometheus/prometheus/discovery/puppetdb"  // register puppetdb
 	_ "github.com/prometheus/prometheus/discovery/scaleway"  // register scaleway
 	_ "github.com/prometheus/prometheus/discovery/triton"    // register triton
discovery/ovhcloud/dedicated_server.go (new file, 163 lines)
@@ -0,0 +1,163 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"context"
	"fmt"
	"net/netip"
	"net/url"
	"path"
	"strconv"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/ovh/go-ovh/ovh"
	"github.com/prometheus/common/model"

	"github.com/prometheus/prometheus/discovery/refresh"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

const (
	dedicatedServerAPIPath     = "/dedicated/server"
	dedicatedServerLabelPrefix = metaLabelPrefix + "dedicated_server_"
)

// dedicatedServer struct from API. Also contains IP addresses that are fetched
// independently.
type dedicatedServer struct {
	State           string `json:"state"`
	ips             []netip.Addr
	CommercialRange string `json:"commercialRange"`
	LinkSpeed       int    `json:"linkSpeed"`
	Rack            string `json:"rack"`
	NoIntervention  bool   `json:"noIntervention"`
	Os              string `json:"os"`
	SupportLevel    string `json:"supportLevel"`
	ServerID        int64  `json:"serverId"`
	Reverse         string `json:"reverse"`
	Datacenter      string `json:"datacenter"`
	Name            string `json:"name"`
}

type dedicatedServerDiscovery struct {
	*refresh.Discovery
	config *SDConfig
	logger log.Logger
}

func newDedicatedServerDiscovery(conf *SDConfig, logger log.Logger) *dedicatedServerDiscovery {
	return &dedicatedServerDiscovery{config: conf, logger: logger}
}

func getDedicatedServerList(client *ovh.Client) ([]string, error) {
	var dedicatedListName []string
	err := client.Get(dedicatedServerAPIPath, &dedicatedListName)
	if err != nil {
		return nil, err
	}

	return dedicatedListName, nil
}

func getDedicatedServerDetails(client *ovh.Client, serverName string) (*dedicatedServer, error) {
	var dedicatedServerDetails dedicatedServer
	err := client.Get(path.Join(dedicatedServerAPIPath, url.QueryEscape(serverName)), &dedicatedServerDetails)
	if err != nil {
		return nil, err
	}

	var ips []string
	err = client.Get(path.Join(dedicatedServerAPIPath, url.QueryEscape(serverName), "ips"), &ips)
	if err != nil {
		return nil, err
	}

	parsedIPs, err := parseIPList(ips)
	if err != nil {
		return nil, err
	}

	dedicatedServerDetails.ips = parsedIPs
	return &dedicatedServerDetails, nil
}

func (d *dedicatedServerDiscovery) getService() string {
	return "dedicated_server"
}

func (d *dedicatedServerDiscovery) getSource() string {
	return fmt.Sprintf("%s_%s", d.config.Name(), d.getService())
}

func (d *dedicatedServerDiscovery) refresh(ctx context.Context) ([]*targetgroup.Group, error) {
	client, err := createClient(d.config)
	if err != nil {
		return nil, err
	}
	var dedicatedServerDetailedList []dedicatedServer
	dedicatedServerList, err := getDedicatedServerList(client)
	if err != nil {
		return nil, err
	}
	for _, dedicatedServerName := range dedicatedServerList {
		dedicatedServer, err := getDedicatedServerDetails(client, dedicatedServerName)
		if err != nil {
			err := level.Warn(d.logger).Log("msg", fmt.Sprintf("%s: Could not get details of %s", d.getSource(), dedicatedServerName), "err", err.Error())
			if err != nil {
				return nil, err
			}
			continue
		}
		dedicatedServerDetailedList = append(dedicatedServerDetailedList, *dedicatedServer)
	}
	var targets []model.LabelSet

	for _, server := range dedicatedServerDetailedList {
		var ipv4, ipv6 string
		for _, ip := range server.ips {
			if ip.Is4() {
				ipv4 = ip.String()
			}
			if ip.Is6() {
				ipv6 = ip.String()
			}
		}
		defaultIP := ipv4
		if defaultIP == "" {
			defaultIP = ipv6
		}
		labels := model.LabelSet{
			model.AddressLabel:  model.LabelValue(defaultIP),
			model.InstanceLabel: model.LabelValue(server.Name),
			dedicatedServerLabelPrefix + "state":            model.LabelValue(server.State),
			dedicatedServerLabelPrefix + "commercial_range": model.LabelValue(server.CommercialRange),
			dedicatedServerLabelPrefix + "link_speed":       model.LabelValue(fmt.Sprintf("%d", server.LinkSpeed)),
			dedicatedServerLabelPrefix + "rack":             model.LabelValue(server.Rack),
			dedicatedServerLabelPrefix + "no_intervention":  model.LabelValue(strconv.FormatBool(server.NoIntervention)),
			dedicatedServerLabelPrefix + "os":               model.LabelValue(server.Os),
			dedicatedServerLabelPrefix + "support_level":    model.LabelValue(server.SupportLevel),
			dedicatedServerLabelPrefix + "server_id":        model.LabelValue(fmt.Sprintf("%d", server.ServerID)),
			dedicatedServerLabelPrefix + "reverse":          model.LabelValue(server.Reverse),
			dedicatedServerLabelPrefix + "datacenter":       model.LabelValue(server.Datacenter),
			dedicatedServerLabelPrefix + "name":             model.LabelValue(server.Name),
			dedicatedServerLabelPrefix + "ipv4":             model.LabelValue(ipv4),
			dedicatedServerLabelPrefix + "ipv6":             model.LabelValue(ipv6),
		}
		targets = append(targets, labels)
	}

	return []*targetgroup.Group{{Source: d.getSource(), Targets: targets}}, nil
}
discovery/ovhcloud/dedicated_server_test.go (new file, 123 lines)
@@ -0,0 +1,123 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"

	"github.com/go-kit/log"
	"github.com/prometheus/common/model"
	"github.com/stretchr/testify/require"
	"gopkg.in/yaml.v2"
)

func TestOvhcloudDedicatedServerRefresh(t *testing.T) {
	var cfg SDConfig

	mock := httptest.NewServer(http.HandlerFunc(MockDedicatedAPI))
	defer mock.Close()
	cfgString := fmt.Sprintf(`
---
service: dedicated_server
endpoint: %s
application_key: %s
application_secret: %s
consumer_key: %s`, mock.URL, ovhcloudApplicationKeyTest, ovhcloudApplicationSecretTest, ovhcloudConsumerKeyTest)

	require.NoError(t, yaml.UnmarshalStrict([]byte(cfgString), &cfg))
	d, err := newRefresher(&cfg, log.NewNopLogger())
	require.NoError(t, err)
	ctx := context.Background()
	targetGroups, err := d.refresh(ctx)
	require.NoError(t, err)

	require.Equal(t, 1, len(targetGroups))
	targetGroup := targetGroups[0]
	require.NotNil(t, targetGroup)
	require.NotNil(t, targetGroup.Targets)
	require.Equal(t, 1, len(targetGroup.Targets))

	for i, lbls := range []model.LabelSet{
		{
			"__address__": "1.2.3.4",
			"__meta_ovhcloud_dedicated_server_commercial_range": "Advance-1 Gen 2",
			"__meta_ovhcloud_dedicated_server_datacenter":       "gra3",
			"__meta_ovhcloud_dedicated_server_ipv4":             "1.2.3.4",
			"__meta_ovhcloud_dedicated_server_ipv6":             "",
			"__meta_ovhcloud_dedicated_server_link_speed":       "123",
			"__meta_ovhcloud_dedicated_server_name":             "abcde",
			"__meta_ovhcloud_dedicated_server_no_intervention":  "false",
			"__meta_ovhcloud_dedicated_server_os":               "debian11_64",
			"__meta_ovhcloud_dedicated_server_rack":             "TESTRACK",
			"__meta_ovhcloud_dedicated_server_reverse":          "abcde-rev",
			"__meta_ovhcloud_dedicated_server_server_id":        "1234",
			"__meta_ovhcloud_dedicated_server_state":            "test",
			"__meta_ovhcloud_dedicated_server_support_level":    "pro",
			"instance": "abcde",
		},
	} {
		t.Run(fmt.Sprintf("item %d", i), func(t *testing.T) {
			require.Equal(t, lbls, targetGroup.Targets[i])
		})
	}
}

func MockDedicatedAPI(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("X-Ovh-Application") != ovhcloudApplicationKeyTest {
		http.Error(w, "bad application key", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	if string(r.URL.Path) == "/dedicated/server" {
		dedicatedServersList, err := os.ReadFile("testdata/dedicated_server/dedicated_servers.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServersList)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	if string(r.URL.Path) == "/dedicated/server/abcde" {
		dedicatedServer, err := os.ReadFile("testdata/dedicated_server/dedicated_servers_details.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServer)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	if string(r.URL.Path) == "/dedicated/server/abcde/ips" {
		dedicatedServerIPs, err := os.ReadFile("testdata/dedicated_server/dedicated_servers_abcde_ips.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServerIPs)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
}
discovery/ovhcloud/ovhcloud.go (new file, 155 lines)
@@ -0,0 +1,155 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"context"
	"errors"
	"fmt"
	"net/netip"
	"time"

	"github.com/go-kit/log"
	"github.com/ovh/go-ovh/ovh"
	"github.com/prometheus/common/config"
	"github.com/prometheus/common/model"

	"github.com/prometheus/prometheus/discovery"
	"github.com/prometheus/prometheus/discovery/refresh"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

// metaLabelPrefix is the meta prefix used for all meta labels in this discovery.
const metaLabelPrefix = model.MetaLabelPrefix + "ovhcloud_"

type refresher interface {
	refresh(context.Context) ([]*targetgroup.Group, error)
}

var DefaultSDConfig = SDConfig{
	Endpoint:        "ovh-eu",
	RefreshInterval: model.Duration(60 * time.Second),
}

// SDConfig defines the Service Discovery struct used for configuration.
type SDConfig struct {
	Endpoint          string         `yaml:"endpoint"`
	ApplicationKey    string         `yaml:"application_key"`
	ApplicationSecret config.Secret  `yaml:"application_secret"`
	ConsumerKey       config.Secret  `yaml:"consumer_key"`
	RefreshInterval   model.Duration `yaml:"refresh_interval"`
	Service           string         `yaml:"service"`
}

// Name implements the Discoverer interface.
func (c SDConfig) Name() string {
	return "ovhcloud"
}

// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (c *SDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
	*c = DefaultSDConfig
	type plain SDConfig
	err := unmarshal((*plain)(c))
	if err != nil {
		return err
	}

	if c.Endpoint == "" {
		return errors.New("endpoint can not be empty")
	}
	if c.ApplicationKey == "" {
		return errors.New("application key can not be empty")
	}
	if c.ApplicationSecret == "" {
		return errors.New("application secret can not be empty")
	}
	if c.ConsumerKey == "" {
		return errors.New("consumer key can not be empty")
	}
	switch c.Service {
	case "dedicated_server", "vps":
		return nil
	default:
		return fmt.Errorf("unknown service: %v", c.Service)
	}
}

// CreateClient creates a new ovh client configured with given credentials.
func createClient(config *SDConfig) (*ovh.Client, error) {
	return ovh.NewClient(config.Endpoint, config.ApplicationKey, string(config.ApplicationSecret), string(config.ConsumerKey))
}

// NewDiscoverer returns a Discoverer for the Config.
func (c *SDConfig) NewDiscoverer(options discovery.DiscovererOptions) (discovery.Discoverer, error) {
	return NewDiscovery(c, options.Logger)
}

func init() {
	discovery.RegisterConfig(&SDConfig{})
}

// ParseIPList parses ip list as they can have different formats.
func parseIPList(ipList []string) ([]netip.Addr, error) {
	var ipAddresses []netip.Addr
	for _, ip := range ipList {
		ipAddr, err := netip.ParseAddr(ip)
		if err != nil {
			ipPrefix, err := netip.ParsePrefix(ip)
			if err != nil {
				return nil, errors.New("could not parse IP addresses from list")
			}
			if ipPrefix.IsValid() {
				netmask := ipPrefix.Bits()
				if netmask != 32 {
					continue
				}
				ipAddr = ipPrefix.Addr()
			}
		}
		if ipAddr.IsValid() && !ipAddr.IsUnspecified() {
			ipAddresses = append(ipAddresses, ipAddr)
		}
	}

	if len(ipAddresses) == 0 {
		return nil, errors.New("could not parse IP addresses from list")
	}
	return ipAddresses, nil
}

func newRefresher(conf *SDConfig, logger log.Logger) (refresher, error) {
	switch conf.Service {
	case "vps":
		return newVpsDiscovery(conf, logger), nil
	case "dedicated_server":
		return newDedicatedServerDiscovery(conf, logger), nil
	}
	return nil, fmt.Errorf("unknown OVHcloud discovery service '%s'", conf.Service)
}

// NewDiscovery returns a new OVHcloud Discoverer which periodically refreshes its targets.
func NewDiscovery(conf *SDConfig, logger log.Logger) (*refresh.Discovery, error) {
	r, err := newRefresher(conf, logger)
	if err != nil {
		return nil, err
	}

	return refresh.NewDiscovery(
		logger,
		"ovhcloud",
		time.Duration(conf.RefreshInterval),
		r.refresh,
	), nil
}
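The parseIPList helper in ovhcloud.go above keeps bare addresses, unwraps "/32" prefixes to their address, and skips any other prefix length (which is why the test data's IPv6 "/64" blocks yield an empty ipv6 label). A minimal standalone sketch of that filtering with net/netip — the package and function names here are illustrative, not part of the diff:

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// parseIPs mirrors the filtering parseIPList applies to OVHcloud IP lists:
// bare addresses are kept, "/32" prefixes are unwrapped to their address,
// other prefix lengths are skipped, and unspecified addresses are dropped.
func parseIPs(list []string) ([]netip.Addr, error) {
	var out []netip.Addr
	for _, s := range list {
		addr, err := netip.ParseAddr(s)
		if err != nil {
			prefix, err := netip.ParsePrefix(s)
			if err != nil {
				return nil, errors.New("could not parse IP addresses from list")
			}
			if prefix.Bits() != 32 {
				continue // e.g. an IPv6 /64 block: no single usable address
			}
			addr = prefix.Addr()
		}
		if addr.IsValid() && !addr.IsUnspecified() {
			out = append(out, addr)
		}
	}
	if len(out) == 0 {
		return nil, errors.New("could not parse IP addresses from list")
	}
	return out, nil
}

func main() {
	// The /64 block is skipped; the /32 and the bare address survive.
	ips, err := parseIPs([]string{"192.0.2.1/32", "2001:db8::/64", "2001:db8::1"})
	fmt.Println(ips, err)
}
```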
discovery/ovhcloud/ovhcloud_test.go (new file, 129 lines)
@@ -0,0 +1,129 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"errors"
	"fmt"
	"testing"

	"github.com/prometheus/common/config"
	"github.com/stretchr/testify/require"
	"gopkg.in/yaml.v2"

	"github.com/prometheus/prometheus/discovery"
	"github.com/prometheus/prometheus/util/testutil"
)

var (
	ovhcloudApplicationKeyTest    = "TDPKJdwZwAQPwKX2"
	ovhcloudApplicationSecretTest = config.Secret("9ufkBmLaTQ9nz5yMUlg79taH0GNnzDjk")
	ovhcloudConsumerKeyTest       = config.Secret("5mBuy6SUQcRw2ZUxg0cG68BoDKpED4KY")
)

const (
	mockURL = "https://localhost:1234"
)

func getMockConf(service string) (SDConfig, error) {
	confString := fmt.Sprintf(`
endpoint: %s
application_key: %s
application_secret: %s
consumer_key: %s
refresh_interval: 1m
service: %s
`, mockURL, ovhcloudApplicationKeyTest, ovhcloudApplicationSecretTest, ovhcloudConsumerKeyTest, service)

	return getMockConfFromString(confString)
}

func getMockConfFromString(confString string) (SDConfig, error) {
	var conf SDConfig
	err := yaml.UnmarshalStrict([]byte(confString), &conf)
	return conf, err
}

func TestErrorInitClient(t *testing.T) {
	confString := fmt.Sprintf(`
endpoint: %s

`, mockURL)

	conf, _ := getMockConfFromString(confString)

	_, err := createClient(&conf)

	require.ErrorContains(t, err, "missing application key")
}

func TestParseIPs(t *testing.T) {
	testCases := []struct {
		name  string
		input []string
		want  error
	}{
		{
			name:  "Parse IPv4 failed.",
			input: []string{"A.b"},
			want:  errors.New("could not parse IP addresses from list"),
		},
		{
			name:  "Parse unspecified failed.",
			input: []string{"0.0.0.0"},
			want:  errors.New("could not parse IP addresses from list"),
		},
		{
			name:  "Parse void IP failed.",
			input: []string{""},
			want:  errors.New("could not parse IP addresses from list"),
		},
		{
			name:  "Parse IPv6 ok.",
			input: []string{"2001:0db8:0000:0000:0000:0000:0000:0001"},
			want:  nil,
		},
		{
			name:  "Parse IPv6 failed.",
			input: []string{"bbb:cccc:1111"},
			want:  errors.New("could not parse IP addresses from list"),
		},
		{
			name:  "Parse IPv4 bad mask.",
			input: []string{"192.0.2.1/23"},
			want:  errors.New("could not parse IP addresses from list"),
		},
		{
			name:  "Parse IPv4 ok.",
			input: []string{"192.0.2.1/32"},
			want:  nil,
		},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := parseIPList(tc.input)
			require.Equal(t, tc.want, err)
		})
	}
}

func TestDiscoverer(t *testing.T) {
	conf, _ := getMockConf("vps")
	logger := testutil.NewLogger(t)
	_, err := conf.NewDiscoverer(discovery.DiscovererOptions{
		Logger: logger,
	})

	require.NoError(t, err)
}
discovery/ovhcloud/testdata/dedicated_server/dedicated_servers.json (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
[
  "abcde"
]

discovery/ovhcloud/testdata/dedicated_server/dedicated_servers_abcde_ips.json (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
[
  "1.2.3.4/32",
  "2001:0db8:0000:0000:0000:0000:0000:0001/64"
]

discovery/ovhcloud/testdata/dedicated_server/dedicated_servers_details.json (new file, vendored, 20 lines)
@@ -0,0 +1,20 @@
{
  "ip": "1.2.3.4",
  "newUpgradeSystem": true,
  "commercialRange": "Advance-1 Gen 2",
  "rack": "TESTRACK",
  "rescueMail": null,
  "supportLevel": "pro",
  "bootId": 1,
  "linkSpeed": 123,
  "professionalUse": false,
  "monitoring": true,
  "noIntervention": false,
  "name": "abcde",
  "rootDevice": null,
  "state": "test",
  "datacenter": "gra3",
  "os": "debian11_64",
  "reverse": "abcde-rev",
  "serverId": 1234
}

discovery/ovhcloud/testdata/vps/vps.json (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
[
  "abc"
]

discovery/ovhcloud/testdata/vps/vps_abc_ips.json (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
[
  "192.0.2.1/32",
  "2001:0db1:0000:0000:0000:0000:0000:0001/64"
]

discovery/ovhcloud/testdata/vps/vps_details.json (new file, vendored, 25 lines)
@@ -0,0 +1,25 @@
{
  "offerType": "ssd",
  "monitoringIpBlocks": [],
  "displayName": "abc",
  "zone": "zone",
  "cluster": "cluster_test",
  "slaMonitoring": false,
  "name": "abc",
  "vcore": 1,
  "state": "running",
  "keymap": null,
  "netbootMode": "local",
  "model": {
    "name": "vps-value-1-2-40",
    "availableOptions": [],
    "maximumAdditionnalIp": 16,
    "offer": "VPS abc",
    "disk": 40,
    "version": "2019v1",
    "vcore": 1,
    "memory": 2048,
    "datacenter": []
  },
  "memoryLimit": 2048
}
187 discovery/ovhcloud/vps.go Normal file
@@ -0,0 +1,187 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"context"
	"fmt"
	"net/netip"
	"net/url"
	"path"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/ovh/go-ovh/ovh"
	"github.com/prometheus/common/model"

	"github.com/prometheus/prometheus/discovery/refresh"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

const (
	vpsAPIPath     = "/vps"
	vpsLabelPrefix = metaLabelPrefix + "vps_"
)

// Model struct from API.
type vpsModel struct {
	MaximumAdditionalIP int      `json:"maximumAdditionnalIp"`
	Offer               string   `json:"offer"`
	Datacenter          []string `json:"datacenter"`
	Vcore               int      `json:"vcore"`
	Version             string   `json:"version"`
	Name                string   `json:"name"`
	Disk                int      `json:"disk"`
	Memory              int      `json:"memory"`
}

// VPS struct from API. Also contains IP addresses that are fetched
// independently.
type virtualPrivateServer struct {
	ips                []netip.Addr
	Keymap             []string `json:"keymap"`
	Zone               string   `json:"zone"`
	Model              vpsModel `json:"model"`
	DisplayName        string   `json:"displayName"`
	MonitoringIPBlocks []string `json:"monitoringIpBlocks"`
	Cluster            string   `json:"cluster"`
	State              string   `json:"state"`
	Name               string   `json:"name"`
	NetbootMode        string   `json:"netbootMode"`
	MemoryLimit        int      `json:"memoryLimit"`
	OfferType          string   `json:"offerType"`
	Vcore              int      `json:"vcore"`
}

type vpsDiscovery struct {
	*refresh.Discovery
	config *SDConfig
	logger log.Logger
}

func newVpsDiscovery(conf *SDConfig, logger log.Logger) *vpsDiscovery {
	return &vpsDiscovery{config: conf, logger: logger}
}

func getVpsDetails(client *ovh.Client, vpsName string) (*virtualPrivateServer, error) {
	var vpsDetails virtualPrivateServer
	vpsNamePath := path.Join(vpsAPIPath, url.QueryEscape(vpsName))

	err := client.Get(vpsNamePath, &vpsDetails)
	if err != nil {
		return nil, err
	}

	var ips []string
	err = client.Get(path.Join(vpsNamePath, "ips"), &ips)
	if err != nil {
		return nil, err
	}

	parsedIPs, err := parseIPList(ips)
	if err != nil {
		return nil, err
	}
	vpsDetails.ips = parsedIPs

	return &vpsDetails, nil
}

func getVpsList(client *ovh.Client) ([]string, error) {
	var vpsListName []string

	err := client.Get(vpsAPIPath, &vpsListName)
	if err != nil {
		return nil, err
	}

	return vpsListName, nil
}

func (d *vpsDiscovery) getService() string {
	return "vps"
}

func (d *vpsDiscovery) getSource() string {
	return fmt.Sprintf("%s_%s", d.config.Name(), d.getService())
}

func (d *vpsDiscovery) refresh(ctx context.Context) ([]*targetgroup.Group, error) {
	client, err := createClient(d.config)
	if err != nil {
		return nil, err
	}

	var vpsDetailedList []virtualPrivateServer
	vpsList, err := getVpsList(client)
	if err != nil {
		return nil, err
	}

	for _, vpsName := range vpsList {
		vpsDetailed, err := getVpsDetails(client, vpsName)
		if err != nil {
			err := level.Warn(d.logger).Log("msg", fmt.Sprintf("%s: Could not get details of %s", d.getSource(), vpsName), "err", err.Error())
			if err != nil {
				return nil, err
			}
			continue
		}
		vpsDetailedList = append(vpsDetailedList, *vpsDetailed)
	}

	var targets []model.LabelSet
	for _, server := range vpsDetailedList {
		var ipv4, ipv6 string
		for _, ip := range server.ips {
			if ip.Is4() {
				ipv4 = ip.String()
			}
			if ip.Is6() {
				ipv6 = ip.String()
			}
		}
		defaultIP := ipv4
		if defaultIP == "" {
			defaultIP = ipv6
		}
		labels := model.LabelSet{
			model.AddressLabel:                       model.LabelValue(defaultIP),
			model.InstanceLabel:                      model.LabelValue(server.Name),
			vpsLabelPrefix + "offer":                 model.LabelValue(server.Model.Offer),
			vpsLabelPrefix + "datacenter":            model.LabelValue(fmt.Sprintf("%+v", server.Model.Datacenter)),
			vpsLabelPrefix + "model_vcore":           model.LabelValue(fmt.Sprintf("%d", server.Model.Vcore)),
			vpsLabelPrefix + "maximum_additional_ip": model.LabelValue(fmt.Sprintf("%d", server.Model.MaximumAdditionalIP)),
			vpsLabelPrefix + "version":               model.LabelValue(server.Model.Version),
			vpsLabelPrefix + "model_name":            model.LabelValue(server.Model.Name),
			vpsLabelPrefix + "disk":                  model.LabelValue(fmt.Sprintf("%d", server.Model.Disk)),
			vpsLabelPrefix + "memory":                model.LabelValue(fmt.Sprintf("%d", server.Model.Memory)),
			vpsLabelPrefix + "zone":                  model.LabelValue(server.Zone),
			vpsLabelPrefix + "display_name":          model.LabelValue(server.DisplayName),
			vpsLabelPrefix + "cluster":               model.LabelValue(server.Cluster),
			vpsLabelPrefix + "state":                 model.LabelValue(server.State),
			vpsLabelPrefix + "name":                  model.LabelValue(server.Name),
			vpsLabelPrefix + "netboot_mode":          model.LabelValue(server.NetbootMode),
			vpsLabelPrefix + "memory_limit":          model.LabelValue(fmt.Sprintf("%d", server.MemoryLimit)),
			vpsLabelPrefix + "offer_type":            model.LabelValue(server.OfferType),
			vpsLabelPrefix + "vcore":                 model.LabelValue(fmt.Sprintf("%d", server.Vcore)),
			vpsLabelPrefix + "ipv4":                  model.LabelValue(ipv4),
			vpsLabelPrefix + "ipv6":                  model.LabelValue(ipv6),
		}

		targets = append(targets, labels)
	}

	return []*targetgroup.Group{{Source: d.getSource(), Targets: targets}}, nil
}
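The `parseIPList` helper called in `getVpsDetails` is defined elsewhere in the package and does not appear in this diff. As a rough, hypothetical sketch of the job it has to do — converting the CIDR strings returned by the `/vps/{name}/ips` endpoint (see `vps_abc_ips.json` above) into `netip.Addr` values — assuming every entry carries a prefix length:

```go
package main

import (
	"fmt"
	"net/netip"
)

// parseIPList is an illustrative stand-in, not the actual Prometheus helper.
// It strips the prefix length from each CIDR entry and keeps the address.
func parseIPList(ipList []string) ([]netip.Addr, error) {
	var addrs []netip.Addr
	for _, entry := range ipList {
		prefix, err := netip.ParsePrefix(entry)
		if err != nil {
			return nil, err
		}
		addrs = append(addrs, prefix.Addr())
	}
	return addrs, nil
}

func main() {
	// Mirrors the testdata: one IPv4/32 and one IPv6/64 entry.
	ips, err := parseIPList([]string{"192.0.2.1/32", "2001:db8::1/64"})
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Printf("%s is4=%v\n", ip, ip.Is4())
	}
}
```

The real helper may also need to accept bare addresses without a prefix; this sketch only handles the CIDR form shown in the test data.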
130 discovery/ovhcloud/vps_test.go Normal file
@@ -0,0 +1,130 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package ovhcloud

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"

	yaml "gopkg.in/yaml.v2"

	"github.com/go-kit/log"
	"github.com/prometheus/common/model"
	"github.com/stretchr/testify/require"
)

func TestOvhCloudVpsRefresh(t *testing.T) {
	var cfg SDConfig

	mock := httptest.NewServer(http.HandlerFunc(MockVpsAPI))
	defer mock.Close()
	cfgString := fmt.Sprintf(`
---
service: vps
endpoint: %s
application_key: %s
application_secret: %s
consumer_key: %s`, mock.URL, ovhcloudApplicationKeyTest, ovhcloudApplicationSecretTest, ovhcloudConsumerKeyTest)

	require.NoError(t, yaml.UnmarshalStrict([]byte(cfgString), &cfg))

	d, err := newRefresher(&cfg, log.NewNopLogger())
	require.NoError(t, err)
	ctx := context.Background()
	targetGroups, err := d.refresh(ctx)
	require.NoError(t, err)

	require.Equal(t, 1, len(targetGroups))
	targetGroup := targetGroups[0]
	require.NotNil(t, targetGroup)
	require.NotNil(t, targetGroup.Targets)
	require.Equal(t, 1, len(targetGroup.Targets))
	for i, lbls := range []model.LabelSet{
		{
			"__address__":                               "192.0.2.1",
			"__meta_ovhcloud_vps_ipv4":                  "192.0.2.1",
			"__meta_ovhcloud_vps_ipv6":                  "",
			"__meta_ovhcloud_vps_cluster":               "cluster_test",
			"__meta_ovhcloud_vps_datacenter":            "[]",
			"__meta_ovhcloud_vps_disk":                  "40",
			"__meta_ovhcloud_vps_display_name":          "abc",
			"__meta_ovhcloud_vps_maximum_additional_ip": "16",
			"__meta_ovhcloud_vps_memory":                "2048",
			"__meta_ovhcloud_vps_memory_limit":          "2048",
			"__meta_ovhcloud_vps_model_name":            "vps-value-1-2-40",
			"__meta_ovhcloud_vps_name":                  "abc",
			"__meta_ovhcloud_vps_netboot_mode":          "local",
			"__meta_ovhcloud_vps_offer":                 "VPS abc",
			"__meta_ovhcloud_vps_offer_type":            "ssd",
			"__meta_ovhcloud_vps_state":                 "running",
			"__meta_ovhcloud_vps_vcore":                 "1",
			"__meta_ovhcloud_vps_model_vcore":           "1",
			"__meta_ovhcloud_vps_version":               "2019v1",
			"__meta_ovhcloud_vps_zone":                  "zone",
			"instance":                                  "abc",
		},
	} {
		t.Run(fmt.Sprintf("item %d", i), func(t *testing.T) {
			require.Equal(t, lbls, targetGroup.Targets[i])
		})
	}
}

func MockVpsAPI(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("X-Ovh-Application") != ovhcloudApplicationKeyTest {
		http.Error(w, "bad application key", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	if r.URL.Path == "/vps" {
		dedicatedServersList, err := os.ReadFile("testdata/vps/vps.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServersList)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	if r.URL.Path == "/vps/abc" {
		dedicatedServer, err := os.ReadFile("testdata/vps/vps_details.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServer)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	if r.URL.Path == "/vps/abc/ips" {
		dedicatedServerIPs, err := os.ReadFile("testdata/vps/vps_abc_ips.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		_, err = w.Write(dedicatedServerIPs)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
}
@@ -114,7 +114,10 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
 
 func (d *Discovery) refresh(ctx context.Context) ([]*targetgroup.Group, error) {
 	now := time.Now()
-	defer d.duration.Observe(time.Since(now).Seconds())
+	defer func() {
+		d.duration.Observe(time.Since(now).Seconds())
+	}()
+
 	tgs, err := d.refreshf(ctx)
 	if err != nil {
 		d.failures.Inc()
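The hunk above matters because `defer` evaluates its arguments immediately: the one-line form computes `time.Since(now)` at the moment the `defer` statement runs, recording a near-zero duration, while the closure form delays the computation until the surrounding function returns. A standalone sketch (with a hypothetical `observe` function standing in for the histogram's `Observe`):

```go
package main

import (
	"fmt"
	"time"
)

// observed collects durations passed to our stand-in for duration.Observe.
var observed []float64

func observe(seconds float64) { observed = append(observed, seconds) }

// directDefer evaluates time.Since(now) when the defer statement executes,
// so it records ~0s no matter how long the work takes.
func directDefer() {
	now := time.Now()
	defer observe(time.Since(now).Seconds())
	time.Sleep(50 * time.Millisecond) // simulated work
}

// closureDefer evaluates time.Since(now) inside the deferred closure, i.e.
// at return time, so it records the real elapsed duration -- the behavior
// the patch above restores.
func closureDefer() {
	now := time.Now()
	defer func() {
		observe(time.Since(now).Seconds())
	}()
	time.Sleep(50 * time.Millisecond) // simulated work
}

func main() {
	directDefer()
	closureDefer()
	fmt.Printf("direct: %.4fs closure: %.4fs\n", observed[0], observed[1])
}
```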
@@ -294,6 +294,10 @@ nomad_sd_configs:
 openstack_sd_configs:
   [ - <openstack_sd_config> ... ]
 
+# List of OVHcloud service discovery configurations.
+ovhcloud_sd_configs:
+  [ - <ovhcloud_sd_config> ... ]
+
 # List of PuppetDB service discovery configurations.
 puppetdb_sd_configs:
   [ - <puppetdb_sd_config> ... ]
@@ -518,6 +522,7 @@ The following meta labels are available on targets during [relabeling](#relabel_
 * `__meta_consul_address`: the address of the target
 * `__meta_consul_dc`: the datacenter name for the target
 * `__meta_consul_health`: the health status of the service
+* `__meta_consul_partition`: the admin partition name where the service is registered
 * `__meta_consul_metadata_<key>`: each node metadata key value of the target
 * `__meta_consul_node`: the node name defined for the target
 * `__meta_consul_service_address`: the service address of the target

@@ -536,6 +541,8 @@ The following meta labels are available on targets during [relabeling](#relabel_
 [ datacenter: <string> ]
 # Namespaces are only supported in Consul Enterprise.
 [ namespace: <string> ]
+# Admin Partitions are only supported in Consul Enterprise.
+[ partition: <string> ]
 [ scheme: <string> | default = "http" ]
 # The username and password fields are deprecated in favor of the basic_auth configuration.
 [ username: <string> ]
@@ -1173,6 +1180,66 @@ tls_config:
   [ <tls_config> ]
 ```
 
+### `<ovhcloud_sd_config>`
+
+OVHcloud SD configurations allow retrieving scrape targets from OVHcloud's [dedicated servers](https://www.ovhcloud.com/en/bare-metal/) and [VPS](https://www.ovhcloud.com/en/vps/) using
+their [API](https://api.ovh.com/).
+Prometheus will periodically check the REST endpoint and create a target for every discovered server.
+The role will try to use the public IPv4 address as the default address; if there is none, it will try to use the IPv6 one. This may be changed with relabeling.
+For OVHcloud's [public cloud instances](https://www.ovhcloud.com/en/public-cloud/) you can use the [openstack_sd_config](#openstack_sd_config).
+
+#### VPS
+
+* `__meta_ovhcloud_vps_cluster`: the cluster of the server
+* `__meta_ovhcloud_vps_datacenter`: the datacenter of the server
+* `__meta_ovhcloud_vps_disk`: the disk of the server
+* `__meta_ovhcloud_vps_display_name`: the display name of the server
+* `__meta_ovhcloud_vps_ipv4`: the IPv4 of the server
+* `__meta_ovhcloud_vps_ipv6`: the IPv6 of the server
+* `__meta_ovhcloud_vps_keymap`: the KVM keyboard layout of the server
+* `__meta_ovhcloud_vps_maximum_additional_ip`: the maximum additional IPs of the server
+* `__meta_ovhcloud_vps_memory_limit`: the memory limit of the server
+* `__meta_ovhcloud_vps_memory`: the memory of the server
+* `__meta_ovhcloud_vps_monitoring_ip_blocks`: the monitoring IP blocks of the server
+* `__meta_ovhcloud_vps_name`: the name of the server
+* `__meta_ovhcloud_vps_netboot_mode`: the netboot mode of the server
+* `__meta_ovhcloud_vps_offer_type`: the offer type of the server
+* `__meta_ovhcloud_vps_offer`: the offer of the server
+* `__meta_ovhcloud_vps_state`: the state of the server
+* `__meta_ovhcloud_vps_vcore`: the number of virtual cores of the server
+* `__meta_ovhcloud_vps_version`: the version of the server
+* `__meta_ovhcloud_vps_zone`: the zone of the server
+
+#### Dedicated servers
+
+* `__meta_ovhcloud_dedicated_server_commercial_range`: the commercial range of the server
+* `__meta_ovhcloud_dedicated_server_datacenter`: the datacenter of the server
+* `__meta_ovhcloud_dedicated_server_ipv4`: the IPv4 of the server
+* `__meta_ovhcloud_dedicated_server_ipv6`: the IPv6 of the server
+* `__meta_ovhcloud_dedicated_server_link_speed`: the link speed of the server
+* `__meta_ovhcloud_dedicated_server_name`: the name of the server
+* `__meta_ovhcloud_dedicated_server_os`: the operating system of the server
+* `__meta_ovhcloud_dedicated_server_rack`: the rack of the server
+* `__meta_ovhcloud_dedicated_server_reverse`: the reverse DNS name of the server
+* `__meta_ovhcloud_dedicated_server_server_id`: the ID of the server
+* `__meta_ovhcloud_dedicated_server_state`: the state of the server
+* `__meta_ovhcloud_dedicated_server_support_level`: the support level of the server
+
+See below for the configuration options for OVHcloud discovery:
+
+```yaml
+# Access key to use. https://api.ovh.com
+application_key: <string>
+application_secret: <secret>
+consumer_key: <secret>
+# Service of the targets to retrieve. Must be `vps` or `dedicated_server`.
+service: <string>
+# API endpoint. https://github.com/ovh/go-ovh#supported-apis
+[ endpoint: <string> | default = "ovh-eu" ]
+# Refresh interval to re-read the resources list.
+[ refresh_interval: <duration> | default = 60s ]
+```
+
 ### `<puppetdb_sd_config>`
 
 PuppetDB SD configurations allow retrieving scrape targets from
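Putting the options above into a scrape configuration might look like the following minimal sketch (the job name and all key values are placeholders):

```yaml
scrape_configs:
  - job_name: "ovhcloud-vps"
    ovhcloud_sd_configs:
      - application_key: "<your application key>"
        application_secret: "<your application secret>"
        consumer_key: "<your consumer key>"
        service: vps
```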
@@ -1409,7 +1476,6 @@ The labels below are only available for targets with `role` set to `hcloud`:
 * `__meta_hetzner_hcloud_image_description`: the description of the server image
 * `__meta_hetzner_hcloud_image_os_flavor`: the OS flavor of the server image
 * `__meta_hetzner_hcloud_image_os_version`: the OS version of the server image
-* `__meta_hetzner_hcloud_image_description`: the description of the server image
 * `__meta_hetzner_hcloud_datacenter_location`: the location of the server
 * `__meta_hetzner_hcloud_datacenter_location_network_zone`: the network zone of the server
 * `__meta_hetzner_hcloud_server_type`: the type of the server
@@ -2266,7 +2332,7 @@ tls_config:
 ### `<serverset_sd_config>`
 
 Serverset SD configurations allow retrieving scrape targets from [Serversets]
-(https://github.com/twitter/finagle/tree/master/finagle-serversets) which are
+(https://github.com/twitter/finagle/tree/develop/finagle-serversets) which are
 stored in [Zookeeper](https://zookeeper.apache.org/). Serversets are commonly
 used by [Finagle](https://twitter.github.io/finagle/) and
 [Aurora](https://aurora.apache.org/).
@@ -2790,6 +2856,8 @@ anchored on both ends. To un-anchor the regex, use `.*<regex>.*`.
 * `uppercase`: Maps the concatenated `source_labels` to their upper case.
 * `keep`: Drop targets for which `regex` does not match the concatenated `source_labels`.
 * `drop`: Drop targets for which `regex` matches the concatenated `source_labels`.
+* `keepequal`: Drop targets for which the concatenated `source_labels` do not match `target_label`.
+* `dropequal`: Drop targets for which the concatenated `source_labels` do match `target_label`.
 * `hashmod`: Set `target_label` to the `modulus` of a hash of the concatenated `source_labels`.
 * `labelmap`: Match `regex` against all source label names, not just those specified in `source_labels`. Then
   copy the values of the matching labels to label names given by `replacement` with match
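As an illustration of the new `keepequal` action (the meta labels chosen here are just one plausible pairing), the rule below would drop any target whose container port number does not equal the port declared in the pod's `prometheus.io/port` annotation:

```yaml
relabel_configs:
  - action: keepequal
    source_labels: [__meta_kubernetes_pod_container_port_number]
    target_label: __meta_kubernetes_pod_annotation_prometheus_io_port
```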
@@ -2962,6 +3030,10 @@ nomad_sd_configs:
 openstack_sd_configs:
   [ - <openstack_sd_config> ... ]
 
+# List of OVHcloud service discovery configurations.
+ovhcloud_sd_configs:
+  [ - <ovhcloud_sd_config> ... ]
+
 # List of PuppetDB service discovery configurations.
 puppetdb_sd_configs:
   [ - <puppetdb_sd_config> ... ]
@@ -3028,6 +3100,9 @@ write_relabel_configs:
 # Enables sending of exemplars over remote write. Note that exemplar storage itself must be enabled for exemplars to be scraped in the first place.
 [ send_exemplars: <boolean> | default = false ]
 
+# Enables sending of native histograms, also known as sparse histograms, over remote write.
+[ send_native_histograms: <boolean> | default = false ]
+
 # Sets the `Authorization` header on every remote write request with the
 # configured username and password.
 # password and password_file are mutually exclusive.
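A `remote_write` section enabling the new option could look like this minimal sketch (the URL is a placeholder):

```yaml
remote_write:
  - url: "https://remote-storage.example.com/api/v1/write"
    send_exemplars: true
    send_native_histograms: true
```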
@@ -17,6 +17,10 @@ Rule files use YAML.
 The rule files can be reloaded at runtime by sending `SIGHUP` to the Prometheus
 process. The changes are only applied if all rule files are well-formatted.
 
+_Note about native histograms (experimental feature): Rules evaluating to
+native histograms do not yet work as expected. Instead of a native histogram,
+the sample stored is just a floating point value of zero._
+
 ## Syntax-checking rules
 
 To quickly check whether a rule file is syntactically correct without starting
@@ -103,3 +103,26 @@ When enabled, the default ports for HTTP (`:80`) or HTTPS (`:443`) will _not_ be
 the address used to scrape a target (the value of the `__address__` label), contrary to the default behavior.
 In addition, if a default HTTP or HTTPS port has already been added either in a static configuration or
 by a service discovery mechanism and the respective scheme is specified (`http` or `https`), that port will be removed.
+
+## Native Histograms
+
+`--enable-feature=native-histograms`
+
+When enabled, Prometheus will ingest native histograms (formerly also known as
+sparse histograms or high-res histograms). Native histograms are still highly
+experimental. Expect breaking changes to happen (including those rendering the
+TSDB unreadable).
+
+Native histograms are currently only supported in the traditional Prometheus
+protobuf exposition format. This feature flag therefore also enables a new (and
+also experimental) protobuf parser, through which _all_ metrics are ingested
+(i.e. not only native histograms). Prometheus will try to negotiate the
+protobuf format first. The instrumented target needs to support the protobuf
+format, too, _and_ it needs to expose native histograms. The protobuf format
+allows exposing conventional and native histograms side by side. With this
+feature flag disabled, Prometheus will continue to parse the conventional
+histogram (albeit via the text format). With this flag enabled, Prometheus will
+still ingest those conventional histograms that do not come with a
+corresponding native histogram. However, if a native histogram is present,
+Prometheus will ignore the corresponding conventional histogram, with the
+notable exception of exemplars, which are always ingested.
@@ -8,6 +8,9 @@ sort_rank: 6
 Federation allows a Prometheus server to scrape selected time series from
 another Prometheus server.
 
+_Note about native histograms (experimental feature): Federation does not
+support native histograms yet._
+
 ## Use cases
 
 There are different use cases for federation. Commonly, it is used to either
@@ -447,6 +447,12 @@ sample values. JSON does not support special float values such as `NaN`, `Inf`,
 and `-Inf`, so sample values are transferred as quoted JSON strings rather than
 raw numbers.
 
+The keys `"histogram"` and `"histograms"` only show up if the experimental
+native histograms are present in the response. Their placeholder `<histogram>`
+is explained in detail in its own section below. Any one object will only have
+the `"value"`/`"values"` key or the `"histogram"`/`"histograms"` key, but not
+both.
+
 ### Range vectors
 
 Range vectors are returned as result type `matrix`. The corresponding
@@ -456,7 +462,8 @@ Range vectors are returned as result type `matrix`. The corresponding
 [
   {
     "metric": { "<label_name>": "<label_value>", ... },
-    "values": [ [ <unix_time>, "<sample_value>" ], ... ]
+    "values": [ [ <unix_time>, "<sample_value>" ], ... ],
+    "histograms": [ [ <unix_time>, <histogram> ], ... ]
   },
   ...
 ]
@@ -471,7 +478,8 @@ Instant vectors are returned as result type `vector`. The corresponding
 [
   {
     "metric": { "<label_name>": "<label_value>", ... },
-    "value": [ <unix_time>, "<sample_value>" ]
+    "value": [ <unix_time>, "<sample_value>" ],
+    "histogram": [ <unix_time>, <histogram> ]
   },
   ...
 ]
@@ -495,6 +503,33 @@ String results are returned as result type `string`. The corresponding
 [ <unix_time>, "<string_value>" ]
 ```
 
+### Native histograms
+
+The `<histogram>` placeholder used above is formatted as follows.
+
+_Note that native histograms are an experimental feature, and the format below
+might still change._
+
+```
+{
+  "count": "<count_of_observations>",
+  "sum": "<sum_of_observations>",
+  "buckets": [ [ <boundary_rule>, "<left_boundary>", "<right_boundary>", "<count_in_bucket>" ], ... ]
+}
+```
+
+The `<boundary_rule>` placeholder is an integer between 0 and 3 with the
+following meaning:
+
+* 0: “open left” (left boundary is exclusive, right boundary is inclusive)
+* 1: “open right” (left boundary is inclusive, right boundary is exclusive)
+* 2: “open both” (both boundaries are exclusive)
+* 3: “closed both” (both boundaries are inclusive)
+
+Note that with the currently implemented bucket schemas, positive buckets are
+“open left”, negative buckets are “open right”, and the zero bucket (with a
+negative left boundary and a positive right boundary) is “closed both”.
+
|
||||||
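As a sketch, decoding such a histogram and checking that the per-bucket counts add up to the total count (the sample values below are made up; note that counts, sums, and boundaries are all encoded as JSON strings):

```python
import json

# A made-up native histogram in the JSON shape described above:
# each bucket is [boundary_rule, left_boundary, right_boundary, count].
hist = json.loads("""
{
  "count": "6",
  "sum": "3.4",
  "buckets": [
    [3, "-0.001", "0.001", "2"],
    [0, "0.5", "1", "3"],
    [0, "1", "2", "1"]
  ]
}
""")

# All numeric fields arrive as strings and must be parsed.
total = float(hist["count"])
in_buckets = sum(float(b[3]) for b in hist["buckets"])

# With no observations outside the listed buckets, the totals agree.
print(total, in_buckets)
```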
## Targets

The following endpoint returns an overview of the current state of the
@@ -32,6 +32,16 @@ expression), only some of these types are legal as the result from a
user-specified expression. For example, an expression that returns an instant
vector is the only type that can be directly graphed.

_Notes about the experimental native histograms:_

* Ingesting native histograms has to be enabled via a [feature
  flag](../feature_flags/#native-histograms).
* Once native histograms have been ingested into the TSDB (and even after
  disabling the feature flag again), both instant vectors and range vectors may
  now contain samples that aren't simple floating point numbers (float samples)
  but complete histograms (histogram samples). A vector may contain a mix of
  float samples and histogram samples.

## Literals

### String literals
@@ -11,6 +11,22 @@ instant-vector)`. This means that there is one argument `v` which is an instant
vector, which, if not provided, defaults to the value of the expression
`vector(time())`.

_Notes about the experimental native histograms:_

* Ingesting native histograms has to be enabled via a [feature
  flag](../feature_flags/#native-histograms). As long as no native histograms
  have been ingested into the TSDB, all functions will behave as usual.
* Functions that do not explicitly mention native histograms in their
  documentation (see below) effectively treat a native histogram as a float
  sample of value 0. (This is confusing and will change before native
  histograms become a stable feature.)
* Functions that do already act on native histograms might still change their
  behavior in the future.
* If a function requires the same bucket layout between multiple native
  histograms it acts on, it will automatically convert them
  appropriately. (With the currently supported bucket schemas, that's always
  possible.)
## `abs()`

`abs(v instant-vector)` returns the input vector with all sample values converted to
@@ -19,8 +35,8 @@ their absolute value.
## `absent()`

`absent(v instant-vector)` returns an empty vector if the vector passed to it
has any elements (floats or native histograms) and a 1-element vector with the
value 1 if the vector passed to it has no elements.

This is useful for alerting on when no time series exist for a given metric name
and label combination.
@@ -42,8 +58,8 @@ of the 1-element output vector from the input vector.
## `absent_over_time()`

`absent_over_time(v range-vector)` returns an empty vector if the range vector
passed to it has any elements (floats or native histograms) and a 1-element
vector with the value 1 if the range vector passed to it has no elements.

This is useful for alerting on when no time series exist for a given metric name
and label combination for a certain amount of time.
@@ -130,7 +146,14 @@ between now and 2 hours ago:
delta(cpu_temp_celsius{host="zeus"}[2h])
```

`delta` acts on native histograms by calculating a new histogram where each
component (sum and count of observations, buckets) is the difference between
the respective component in the first and last native histogram in
`v`. However, each element in `v` that contains a mix of float and native
histogram samples within the range will be missing from the result vector.

`delta` should only be used with gauges and native histograms where the
components behave like gauges (so-called gauge histograms).

## `deriv()`
@@ -154,53 +177,148 @@ Special cases are:
`floor(v instant-vector)` rounds the sample values of all elements in `v` down
to the nearest integer.

## `histogram_count()` and `histogram_sum()`

_Both functions only act on native histograms, which are an experimental
feature. The behavior of these functions may change in future versions of
Prometheus, including their removal from PromQL._

`histogram_count(v instant-vector)` returns the count of observations stored in
a native histogram. Samples that are not native histograms are ignored and do
not show up in the returned vector.

Similarly, `histogram_sum(v instant-vector)` returns the sum of observations
stored in a native histogram.

Use `histogram_count` in the following way to calculate a rate of observations
(in this case corresponding to “requests per second”) from a native histogram:

    histogram_count(rate(http_request_duration_seconds[10m]))

The additional use of `histogram_sum` enables the calculation of the average of
observed values (in this case corresponding to “average request duration”):

      histogram_sum(rate(http_request_duration_seconds[10m]))
    /
      histogram_count(rate(http_request_duration_seconds[10m]))
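The two queries above combine per-second component rates; numerically, the same arithmetic on a single histogram's `sum` and `count` rates looks like this (the rate values below are invented for illustration):

```python
# Per-second rates of the two histogram components over a window,
# as rate() would produce them (invented numbers).
count_rate = 25.0   # observations per second ("requests per second")
sum_rate = 3.75     # seconds of observed duration accumulated per second

# Average observed value = sum rate / count rate
# ("average request duration").
avg = sum_rate / count_rate
print(avg)
```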
## `histogram_fraction()`

_This function only acts on native histograms, which are an experimental
feature. The behavior of this function may change in future versions of
Prometheus, including its removal from PromQL._

For a native histogram, `histogram_fraction(lower scalar, upper scalar, v
instant-vector)` returns the estimated fraction of observations between the
provided lower and upper values. Samples that are not native histograms are
ignored and do not show up in the returned vector.

For example, the following expression calculates the fraction of HTTP requests
over the last hour that took 200ms or less:

    histogram_fraction(0, 0.2, rate(http_request_duration_seconds[1h]))

The error of the estimation depends on the resolution of the underlying native
histogram and how closely the provided boundaries are aligned with the bucket
boundaries in the histogram.

`+Inf` and `-Inf` are valid boundary values. For example, if the histogram in
the expression above included negative observations (which shouldn't be the
case for request durations), the appropriate lower boundary to include all
observations less than or equal to 0.2 would be `-Inf` rather than `0`.

Whether the provided boundaries are inclusive or exclusive is only relevant if
the provided boundaries are precisely aligned with bucket boundaries in the
underlying native histogram. In this case, the behavior depends on the schema
definition of the histogram. The currently supported schemas all feature
inclusive upper boundaries and exclusive lower boundaries for positive values
(and vice versa for negative values). Without a precise alignment of
boundaries, the function uses linear interpolation to estimate the
fraction. With the resulting uncertainty, it becomes irrelevant whether the
boundaries are inclusive or exclusive.
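The linear-interpolation idea can be sketched as follows. This is only a model of the estimation described above, not Prometheus's implementation; the bucket tuples and counts are invented:

```python
def fraction(lower, upper, buckets, count):
    """Estimate the fraction of observations in [lower, upper] from
    buckets given as (left, right, bucket_count) tuples. Partially
    covered buckets contribute proportionally to the covered width
    (linear interpolation). A sketch of the idea only."""
    covered = 0.0
    for left, right, n in buckets:
        lo = max(lower, left)
        hi = min(upper, right)
        if hi <= lo:
            continue  # bucket lies entirely outside [lower, upper]
        # Assume observations are spread linearly across the bucket.
        covered += n * (hi - lo) / (right - left)
    return covered / count

# Three positive buckets holding 10 observations in total.
buckets = [(0.25, 0.5, 2), (0.5, 1.0, 5), (1.0, 2.0, 3)]
print(fraction(0, 0.75, buckets, 10))
```

The second bucket is only half covered by `[0, 0.75]`, so it contributes half its count to the estimate.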
## `histogram_quantile()`

`histogram_quantile(φ scalar, b instant-vector)` calculates the φ-quantile (0 ≤
φ ≤ 1) from a [conventional
histogram](https://prometheus.io/docs/concepts/metric_types/#histogram) or from
a native histogram. (See [histograms and
summaries](https://prometheus.io/docs/practices/histograms) for a detailed
explanation of φ-quantiles and the usage of the (conventional) histogram metric
type in general.)

_Note that native histograms are an experimental feature. The behavior of this
function when dealing with native histograms may change in future versions of
Prometheus._

The conventional float samples in `b` are considered the counts of observations
in each bucket of one or more conventional histograms. Each float sample must
have a label `le` where the label value denotes the inclusive upper bound of
the bucket. (Float samples without such a label are silently ignored.) The
other labels and the metric name are used to identify the buckets belonging to
each conventional histogram. The [histogram metric
type](https://prometheus.io/docs/concepts/metric_types/#histogram)
automatically provides time series with the `_bucket` suffix and the
appropriate labels.

The native histogram samples in `b` are each treated individually as a separate
histogram to calculate the quantile from.

As long as no naming collisions arise, `b` may contain a mix of conventional
and native histograms.

Use the `rate()` function to specify the time window for the quantile
calculation.

Example: A histogram metric is called `http_request_duration_seconds` (and
therefore the metric name for the buckets of a conventional histogram is
`http_request_duration_seconds_bucket`). To calculate the 90th percentile of
request durations over the last 10m, use the following expression in case
`http_request_duration_seconds` is a conventional histogram:

    histogram_quantile(0.9, rate(http_request_duration_seconds_bucket[10m]))

For a native histogram, use the following expression instead:

    histogram_quantile(0.9, rate(http_request_duration_seconds[10m]))

The quantile is calculated for each label combination in
`http_request_duration_seconds`. To aggregate, use the `sum()` aggregator
around the `rate()` function. Since the `le` label is required by
`histogram_quantile()` to deal with conventional histograms, it has to be
included in the `by` clause. The following expression aggregates the 90th
percentile by `job` for conventional histograms:

    histogram_quantile(0.9, sum by (job, le) (rate(http_request_duration_seconds_bucket[10m])))

When aggregating native histograms, the expression simplifies to:

    histogram_quantile(0.9, sum by (job) (rate(http_request_duration_seconds[10m])))

To aggregate all conventional histograms, specify only the `le` label:

    histogram_quantile(0.9, sum by (le) (rate(http_request_duration_seconds_bucket[10m])))

With native histograms, aggregating everything works as usual without any `by` clause:

    histogram_quantile(0.9, sum(rate(http_request_duration_seconds[10m])))

The `histogram_quantile()` function interpolates quantile values by
assuming a linear distribution within a bucket.

If `b` has 0 observations, `NaN` is returned. For φ < 0, `-Inf` is
returned. For φ > 1, `+Inf` is returned. For φ = `NaN`, `NaN` is returned.

The following is only relevant for conventional histograms: If `b` contains
fewer than two buckets, `NaN` is returned. The highest bucket must have an
upper bound of `+Inf`. (Otherwise, `NaN` is returned.) If a quantile is located
in the highest bucket, the upper bound of the second highest bucket is
returned. A lower limit of the lowest bucket is assumed to be 0 if the upper
bound of that bucket is greater than 0. In that case, the usual linear
interpolation is applied within that bucket. Otherwise, the upper bound of the
lowest bucket is returned for quantiles located in the lowest bucket.
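For conventional histograms, the rules above can be sketched with cumulative `le` buckets. This is a simplified model of the documented behavior, not the actual PromQL code, and the bucket data is invented:

```python
import math

def histogram_quantile(phi, buckets):
    """buckets: sorted (upper_bound, cumulative_count) pairs; the last
    upper bound must be +Inf. Simplified sketch of the rules described
    above (e.g. the lower limit of the lowest bucket is assumed to be
    0), not the exact PromQL implementation."""
    if phi < 0:
        return float("-inf")
    if phi > 1:
        return float("inf")
    total = buckets[-1][1]
    if math.isnan(phi) or total == 0 or len(buckets) < 2:
        return float("nan")
    if buckets[-1][0] != float("inf"):
        return float("nan")  # highest bucket must be +Inf
    rank = phi * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if rank <= count:
            if math.isinf(bound):
                # Quantile falls in the +Inf bucket: return the upper
                # bound of the second-highest bucket.
                return buckets[-2][0]
            # Linear interpolation within the bucket.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count

buckets = [(0.1, 20), (0.5, 70), (1.0, 90), (float("inf"), 100)]
print(histogram_quantile(0.9, buckets))
```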
## `holt_winters()`
@@ -242,11 +360,17 @@ over the last 5 minutes, per time series in the range vector:
increase(http_requests_total{job="api-server"}[5m])
```

`increase` acts on native histograms by calculating a new histogram where each
component (sum and count of observations, buckets) is the increase between
the respective component in the first and last native histogram in
`v`. However, each element in `v` that contains a mix of float and native
histogram samples within the range will be missing from the result vector.

`increase` should only be used with counters and native histograms where the
components behave like counters. It is syntactic sugar for `rate(v)` multiplied
by the number of seconds under the specified time range window, and should be
used primarily for human readability. Use `rate` in recording rules so that
increases are tracked consistently on a per-second basis.
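Component-wise, the increase of a native histogram over a range can be sketched like this (the first/last snapshots of one series are hypothetical):

```python
# Hypothetical first and last native-histogram samples in the range,
# reduced to their components: count, sum, and per-bucket counts.
first = {"count": 100, "sum": 25.0, "buckets": {(0.5, 1.0): 40, (1.0, 2.0): 60}}
last = {"count": 160, "sum": 43.0, "buckets": {(0.5, 1.0): 70, (1.0, 2.0): 90}}

def component_increase(a, b):
    # Each component of the result is the difference between the
    # respective components of the last and first histogram.
    return {
        "count": b["count"] - a["count"],
        "sum": b["sum"] - a["sum"],
        "buckets": {k: b["buckets"][k] - a["buckets"][k] for k in a["buckets"]},
    }

inc = component_increase(first, last)
print(inc["count"], inc["sum"])
```

Dividing each resulting component by the window length in seconds gives the corresponding per-second `rate`.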
## `irate()`

@@ -358,8 +482,15 @@ over the last 5 minutes, per time series in the range vector:
rate(http_requests_total{job="api-server"}[5m])
```

`rate` acts on native histograms by calculating a new histogram where each
component (sum and count of observations, buckets) is the rate of increase
between the respective component in the first and last native histogram in
`v`. However, each element in `v` that contains a mix of float and native
histogram samples within the range will be missing from the result vector.

`rate` should only be used with counters and native histograms where the
components behave like counters. It is best suited for alerting, and for
graphing of slow-moving counters.

Note that when combining `rate()` with an aggregation operator (e.g. `sum()`)
or a function aggregating over time (any function ending in `_over_time`),
@@ -306,3 +306,31 @@ highest to lowest.
Operators on the same precedence level are left-associative. For example,
`2 * 3 % 2` is equivalent to `(2 * 3) % 2`. However `^` is right associative,
so `2 ^ 3 ^ 2` is equivalent to `2 ^ (3 ^ 2)`.

## Operators for native histograms

Native histograms are an experimental feature. Ingesting native histograms has
to be enabled via a [feature flag](../feature_flags/#native-histograms). Once
native histograms have been ingested, they can be queried (even after the
feature flag has been disabled again). However, the operator support for native
histograms is still very limited.

Logical/set binary operators work as expected even if histogram samples are
involved. They only check for the existence of a vector element and don't
change their behavior depending on the sample type of an element (float or
histogram).

The binary `+` operator between two native histograms and the `sum` aggregation
operator to aggregate native histograms are fully supported. Even if the
histograms involved have different bucket layouts, the buckets are
automatically converted appropriately so that the operation can be
performed. (With the currently supported bucket schemas, that's always
possible.) If either operator has to sum up a mix of histogram samples and
float samples, the corresponding vector element is removed from the output
vector entirely.

All other operators do not behave in a meaningful way. They either treat the
histogram sample as if it were a float sample of value 0, or (in case of
arithmetic operations between a scalar and a vector) they leave the histogram
sample unchanged. This behavior will change to a meaningful one before native
histograms are a stable feature.
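The `+` operator between histograms can be pictured as summing counts, sums, and per-bucket counts after both operands are on a common bucket layout. A toy model with explicit bucket keys (which sidesteps the real schema conversion), using invented histograms:

```python
from collections import Counter

def add_histograms(a, b):
    """Toy model of `+` between two native histograms: sum the counts,
    sums, and per-bucket counts. Real native histograms first convert
    both operands to a compatible bucket layout; keying buckets by
    their boundaries sidesteps that here."""
    return {
        "count": a["count"] + b["count"],
        "sum": a["sum"] + b["sum"],
        # Counter addition merges buckets present in either histogram.
        "buckets": Counter(a["buckets"]) + Counter(b["buckets"]),
    }

h1 = {"count": 3, "sum": 2.5, "buckets": {(0.5, 1.0): 3}}
h2 = {"count": 4, "sum": 6.5, "buckets": {(0.5, 1.0): 1, (1.0, 2.0): 3}}
total = add_histograms(h1, h2)
print(total["count"], total["sum"])
```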
@@ -57,11 +57,6 @@ scrape_configs:
        regex: default;kubernetes;https

  # Scrape config for nodes (kubelet).
  - job_name: "kubernetes-nodes"

    # Default to scraping over https. If required, just disable this or change to
documentation/examples/prometheus-ovhcloud.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
# An example scrape configuration for running Prometheus with Ovhcloud.
scrape_configs:
  - job_name: 'ovhcloud'
    ovhcloud_sd_configs:
      - service: vps
        endpoint: ovh-eu
        application_key: XXX
        application_secret: XXX
        consumer_key: XXX
        refresh_interval: 1m
      - service: dedicated_server
        endpoint: ovh-eu
        application_key: XXX
        application_secret: XXX
        consumer_key: XXX
        refresh_interval: 1m
@@ -49,6 +49,11 @@ func main() {
		}
		fmt.Printf("\tExemplar: %+v %f %d\n", m, e.Value, e.Timestamp)
	}

	for _, hp := range ts.Histograms {
		h := remote.HistogramProtoToHistogram(hp)
		fmt.Printf("\tHistogram: %s\n", h.String())
	}
}
})
@@ -7,9 +7,9 @@ require (
	github.com/gogo/protobuf v1.3.2
	github.com/golang/snappy v0.0.4
	github.com/influxdata/influxdb v1.10.0
	github.com/prometheus/client_golang v1.13.1
	github.com/prometheus/common v0.37.0
	github.com/stretchr/testify v1.8.1
	gopkg.in/alecthomas/kingpin.v2 v2.2.6
)
@@ -29,8 +29,7 @@ require (
	github.com/grafana/regexp v0.0.0-20220304095617-2e8d9baf4ac2 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
	github.com/jpillora/backoff v1.0.0 // indirect
	github.com/kr/pretty v0.3.0 // indirect
	github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
	github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
	github.com/pkg/errors v0.9.1 // indirect
@@ -55,7 +54,7 @@ require (
)

require (
	github.com/prometheus/prometheus v0.37.1-0.20221011120840-430bdc9dd099
	golang.org/x/oauth2 v0.0.0-20220808172628-8227340efae7 // indirect
)
@@ -19,7 +19,7 @@ github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/armon/go-metrics v0.3.3 h1:a9F4rlj7EWWrbj7BYw8J8+x+ZZkJeqzNyRk8hdPF+ro=
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.72 h1:i7J5XT7pjBjtl1OrdIhiQHzsG89wkZCcM1HhyK++3DI=
github.com/aws/aws-sdk-go v1.44.72/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=

@@ -103,19 +103,19 @@ github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/gophercloud/gophercloud v0.25.0 h1:C3Oae7y0fUVQGSsBrb3zliAjdX+riCSEh4lNMejFNI4=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/grafana/regexp v0.0.0-20220304095617-2e8d9baf4ac2 h1:uirlL/j72L93RhV4+mkWhjv0cov2I0MIgPOG9rMDr1k=
github.com/grafana/regexp v0.0.0-20220304095617-2e8d9baf4ac2/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
github.com/hashicorp/consul/api v1.13.1 h1:r5cPdVFUy+pFF7nt+0ArLD9hm+E39OewJkvNdjKXcL4=
github.com/hashicorp/cronexpr v1.1.1 h1:NJZDd87hGXjoZBdvyCF9mX4DCq5Wy7+A/w+A7q0wn6c=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-hclog v0.12.2 h1:F1fdYblUEsxKiailtkhCCG2g4bipEgaHiDc8vffNpD4=
github.com/hashicorp/go-immutable-radix v1.2.0 h1:l6UW37iCXwZkZoAbEYnptSHVE/cQ5bOTPYG5W3vf9+8=
github.com/hashicorp/go-retryablehttp v0.7.1 h1:sUiuQAnLlbvmExtFQs72iFW/HXeUn8Z1aJLQ4LJJbTQ=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/nomad/api v0.0.0-20220629141207-c2428e1673ec h1:jAF71e0KoaY2LJlRsRxxGz6MNQOG5gTBIc+rklxfNO0=
github.com/hashicorp/serf v0.9.6 h1:uuEX1kLR6aoda1TBttmJQKDLZE1Ob7KN0NPdE7EtCDc=
github.com/hetznercloud/hcloud-go v1.35.2 h1:eEDtmDiI2plZ2UQmj4YpiYse5XbtpXOUBpAdIOLxzgE=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/influxdata/influxdb v1.10.0 h1:8xDpt8KO3lzrzf/ss+l8r42AGUZvoITu5824berK7SE=

@@ -142,8 +142,8 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxv
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=

@@ -157,7 +157,7 @@ github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182aff
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
|
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
|
||||||
github.com/miekg/dns v1.1.50 h1:DQUfb9uc6smULcREF09Uc+/Gd46YWqJd5DbpPE9xkcA=
|
github.com/miekg/dns v1.1.50 h1:DQUfb9uc6smULcREF09Uc+/Gd46YWqJd5DbpPE9xkcA=
|
||||||
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
|
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
|
||||||
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
|
github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGgCcyj8cs=
|
||||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
|
@ -183,8 +183,8 @@ github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn
|
||||||
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
|
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
|
||||||
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
|
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
|
||||||
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
|
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
|
||||||
github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
|
github.com/prometheus/client_golang v1.13.1 h1:3gMjIY2+/hzmqhtUC/aQNYldJA6DtH3CgQvwS+02K1c=
|
||||||
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
|
github.com/prometheus/client_golang v1.13.1/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
|
||||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||||
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
|
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
|
||||||
|
@ -205,8 +205,10 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
|
||||||
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
||||||
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
|
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
|
||||||
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
|
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
|
||||||
github.com/prometheus/prometheus v0.38.0 h1:YSiJ5gDZmXnOntPRyHn1wb/6I1Frasj9dw57XowIqeA=
|
github.com/prometheus/prometheus v0.37.1-0.20221011120840-430bdc9dd099 h1:ISpgxhFfSrMztQTw0Za6xDDC3Fwe4kciR8Pwv3Sz9yE=
|
||||||
github.com/prometheus/prometheus v0.38.0/go.mod h1:2zHO5FtRhM+iu995gwKIb99EXxjeZEuXpKUTIRq4YI0=
|
github.com/prometheus/prometheus v0.37.1-0.20221011120840-430bdc9dd099/go.mod h1:dfkjkdCd3FhLE0BiBIKwwwkZiDQnTnDThE1Zex1UwbA=
|
||||||
|
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
|
||||||
|
github.com/rogpeppe/go-internal v1.8.1 h1:geMPLpDpQOgVyCg5z5GoRwLHepNdb71NXb67XFkP+Eg=
|
||||||
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9 h1:0roa6gXKgyta64uqh52AQG3wzZXH21unn+ltzQSXML0=
|
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9 h1:0roa6gXKgyta64uqh52AQG3wzZXH21unn+ltzQSXML0=
|
||||||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||||
|
@ -216,13 +218,15 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||||
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
|
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
|
||||||
|
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
|
||||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||||
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||||
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
|
|
||||||
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
||||||
|
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
|
||||||
|
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||||
github.com/vultr/govultr/v2 v2.17.2 h1:gej/rwr91Puc/tgh+j33p/BLR16UrIPnSr+AIwYWZQs=
|
github.com/vultr/govultr/v2 v2.17.2 h1:gej/rwr91Puc/tgh+j33p/BLR16UrIPnSr+AIwYWZQs=
|
||||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||||
|
@ -320,7 +324,7 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
|
||||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||||
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||||
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
|
golang.org/x/tools v0.1.13-0.20220908144252-ce397412b6a4 h1:glzimF7qHZuKVEiMbE7UqBu44MyTjt5u6j3Jz+rfMRM=
|
||||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
|
@ -329,7 +333,7 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
|
||||||
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||||
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
|
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
|
||||||
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||||
google.golang.org/genproto v0.0.0-20220808204814-fd01256a5276 h1:7PEE9xCtufpGJzrqweakEEnTh7YFELmnKm/ee+5jmfQ=
|
google.golang.org/genproto v0.0.0-20220802133213-ce4fa296bf78 h1:QntLWYqZeuBtJkth3m/6DLznnI0AHJr+AgJXvVh/izw=
|
||||||
google.golang.org/grpc v1.48.0 h1:rQOsyJ/8+ufEDJd/Gdsz7HG220Mh9HAhFHRGnIjda0w=
|
google.golang.org/grpc v1.48.0 h1:rQOsyJ/8+ufEDJd/Gdsz7HG220Mh9HAhFHRGnIjda0w=
|
||||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||||
|
@ -347,8 +351,9 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
|
||||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||||
|
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||||
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||||
gopkg.in/ini.v1 v1.66.6 h1:LATuAqN/shcYAOkv3wl2L4rkaKqkcgTBQjOyYDvcPKI=
|
gopkg.in/ini.v1 v1.66.4 h1:SsAcf+mM7mRZo2nJNGt8mZCjG8ZRaNGMURJw7BsIST4=
|
||||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
@@ -27,7 +27,7 @@ local template = grafana.template;
 instance: { alias: 'Instance' },
 version: { alias: 'Version' },
 'Value #A': { alias: 'Count', type: 'hidden' },
-'Value #B': { alias: 'Uptime' },
+'Value #B': { alias: 'Uptime', type: 'number', unit: 's' },
 })
 )
 )
94 go.mod
@@ -7,81 +7,87 @@ require (
 github.com/Azure/go-autorest/autorest v0.11.28
 github.com/Azure/go-autorest/autorest/adal v0.9.21
 github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137
-github.com/aws/aws-sdk-go v1.44.109
+github.com/aws/aws-sdk-go v1.44.131
 github.com/cespare/xxhash/v2 v2.1.2
 github.com/dennwc/varint v1.0.0
 github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245
-github.com/digitalocean/godo v1.84.1
+github.com/digitalocean/godo v1.89.0
-github.com/docker/docker v20.10.18+incompatible
+github.com/docker/docker v20.10.21+incompatible
 github.com/edsrzf/mmap-go v1.1.0
 github.com/envoyproxy/go-control-plane v0.10.3
-github.com/envoyproxy/protoc-gen-validate v0.6.8
+github.com/envoyproxy/protoc-gen-validate v0.8.0
-github.com/fsnotify/fsnotify v1.5.4
+github.com/fsnotify/fsnotify v1.6.0
 github.com/go-kit/log v0.2.1
 github.com/go-logfmt/logfmt v0.5.1
 github.com/go-openapi/strfmt v0.21.3
 github.com/go-zookeeper/zk v1.0.3
 github.com/gogo/protobuf v1.3.2
 github.com/golang/snappy v0.0.4
-github.com/google/pprof v0.0.0-20220829040838-70bd9ae97f40
+github.com/google/pprof v0.0.0-20221102093814-76f304f74e5e
 github.com/gophercloud/gophercloud v1.0.0
 github.com/grafana/regexp v0.0.0-20221005093135-b4c2bcb0a4b6
 github.com/grpc-ecosystem/grpc-gateway v1.16.0
-github.com/hashicorp/consul/api v1.15.2
+github.com/hashicorp/consul/api v1.15.3
-github.com/hashicorp/nomad/api v0.0.0-20220921012004-ddeeb1040edf
+github.com/hashicorp/nomad/api v0.0.0-20221102143410-8a95f1239005
 github.com/hetznercloud/hcloud-go v1.35.3
 github.com/ionos-cloud/sdk-go/v6 v6.1.3
 github.com/json-iterator/go v1.1.12
-github.com/kolo/xmlrpc v0.0.0-20220919000247-3377102c83bd
+github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b
 github.com/linode/linodego v1.9.3
 github.com/miekg/dns v1.1.50
 github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f
 github.com/oklog/run v1.1.0
 github.com/oklog/ulid v1.3.1
+github.com/ovh/go-ovh v1.3.0
 github.com/pkg/errors v0.9.1
 github.com/prometheus/alertmanager v0.24.0
-github.com/prometheus/client_golang v1.13.0
+github.com/prometheus/client_golang v1.13.1
-github.com/prometheus/client_model v0.2.0
+github.com/prometheus/client_model v0.3.0
 github.com/prometheus/common v0.37.0
 github.com/prometheus/common/assets v0.2.0
 github.com/prometheus/common/sigv4 v0.1.0
-github.com/prometheus/exporter-toolkit v0.7.1
+github.com/prometheus/exporter-toolkit v0.8.1
 github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9
 github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749
-github.com/stretchr/testify v1.8.0
+github.com/stretchr/testify v1.8.1
 github.com/vultr/govultr/v2 v2.17.2
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.0
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4
-go.opentelemetry.io/otel v1.10.0
+go.opentelemetry.io/otel v1.11.1
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.10.0
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.11.1
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.10.0
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.11.1
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.10.0
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.11.1
-go.opentelemetry.io/otel/sdk v1.10.0
+go.opentelemetry.io/otel/sdk v1.11.1
-go.opentelemetry.io/otel/trace v1.10.0
+go.opentelemetry.io/otel/trace v1.11.1
 go.uber.org/atomic v1.10.0
 go.uber.org/automaxprocs v1.5.1
 go.uber.org/goleak v1.2.0
-golang.org/x/net v0.0.0-20220920203100-d0c6ba3f52d9
+golang.org/x/net v0.1.0
-golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1
+golang.org/x/oauth2 v0.1.0
-golang.org/x/sync v0.0.0-20220907140024-f12130a52804
+golang.org/x/sync v0.1.0
-golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8
+golang.org/x/sys v0.1.0
-golang.org/x/time v0.0.0-20220920022843-2ce7c2934d45
+golang.org/x/time v0.1.0
-golang.org/x/tools v0.1.12
+golang.org/x/tools v0.2.0
-google.golang.org/api v0.96.0
+google.golang.org/api v0.102.0
-google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c
-google.golang.org/grpc v1.49.0
+google.golang.org/grpc v1.50.1
 google.golang.org/protobuf v1.28.1
 gopkg.in/alecthomas/kingpin.v2 v2.2.6
 gopkg.in/yaml.v2 v2.4.0
 gopkg.in/yaml.v3 v3.0.1
-k8s.io/api v0.25.2
+k8s.io/api v0.25.3
-k8s.io/apimachinery v0.25.2
+k8s.io/apimachinery v0.25.3
-k8s.io/client-go v0.25.2
+k8s.io/client-go v0.25.3
 k8s.io/klog v1.0.0
 k8s.io/klog/v2 v2.80.0
 )

 require (
-cloud.google.com/go/compute v1.7.0 // indirect
+cloud.google.com/go/compute/metadata v0.2.1 // indirect
+github.com/coreos/go-systemd/v22 v22.4.0 // indirect
+)
+
+require (
+cloud.google.com/go/compute v1.12.1 // indirect
 github.com/Azure/go-autorest v14.2.0+incompatible // indirect
 github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
 github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
@@ -100,7 +106,7 @@ require (
 github.com/davecgh/go-spew v1.1.1 // indirect
 github.com/docker/distribution v2.7.1+incompatible // indirect
 github.com/docker/go-connections v0.4.0 // indirect
-github.com/docker/go-units v0.4.0 // indirect
+github.com/docker/go-units v0.5.0 // indirect
 github.com/emicklei/go-restful/v3 v3.8.0 // indirect
 github.com/evanphx/json-patch v4.12.0+incompatible // indirect
 github.com/fatih/color v1.13.0 // indirect
@@ -123,12 +129,12 @@ require (
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
 github.com/golang/protobuf v1.5.2 // indirect
 github.com/google/gnostic v0.5.7-v3refs // indirect
-github.com/google/go-cmp v0.5.8 // indirect
+github.com/google/go-cmp v0.5.9 // indirect
 github.com/google/go-querystring v1.1.0 // indirect
 github.com/google/gofuzz v1.2.0 // indirect
 github.com/google/uuid v1.3.0 // indirect
-github.com/googleapis/enterprise-certificate-proxy v0.1.0 // indirect
+github.com/googleapis/enterprise-certificate-proxy v0.2.0 // indirect
-github.com/googleapis/gax-go/v2 v2.4.0 // indirect
+github.com/googleapis/gax-go/v2 v2.6.0 // indirect
 github.com/gorilla/websocket v1.5.0 // indirect
 github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.1 // indirect
 github.com/hashicorp/cronexpr v1.1.1 // indirect
@@ -163,14 +169,14 @@ require (
 github.com/spf13/pflag v1.0.5 // indirect
 go.mongodb.org/mongo-driver v1.10.2 // indirect
 go.opencensus.io v0.23.0 // indirect
-go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.10.0 // indirect
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.11.1 // indirect
-go.opentelemetry.io/otel/metric v0.32.0 // indirect
+go.opentelemetry.io/otel/metric v0.33.0 // indirect
 go.opentelemetry.io/proto/otlp v0.19.0 // indirect
-golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa // indirect
+golang.org/x/crypto v0.1.0 // indirect
-golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e
+golang.org/x/exp v0.0.0-20221031165847-c99f073a8326
-golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
+golang.org/x/mod v0.6.0 // indirect
-golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
+golang.org/x/term v0.1.0 // indirect
-golang.org/x/text v0.3.7 // indirect
+golang.org/x/text v0.4.0 // indirect
 google.golang.org/appengine v1.6.7 // indirect
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
 gopkg.in/inf.v0 v0.9.1 // indirect
347 go.sum
@ -16,35 +16,20 @@ cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHOb
|
||||||
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
|
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
|
||||||
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
|
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
|
||||||
cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY=
|
cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY=
|
||||||
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
|
cloud.google.com/go v0.105.0 h1:DNtEKRBAAzeS4KyIory52wWHuClNaXJ5x1F7xa4q+5Y=
|
||||||
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
|
|
||||||
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
|
|
||||||
cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
|
|
||||||
cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
|
|
||||||
cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
|
|
||||||
cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
|
|
||||||
cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
|
|
||||||
cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
|
|
||||||
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
|
|
||||||
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
|
|
||||||
cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
|
|
||||||
cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
|
|
||||||
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
|
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
|
||||||
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
|
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
|
||||||
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
|
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
|
||||||
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
|
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
|
||||||
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
|
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
|
||||||
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
|
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
|
||||||
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
|
cloud.google.com/go/compute v1.12.1 h1:gKVJMEyqV5c/UnpzjjQbo3Rjvvqpr9B1DFSbJC4OXr0=
|
||||||
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
|
cloud.google.com/go/compute v1.12.1/go.mod h1:e8yNOBcBONZU1vJKCvCoDw/4JQsA0dpM4x/6PIIOocU=
|
||||||
cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
|
cloud.google.com/go/compute/metadata v0.2.1 h1:efOwf5ymceDhK6PKMnnrTHP4pppY5L22mle96M1yP48=
|
||||||
cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
|
cloud.google.com/go/compute/metadata v0.2.1/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxBkeanZ9wwa75XHJgOM=
|
||||||
cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU=
|
|
||||||
cloud.google.com/go/compute v1.7.0 h1:v/k9Eueb8aAJ0vZuxKMrgm6kPhCLZU9HxFU+AFDs9Uk=
|
|
||||||
cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
|
|
||||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||||
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
|
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
|
||||||
cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY=
|
cloud.google.com/go/longrunning v0.1.1 h1:y50CXG4j0+qvEukslYFBCrzaXX0qpFbBzc3PchSu/LE=
|
||||||
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
||||||
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
||||||
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
|
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
|
||||||
@@ -55,7 +40,6 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
 cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
 cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
 cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
-cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
 dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
 github.com/Azure/azure-sdk-for-go v65.0.0+incompatible h1:HzKLt3kIwMm4KeJYTdx9EbjRYTySD/t8i1Ee/W5EGXw=
 github.com/Azure/azure-sdk-for-go v65.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
@@ -121,8 +105,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
 github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
 github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
 github.com/aws/aws-sdk-go v1.43.11/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
-github.com/aws/aws-sdk-go v1.44.109 h1:+Na5JPeS0kiEHoBp5Umcuuf+IDqXqD0lXnM920E31YI=
-github.com/aws/aws-sdk-go v1.44.109/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
+github.com/aws/aws-sdk-go v1.44.131 h1:kd61x79ax0vyiC/SZ9X1hKh8E0pt1BUOOcVBJEFhxkg=
+github.com/aws/aws-sdk-go v1.44.131/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
 github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
 github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
 github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -154,7 +138,6 @@ github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XP
 github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
-github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20220314180256-7f1daf1720fc h1:PYXxkRUBGUMa5xgMVMDl62vEklZvKpVaxQeN9ie7Hfk=
 github.com/cncf/xds/go v0.0.0-20220314180256-7f1daf1720fc/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
@@ -162,6 +145,8 @@ github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:z
 github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
 github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
 github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.4.0 h1:y9YHcjnjynCd/DVbg5j9L/33jQM3MxJlbj/zWskzfGU=
+github.com/coreos/go-systemd/v22 v22.4.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
 github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
 github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
 github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
@@ -175,17 +160,18 @@ github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgz
 github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
 github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245 h1:9cOfvEwjQxdwKuNDTQSaMKNRvwKwgZG+U4HrjeRKHso=
 github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
-github.com/digitalocean/godo v1.84.1 h1:VgPsuxhrO9pUygvij6qOhqXfAkxAsDZYRpmjSDMEaHo=
-github.com/digitalocean/godo v1.84.1/go.mod h1:BPCqvwbjbGqxuUnIKB4EvS/AX7IDnNmt5fwvIkWo+ew=
+github.com/digitalocean/godo v1.89.0 h1:UL3Ii4qfk86m4qEKg2iSwop0puvgOCKvwzXvwArU05E=
+github.com/digitalocean/godo v1.89.0/go.mod h1:NRpFznZFvhHjBoqZAaOD3khVzsJ3EibzKqFL4R60dmA=
 github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
 github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
 github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
-github.com/docker/docker v20.10.18+incompatible h1:SN84VYXTBNGn92T/QwIRPlum9zfemfitN7pbsp26WSc=
-github.com/docker/docker v20.10.18+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v20.10.21+incompatible h1:UTLdBmHk3bEY+w8qeO5KttOhy6OmXWsl/FEet9Uswog=
+github.com/docker/docker v20.10.21+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
 github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
 github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
-github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
 github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
+github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
 github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
 github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
 github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
@@ -202,16 +188,14 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
 github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
 github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
 github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
-github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
 github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
 github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
-github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
 github.com/envoyproxy/go-control-plane v0.10.3 h1:xdCVXxEe0Y3FQith+0cj2irwZudqGYvecuLB1HtdexY=
 github.com/envoyproxy/go-control-plane v0.10.3/go.mod h1:fJJn/j26vwOu972OllsvAgJJM//w9BV6Fxbg2LuVd34=
 github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
 github.com/envoyproxy/protoc-gen-validate v0.6.7/go.mod h1:dyJXwwfPK2VSqiB9Klm1J6romD608Ba7Hij42vrOBCo=
-github.com/envoyproxy/protoc-gen-validate v0.6.8 h1:B2cR/FAaiMtYDHv5BQpaqtkjGuWQIgr2KQZtHQ7f6i8=
-github.com/envoyproxy/protoc-gen-validate v0.6.8/go.mod h1:0ZMblUx0cxNoWRswEEXoj9kHBmqX8pxGweMiyIAfR6A=
+github.com/envoyproxy/protoc-gen-validate v0.8.0 h1:eZxAlfY5c/HTcV7aN9EUL3Ej/zY/WDmawwClR16nfDA=
+github.com/envoyproxy/protoc-gen-validate v0.8.0/go.mod h1:z+FSjkCuAJYqUS2daO/NBFgbCao8JDHcYcpnFfD00cI=
 github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
 github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
 github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
@@ -224,8 +208,8 @@ github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSw
 github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
 github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
-github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
+github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
+github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
 github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
 github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
 github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
@@ -313,6 +297,7 @@ github.com/gobuffalo/packr/v2 v2.0.9/go.mod h1:emmyGweYTm6Kdper+iywB6YK5YzuKchGt
 github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/VCm/3ptBN+0=
 github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw=
 github.com/goccy/go-yaml v1.9.5/go.mod h1:U/jl18uSupI5rdI2jmuCswEA2htH9eXfferR3KfscvA=
+github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/gofrs/uuid v4.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
 github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
 github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -339,8 +324,6 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
 github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
 github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
 github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
-github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
-github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
 github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -356,12 +339,10 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
 github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
 github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
 github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
 github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
 github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
 github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
 github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
 github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
 github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -382,8 +363,8 @@ github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
-github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
-github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
+github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
 github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -392,7 +373,6 @@ github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
 github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
 github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
 github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
 github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
 github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
 github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
@@ -403,31 +383,20 @@ github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hf
 github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
 github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
 github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20220829040838-70bd9ae97f40 h1:ykKxL12NZd3JmWZnyqarJGsF73M9Xhtrik/FEtEeFRE=
-github.com/google/pprof v0.0.0-20220829040838-70bd9ae97f40/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
+github.com/google/pprof v0.0.0-20221102093814-76f304f74e5e h1:F1LLQqQ8WoIbyoxLUY+JUZe1kuHdxThM6CPUATzE6Io=
+github.com/google/pprof v0.0.0-20221102093814-76f304f74e5e/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
 github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
 github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
 github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
-github.com/googleapis/enterprise-certificate-proxy v0.1.0 h1:zO8WHNx/MYiAKJ3d5spxZXZE6KHmIQGQcAzwUzV7qQw=
-github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
+github.com/googleapis/enterprise-certificate-proxy v0.2.0 h1:y8Yozv7SZtlU//QXbezB6QkpuE6jMD2/gfzk4AftXjs=
+github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
 github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
 github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
-github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
-github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM=
-github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM=
-github.com/googleapis/gax-go/v2 v2.4.0 h1:dS9eYAjhrE2RjmzYw2XAPvcXfmcQLtFEQWn0CR82awk=
-github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c=
-github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
+github.com/googleapis/gax-go/v2 v2.6.0 h1:SXk3ABtQYDT/OH8jAyvEOQ58mgawq5C4o/4/89qN2ZU=
+github.com/googleapis/gax-go/v2 v2.6.0/go.mod h1:1mjbznJAPHFpesgE5ucqfYEscaz5kMdcIDwU/6+DDoY=
 github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
 github.com/gophercloud/gophercloud v1.0.0 h1:9nTGx0jizmHxDobe4mck89FyQHVyA3CaXLIUSGJjP9k=
 github.com/gophercloud/gophercloud v1.0.0/go.mod h1:Q8fZtyi5zZxPS/j9aj3sSxtvj41AdQMDwyo1myduD5c=
@@ -449,8 +418,8 @@ github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0/go.mod h1:hgWBS7lorOAVIJEQMi4Zs
 github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.1 h1:/sDbPb60SusIXjiJGYLUoS/rAQurQmvGWmwn2bBPM9c=
 github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.1/go.mod h1:G+WkljZi4mflcqVxYSgvt8MNctRQHjEH8ubKtt1Ka3w=
 github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
-github.com/hashicorp/consul/api v1.15.2 h1:3Q/pDqvJ7udgt/60QOOW/p/PeKioQN+ncYzzCdN2av0=
-github.com/hashicorp/consul/api v1.15.2/go.mod h1:v6nvB10borjOuIwNRZYPZiHKrTM/AyrGtd0WVVodKM8=
+github.com/hashicorp/consul/api v1.15.3 h1:WYONYL2rxTXtlekAqblR2SCdJsizMDIj/uXb5wNy9zU=
+github.com/hashicorp/consul/api v1.15.3/go.mod h1:/g/qgcoBcEXALCNZgRRisyTW0nY86++L0KbeAMXYCeY=
 github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
 github.com/hashicorp/consul/sdk v0.11.0 h1:HRzj8YSCln2yGgCumN5CL8lYlD3gBurnervJRJAZyC4=
 github.com/hashicorp/consul/sdk v0.11.0/go.mod h1:yPkX5Q6CsxTFMjQQDJwzeNmUUF5NUGGbrDsv9wTb8cw=
@@ -503,8 +472,8 @@ github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2p
 github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
 github.com/hashicorp/memberlist v0.3.1 h1:MXgUXLqva1QvpVEDQW1IQLG0wivQAtmFlHRQ+1vWZfM=
 github.com/hashicorp/memberlist v0.3.1/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
-github.com/hashicorp/nomad/api v0.0.0-20220921012004-ddeeb1040edf h1:l/EZ57iRPNs8vd8c9qH0dB4Q+IiZHJouLAgxJ5j25tU=
-github.com/hashicorp/nomad/api v0.0.0-20220921012004-ddeeb1040edf/go.mod h1:Z0U0rpbh4Qlkgqu3iRDcfJBA+r3FgoeD1BfigmZhfzM=
+github.com/hashicorp/nomad/api v0.0.0-20221102143410-8a95f1239005 h1:jKwXhVS4F7qk0g8laz+Anz0g/6yaSJ3HqmSAuSNLUcA=
+github.com/hashicorp/nomad/api v0.0.0-20221102143410-8a95f1239005/go.mod h1:vgJmrz4Bz9E1cR/uy70oP9udUJKFRkcEYHlHTp4nFwI=
 github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
 github.com/hashicorp/serf v0.9.7 h1:hkdgbqizGQHuU5IPqYM1JdSMV8nKfpuOnZYXssk9muY=
 github.com/hashicorp/serf v0.9.7/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
@@ -553,8 +522,8 @@ github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvW
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
-github.com/kolo/xmlrpc v0.0.0-20220919000247-3377102c83bd h1:b1taQnM42dp3NdiiQwfmM1WyyucHayZSKN5R0PRYWL0=
-github.com/kolo/xmlrpc v0.0.0-20220919000247-3377102c83bd/go.mod h1:pcaDhQK0/NJZEvtCO0qQPPropqV0sJOJ6YW7X+9kRwM=
+github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b h1:udzkj9S/zlT5X367kqJis0QP7YMxobob6zhzq6Yre00=
+github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b/go.mod h1:pcaDhQK0/NJZEvtCO0qQPPropqV0sJOJ6YW7X+9kRwM=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
@@ -563,7 +532,7 @@ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFB
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -677,6 +646,8 @@ github.com/openzipkin-contrib/zipkin-go-opentracing v0.4.5/go.mod h1:/wsWhb9smxS
 github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
 github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
 github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
+github.com/ovh/go-ovh v1.3.0 h1:mvZaddk4E4kLcXhzb+cxBsMPYp2pHqiQpWYkInsuZPQ=
+github.com/ovh/go-ovh v1.3.0/go.mod h1:AxitLZ5HBRPyUd+Zl60Ajaag+rNTdVXWIkzfrVuTXWA=
 github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
 github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
 github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
@@ -708,15 +679,16 @@ github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3O
 github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
 github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
 github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
-github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
-github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
+github.com/prometheus/client_golang v1.13.1 h1:3gMjIY2+/hzmqhtUC/aQNYldJA6DtH3CgQvwS+02K1c=
+github.com/prometheus/client_golang v1.13.1/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
 github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
 github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
 github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
 github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
 github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
+github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
 github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
 github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
 github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
@@ -731,8 +703,9 @@ github.com/prometheus/common/assets v0.2.0 h1:0P5OrzoHrYBOSM1OigWL3mY8ZvV2N4zIE/
 github.com/prometheus/common/assets v0.2.0/go.mod h1:D17UVUE12bHbim7HzwUvtqm6gwBEaDQ0F+hIGbFbccI=
 github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
 github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
-github.com/prometheus/exporter-toolkit v0.7.1 h1:c6RXaK8xBVercEeUQ4tRNL8UGWzDHfvj9dseo1FcK1Y=
 github.com/prometheus/exporter-toolkit v0.7.1/go.mod h1:ZUBIj498ePooX9t/2xtDjeQYwvRpiPP2lh5u4iblj2g=
+github.com/prometheus/exporter-toolkit v0.8.1 h1:TpKt8z55q1zF30BYaZKqh+bODY0WtByHDOhDA2M9pEs=
+github.com/prometheus/exporter-toolkit v0.8.1/go.mod h1:00shzmJL7KxcsabLWcONwpyNEuWhREOnFqZW7vadFS0=
 github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
 github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
 github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -748,7 +721,7 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L
 github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
 github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
 github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
-github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
+github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
 github.com/rs/cors v1.8.2/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
 github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
 github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
@@ -758,7 +731,7 @@ github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9 h1:0roa6gXKgyta64uqh52AQG3wzZX
 github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg=
 github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
 github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
-github.com/shoenig/test v0.3.1 h1:dhGZztS6nQuvJ0o0RtUiQHaEO4hhArh/WmWwik3Ols0=
+github.com/shoenig/test v0.4.3 h1:3+CjrpqCwtL08S0wZQilu9WWR/S2CdsLKhHjbJqPj/I=
 github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749 h1:bUGsEnyNbVPw06Bs80sCeARAlK8lhwqGyi6UT8ymuGk=
 github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
 github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -796,8 +769,9 @@ github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3
 github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.4.0 h1:M2gUjqZET1qApGOWNSnZ49BAIMX4F/1plDv3+l31EJ4=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -805,8 +779,9 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
 github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
 github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
 github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -848,24 +823,24 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
 go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
 go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
 go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.0 h1:qZ3KzA4qPzLBDtQyPk4ydjlg8zvXbNysnFHaVMKJbVo=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.0/go.mod h1:14Oo79mRwusSI02L0EfG3Gp1uF3+1wSL+D4zDysxyqs=
-go.opentelemetry.io/otel v1.10.0 h1:Y7DTJMR6zs1xkS/upamJYk0SxxN4C9AqRd77jmZnyY4=
-go.opentelemetry.io/otel v1.10.0/go.mod h1:NbvWjCthWHKBEUMpf0/v8ZRZlni86PpGFEMA9pnQSnQ=
-go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.10.0 h1:TaB+1rQhddO1sF71MpZOZAuSPW1klK2M8XxfrBMfK7Y=
-go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.10.0/go.mod h1:78XhIg8Ht9vR4tbLNUhXsiOnE2HOuSeKAiAcoVQEpOY=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.10.0 h1:pDDYmo0QadUPal5fwXoY1pmMpFcdyhXOmL5drCrI3vU=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.10.0/go.mod h1:Krqnjl22jUJ0HgMzw5eveuCvFDXY4nSYb4F8t5gdrag=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.10.0 h1:KtiUEhQmj/Pa874bVYKGNVdq8NPKiacPbaRRtgXi+t4=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.10.0/go.mod h1:OfUCyyIiDvNXHWpcWgbF+MWvqPZiNa3YDEnivcnYsV0=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.10.0 h1:S8DedULB3gp93Rh+9Z+7NTEv+6Id/KYS7LDyipZ9iCE=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.10.0/go.mod h1:5WV40MLWwvWlGP7Xm8g3pMcg0pKOUY609qxJn8y7LmM=
-go.opentelemetry.io/otel/metric v0.32.0 h1:lh5KMDB8xlMM4kwE38vlZJ3rZeiWrjw3As1vclfC01k=
-go.opentelemetry.io/otel/metric v0.32.0/go.mod h1:PVDNTt297p8ehm949jsIzd+Z2bIZJYQQG/uuHTeWFHY=
-go.opentelemetry.io/otel/sdk v1.10.0 h1:jZ6K7sVn04kk/3DNUdJ4mqRlGDiXAVuIG+MMENpTNdY=
-go.opentelemetry.io/otel/sdk v1.10.0/go.mod h1:vO06iKzD5baltJz1zarxMCNHFpUlUiOy4s65ECtn6kE=
-go.opentelemetry.io/otel/trace v1.10.0 h1:npQMbR8o7mum8uF95yFbOEJffhs1sbCOfDh8zAJiH5E=
-go.opentelemetry.io/otel/trace v1.10.0/go.mod h1:Sij3YYczqAdz+EhmGhE6TpTxUO5/F/AzrK+kxfGqySM=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4 h1:aUEBEdCa6iamGzg6fuYxDA8ThxvOG240mAvWDU+XLio=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4/go.mod h1:l2MdsbKTocpPS5nQZscqTR9jd8u96VYZdcpF8Sye7mA=
+go.opentelemetry.io/otel v1.11.1 h1:4WLLAmcfkmDk2ukNXJyq3/kiz/3UzCaYq6PskJsaou4=
+go.opentelemetry.io/otel v1.11.1/go.mod h1:1nNhXBbWSD0nsL38H6btgnFN2k4i0sNLHNNMZMSbUGE=
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.11.1 h1:X2GndnMCsUPh6CiY2a+frAbNsXaPLbB0soHRYhAZ5Ig=
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.11.1/go.mod h1:i8vjiSzbiUC7wOQplijSXMYUpNM93DtlS5CbUT+C6oQ=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.11.1 h1:MEQNafcNCB0uQIti/oHgU7CZpUMYQ7qigBwMVKycHvc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.11.1/go.mod h1:19O5I2U5iys38SsmT2uDJja/300woyzE1KPIQxEUBUc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.11.1 h1:LYyG/f1W/jzAix16jbksJfMQFpOH/Ma6T639pVPMgfI=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.11.1/go.mod h1:QrRRQiY3kzAoYPNLP0W/Ikg0gR6V3LMc+ODSxr7yyvg=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.11.1 h1:tFl63cpAAcD9TOU6U8kZU7KyXuSRYAZlbx1C61aaB74=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.11.1/go.mod h1:X620Jww3RajCJXw/unA+8IRTgxkdS7pi+ZwK9b7KUJk=
+go.opentelemetry.io/otel/metric v0.33.0 h1:xQAyl7uGEYvrLAiV/09iTJlp1pZnQ9Wl793qbVvED1E=
+go.opentelemetry.io/otel/metric v0.33.0/go.mod h1:QlTYc+EnYNq/M2mNk1qDDMRLpqCOj2f/r5c7Fd5FYaI=
+go.opentelemetry.io/otel/sdk v1.11.1 h1:F7KmQgoHljhUuJyA+9BiU+EkJfyX5nVVF4wyzWZpKxs=
+go.opentelemetry.io/otel/sdk v1.11.1/go.mod h1:/l3FE4SupHJ12TduVjUkZtlfFqDCQJlOlithYrdktys=
+go.opentelemetry.io/otel/trace v1.11.1 h1:ofxdnzsNrGBYXbP7t7zpUK281+go5rF7dvdIZXF8gdQ=
+go.opentelemetry.io/otel/trace v1.11.1/go.mod h1:f/Q9G7vzk5u91PhbmKbg1Qn0rzH1LJ4vbPHFGkTPtOk=
 go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
 go.opentelemetry.io/proto/otlp v0.15.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
 go.opentelemetry.io/proto/otlp v0.19.0 h1:IVN6GR+mhC4s5yfcTbmzHYODqvWAp3ZedA2SJPI1Nnw=
@@ -903,8 +878,9 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y
 golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20211202192323-5770296d904e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa h1:zuSxTR4o9y82ebqCUJYNGJbGPo6sKVl54f/TVDObg1c=
 golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
+golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -915,8 +891,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
 golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
 golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
 golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
-golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e h1:+WEEuIdZHnUeJJmEUjyYC2gfUMj69yZXw17EnHg/otA=
-golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e/go.mod h1:Kr81I6Kryrl9sr8s2FK3vxD90NdsKWRuOIl2O4CvYbA=
+golang.org/x/exp v0.0.0-20221031165847-c99f073a8326 h1:QfTh0HpN6hlw6D3vu8DAwC8pBIwikq0AI1evdm+FksE=
+golang.org/x/exp v0.0.0-20221031165847-c99f073a8326/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
 golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
 golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -945,8 +921,9 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
 golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
-golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/mod v0.6.0 h1:b9gGHsz9/HhJ3HF5DHQytPpuwocVTChQJK3AvoLRD5I=
+golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
 golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -986,13 +963,10 @@ golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwY
 golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
 golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
 golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
 golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
-golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
@ -1001,16 +975,9 @@ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx
|
||||||
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||||
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||||
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||||
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
|
||||||
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
|
||||||
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
|
||||||
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
|
||||||
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
|
||||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||||
golang.org/x/net v0.0.0-20220907135653-1e95f45603a7/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
|
||||||
golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
|
||||||
golang.org/x/net v0.0.0-20220920203100-d0c6ba3f52d9 h1:asZqf0wXastQr+DudYagQS8uBO8bHKeYD1vbAvGmFL8=
|
|
||||||
golang.org/x/net v0.0.0-20220920203100-d0c6ba3f52d9/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
|
||||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||||
@@ -1020,20 +987,11 @@ golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ
 golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
-golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
-golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
-golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
-golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
-golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1 h1:lxqLZaMad/dJHMFZH0NiNpiEZI/nhgWhe4wgzpE+MuA=
-golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
+golang.org/x/oauth2 v0.1.0 h1:isLCZuhj4v+tYv7eskaN4v/TM+A1begWWgyVJDdl1+Y=
+golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1046,10 +1004,9 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
 golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20220907140024-f12130a52804 h1:0SH2R3f1b1VmIMG7BXbEZCBUu2dKmHschSmjqGUrW8A=
-golang.org/x/sync v0.0.0-20220907140024-f12130a52804/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
+golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1103,51 +1060,34 @@ golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220908150016-7ac13a9a928d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8 h1:h+EGohizhe9XlX18rfpa8k8RAc5XyaeamM+0VHRd4lc=
-golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
+golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw=
+golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1156,14 +1096,15 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
+golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
-golang.org/x/time v0.0.0-20220920022843-2ce7c2934d45 h1:yuLAip3bfURHClMG9VBdzPrQvCWjWiWUTBGV+/fCbUs=
-golang.org/x/time v0.0.0-20220920022843-2ce7c2934d45/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.1.0 h1:xYY+Bajn2a7VBmTM5GikTmnK8ZuX8YgnQCqZpbBNtmA=
+golang.org/x/time v0.1.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1223,22 +1164,16 @@ golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4f
 golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
-golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
-golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
-golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
-golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
 golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
 golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
 golang.org/x/tools v0.1.9/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
-golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
 golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/tools v0.2.0 h1:G6AHpWxTMGY1KyEYoAQ5WTtIekUUvDNjan3ugu60JvE=
+golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
-golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
 golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
 golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
 google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
@@ -1259,28 +1194,8 @@ google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M
 google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
 google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
 google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
-google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
-google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
-google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
-google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
-google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
-google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
-google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
-google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
-google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
-google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
-google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
-google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
-google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
-google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA=
-google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8=
-google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs=
-google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
-google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
-google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
-google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
-google.golang.org/api v0.96.0 h1:F60cuQPJq7K7FzsxMYHAUJSiXh2oKctHxBMbDygxhfM=
-google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.102.0 h1:JxJl2qQ85fRMPNvlZY/enexbxpCjLwGhZUtgfGeQ51I=
+google.golang.org/api v0.102.0/go.mod h1:3VFl6/fzoA+qNuS1N1/VfXY4LjoXN/wzeIp7TweWwGo=
 google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
 google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
 google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1326,54 +1241,11 @@ google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6D
 google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
 google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
 google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
 google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210329143202-679c6ae281ee/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
-google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
-google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
-google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
-google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
-google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
-google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
-google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
-google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
-google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
-google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
-google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
-google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
-google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
-google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
-google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
-google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
-google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
 google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
-google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
-google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
-google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
-google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
-google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
 google.golang.org/genproto v0.0.0-20220329172620-7be39ac1afc7/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
-google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c h1:QgY/XxIAIeccR+Ca/rDdKubLIU9rcJ3xfy1DC/Wd2Oo=
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c/go.mod h1:CGI5F/G+E5bKwmfYo09AXuVN4dD894kIKUFmVbP2/Fo=
-google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
|
|
||||||
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
|
|
||||||
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
|
||||||
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
|
||||||
google.golang.org/genproto v0.0.0-20220624142145-8cd45d7dbd1f/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
|
||||||
google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006 h1:mmbq5q8M1t7dhkLw320YK4PsOXm6jdnUAkErImaIqOg=
|
|
||||||
google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006/go.mod h1:ht8XFiar2npT/g4vkk7O0WYS1sHOHbdujxbEp7CJWbw=
|
|
||||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||||
|
@@ -1396,23 +1268,11 @@ google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.42.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.49.0 h1:WTLtQzmQori5FUH25Pq4WT22oCsv8USpQ+F6rqtsmxw=
google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc v1.50.1 h1:DS/BukOZWp8s6p4Dt/tOaJaTQyPyOoCcrjroHuCeLzY=
google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1443,6 +1303,7 @@ gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMy
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.57.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.66.6 h1:LATuAqN/shcYAOkv3wl2L4rkaKqkcgTBQjOyYDvcPKI=
gopkg.in/ini.v1 v1.66.6/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
@@ -1476,12 +1337,12 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.25.2 h1:v6G8RyFcwf0HR5jQGIAYlvtRNrxMJQG1xJzaSeVnIS8=
k8s.io/api v0.25.3 h1:Q1v5UFfYe87vi5H7NU0p4RXC26PPMT8KOpr1TLQbCMQ=
k8s.io/api v0.25.2/go.mod h1:qP1Rn4sCVFwx/xIhe+we2cwBLTXNcheRyYXwajonhy0=
k8s.io/api v0.25.3/go.mod h1:o42gKscFrEVjHdQnyRenACrMtbuJsVdP+WVjqejfzmI=
k8s.io/apimachinery v0.25.2 h1:WbxfAjCx+AeN8Ilp9joWnyJ6xu9OMeS/fsfjK/5zaQs=
k8s.io/apimachinery v0.25.3 h1:7o9ium4uyUOM76t6aunP0nZuex7gDf8VGwkR5RcJnQc=
k8s.io/apimachinery v0.25.2/go.mod h1:hqqA1X0bsgsxI6dXsJ4HnNTBOmJNxyPp8dw3u2fSHwA=
k8s.io/apimachinery v0.25.3/go.mod h1:jaF9C/iPNM1FuLl7Zuy5b9v+n35HGSh6AQ4HYRkCqwo=
k8s.io/client-go v0.25.2 h1:SUPp9p5CwM0yXGQrwYurw9LWz+YtMwhWd0GqOsSiefo=
k8s.io/client-go v0.25.3 h1:oB4Dyl8d6UbfDHD8Bv8evKylzs3BXzzufLiO27xuPs0=
k8s.io/client-go v0.25.2/go.mod h1:i7cNU7N+yGQmJkewcRD2+Vuj4iz7b30kI8OcL3horQ4=
k8s.io/client-go v0.25.3/go.mod h1:t39LPczAIMwycjcXkVc+CB+PZV69jQuNx4um5ORDjQA=
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 h1:MQ8BAZPZlWk3S9K4a9NCkIFQtZShWqoha7snGixVgEA=
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1/go.mod h1:C/N6wCaBHeBHkHUesQOQy2/MZqGgMAFPqGsGQLdbZBU=
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed h1:jAne/RjBTyawwAy0utX5eqigAwz/lQhTmy+Hr/Cpue4=
871 model/histogram/float_histogram.go Normal file

@@ -0,0 +1,871 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package histogram

import (
	"fmt"
	"strings"
)

// FloatHistogram is similar to Histogram but uses float64 for all
// counts. Additionally, bucket counts are absolute and not deltas.
//
// A FloatHistogram is needed by PromQL to handle operations that might result
// in fractional counts. Since the counts in a histogram are unlikely to be too
// large to be represented precisely by a float64, a FloatHistogram can also be
// used to represent a histogram with integer counts and thus serves as a more
// generalized representation.
type FloatHistogram struct {
	// Currently valid schema numbers are -4 <= n <= 8. They are all for
	// base-2 bucket schemas, where 1 is a bucket boundary in each case, and
	// then each power of two is divided into 2^n logarithmic buckets. Or
	// in other words, each bucket boundary is the previous boundary times
	// 2^(2^-n).
	Schema int32
	// Width of the zero bucket.
	ZeroThreshold float64
	// Observations falling into the zero bucket. Must be zero or positive.
	ZeroCount float64
	// Total number of observations. Must be zero or positive.
	Count float64
	// Sum of observations. This is also used as the stale marker.
	Sum float64
	// Spans for positive and negative buckets (see Span below).
	PositiveSpans, NegativeSpans []Span
	// Observation counts in buckets. Each represents an absolute count and
	// must be zero or positive.
	PositiveBuckets, NegativeBuckets []float64
}
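The Schema comment above can be made concrete with a small standalone sketch (the `upperBoundary` helper is hypothetical, not part of this file): under schema n, the upper boundary of the positive bucket with index i is 2^(i·2^-n), so each boundary is the previous one times 2^(2^-n).

```go
package main

import (
	"fmt"
	"math"
)

// upperBoundary returns the upper boundary of the positive bucket with the
// given index under the given schema: 2^(index * 2^-schema).
func upperBoundary(schema, index int32) float64 {
	return math.Exp2(float64(index) * math.Exp2(-float64(schema)))
}

func main() {
	// Schema 0: bucket boundaries are plain powers of two.
	fmt.Println(upperBoundary(0, 3)) // 8
	// Schema 2: each power of two is split into 2^2 = 4 buckets.
	fmt.Println(upperBoundary(2, 4)) // 2
	// Growth factor between adjacent buckets at schema 3: 2^(1/8).
	fmt.Println(upperBoundary(3, 1))
}
```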

// Copy returns a deep copy of the Histogram.
func (h *FloatHistogram) Copy() *FloatHistogram {
	c := *h

	if h.PositiveSpans != nil {
		c.PositiveSpans = make([]Span, len(h.PositiveSpans))
		copy(c.PositiveSpans, h.PositiveSpans)
	}
	if h.NegativeSpans != nil {
		c.NegativeSpans = make([]Span, len(h.NegativeSpans))
		copy(c.NegativeSpans, h.NegativeSpans)
	}
	if h.PositiveBuckets != nil {
		c.PositiveBuckets = make([]float64, len(h.PositiveBuckets))
		copy(c.PositiveBuckets, h.PositiveBuckets)
	}
	if h.NegativeBuckets != nil {
		c.NegativeBuckets = make([]float64, len(h.NegativeBuckets))
		copy(c.NegativeBuckets, h.NegativeBuckets)
	}

	return &c
}

// CopyToSchema works like Copy, but the returned deep copy has the provided
// target schema, which must be ≤ the original schema (i.e. it must have a lower
// resolution).
func (h *FloatHistogram) CopyToSchema(targetSchema int32) *FloatHistogram {
	if targetSchema == h.Schema {
		// Fast path.
		return h.Copy()
	}
	if targetSchema > h.Schema {
		panic(fmt.Errorf("cannot copy from schema %d to %d", h.Schema, targetSchema))
	}
	c := FloatHistogram{
		Schema:        targetSchema,
		ZeroThreshold: h.ZeroThreshold,
		ZeroCount:     h.ZeroCount,
		Count:         h.Count,
		Sum:           h.Sum,
	}

	// TODO(beorn7): This is a straightforward implementation using merging
	// iterators for the original buckets and then adding one merged bucket
	// after another to the newly created FloatHistogram. It's well possible
	// that a more involved implementation performs much better, which we
	// could do if this code path turns out to be performance-critical.
	var iInSpan, index int32
	for iSpan, iBucket, it := -1, -1, h.floatBucketIterator(true, 0, targetSchema); it.Next(); {
		b := it.At()
		c.PositiveSpans, c.PositiveBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, c.PositiveSpans, c.PositiveBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}
	for iSpan, iBucket, it := -1, -1, h.floatBucketIterator(false, 0, targetSchema); it.Next(); {
		b := it.At()
		c.NegativeSpans, c.NegativeBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, c.NegativeSpans, c.NegativeBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}

	return &c
}

// String returns a string representation of the Histogram.
func (h *FloatHistogram) String() string {
	var sb strings.Builder
	fmt.Fprintf(&sb, "{count:%g, sum:%g", h.Count, h.Sum)

	var nBuckets []Bucket[float64]
	for it := h.NegativeBucketIterator(); it.Next(); {
		bucket := it.At()
		if bucket.Count != 0 {
			nBuckets = append(nBuckets, it.At())
		}
	}
	for i := len(nBuckets) - 1; i >= 0; i-- {
		fmt.Fprintf(&sb, ", %s", nBuckets[i].String())
	}

	if h.ZeroCount != 0 {
		fmt.Fprintf(&sb, ", %s", h.ZeroBucket().String())
	}

	for it := h.PositiveBucketIterator(); it.Next(); {
		bucket := it.At()
		if bucket.Count != 0 {
			fmt.Fprintf(&sb, ", %s", bucket.String())
		}
	}

	sb.WriteRune('}')
	return sb.String()
}

// ZeroBucket returns the zero bucket.
func (h *FloatHistogram) ZeroBucket() Bucket[float64] {
	return Bucket[float64]{
		Lower:          -h.ZeroThreshold,
		Upper:          h.ZeroThreshold,
		LowerInclusive: true,
		UpperInclusive: true,
		Count:          h.ZeroCount,
	}
}

// Scale scales the FloatHistogram by the provided factor, i.e. it scales all
// bucket counts including the zero bucket and the count and the sum of
// observations. The bucket layout stays the same. This method changes the
// receiving histogram directly (rather than acting on a copy). It returns a
// pointer to the receiving histogram for convenience.
func (h *FloatHistogram) Scale(factor float64) *FloatHistogram {
	h.ZeroCount *= factor
	h.Count *= factor
	h.Sum *= factor
	for i := range h.PositiveBuckets {
		h.PositiveBuckets[i] *= factor
	}
	for i := range h.NegativeBuckets {
		h.NegativeBuckets[i] *= factor
	}
	return h
}

// Add adds the provided other histogram to the receiving histogram. Count, Sum,
// and buckets from the other histogram are added to the corresponding
// components of the receiving histogram. Buckets in the other histogram that do
// not exist in the receiving histogram are inserted into the latter. The
// resulting histogram might have buckets with a population of zero or directly
// adjacent spans (offset=0). To normalize those, call the Compact method.
//
// The method reconciles differences in the zero threshold and in the schema,
// but the schema of the other histogram must be ≥ the schema of the receiving
// histogram (i.e. must have an equal or higher resolution). This means that the
// schema of the receiving histogram won't change. Its zero threshold, however,
// will change if needed. The other histogram will not be modified in any case.
//
// This method returns a pointer to the receiving histogram for convenience.
func (h *FloatHistogram) Add(other *FloatHistogram) *FloatHistogram {
	otherZeroCount := h.reconcileZeroBuckets(other)
	h.ZeroCount += otherZeroCount
	h.Count += other.Count
	h.Sum += other.Sum

	// TODO(beorn7): If needed, this can be optimized by inspecting the
	// spans in other and creating missing buckets in h in batches.
	var iInSpan, index int32
	for iSpan, iBucket, it := -1, -1, other.floatBucketIterator(true, h.ZeroThreshold, h.Schema); it.Next(); {
		b := it.At()
		h.PositiveSpans, h.PositiveBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, h.PositiveSpans, h.PositiveBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}
	for iSpan, iBucket, it := -1, -1, other.floatBucketIterator(false, h.ZeroThreshold, h.Schema); it.Next(); {
		b := it.At()
		h.NegativeSpans, h.NegativeBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, h.NegativeSpans, h.NegativeBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}
	return h
}
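At the bucket level, Add boils down to element-wise addition over a sparse index space. A minimal standalone sketch of that idea (using a plain map instead of the span-encoded layout the real code maintains; `addSparse` is a hypothetical helper, not part of this package):

```go
package main

import "fmt"

// addSparse adds two sparse bucket sets keyed by bucket index: counts of
// matching buckets are summed, and buckets present on only one side are
// carried over, which is what Add achieves on span-encoded buckets.
func addSparse(a, b map[int32]float64) map[int32]float64 {
	out := make(map[int32]float64, len(a)+len(b))
	for i, c := range a {
		out[i] += c
	}
	for i, c := range b {
		out[i] += c
	}
	return out
}

func main() {
	h := map[int32]float64{0: 2, 1: 3}
	other := map[int32]float64{1: 1, 4: 5}
	fmt.Println(addSparse(h, other)) // bucket 1 summed, bucket 4 inserted
}
```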

// Sub works like Add but subtracts the other histogram.
func (h *FloatHistogram) Sub(other *FloatHistogram) *FloatHistogram {
	otherZeroCount := h.reconcileZeroBuckets(other)
	h.ZeroCount -= otherZeroCount
	h.Count -= other.Count
	h.Sum -= other.Sum

	// TODO(beorn7): If needed, this can be optimized by inspecting the
	// spans in other and creating missing buckets in h in batches.
	var iInSpan, index int32
	for iSpan, iBucket, it := -1, -1, other.floatBucketIterator(true, h.ZeroThreshold, h.Schema); it.Next(); {
		b := it.At()
		b.Count *= -1
		h.PositiveSpans, h.PositiveBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, h.PositiveSpans, h.PositiveBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}
	for iSpan, iBucket, it := -1, -1, other.floatBucketIterator(false, h.ZeroThreshold, h.Schema); it.Next(); {
		b := it.At()
		b.Count *= -1
		h.NegativeSpans, h.NegativeBuckets, iSpan, iBucket, iInSpan = addBucket(
			b, h.NegativeSpans, h.NegativeBuckets, iSpan, iBucket, iInSpan, index,
		)
		index = b.Index
	}
	return h
}

// addBucket takes the "coordinates" of the last bucket that was handled and
// adds the provided bucket after it. If a corresponding bucket exists, the
// count is added. If not, the bucket is inserted. The updated slices and the
// coordinates of the inserted or added-to bucket are returned.
func addBucket(
	b Bucket[float64],
	spans []Span, buckets []float64,
	iSpan, iBucket int,
	iInSpan, index int32,
) (
	newSpans []Span, newBuckets []float64,
	newISpan, newIBucket int, newIInSpan int32,
) {
	if iSpan == -1 {
		// First add, check if it is before all spans.
		if len(spans) == 0 || spans[0].Offset > b.Index {
			// Add bucket before all others.
			buckets = append(buckets, 0)
			copy(buckets[1:], buckets)
			buckets[0] = b.Count
			if len(spans) > 0 && spans[0].Offset == b.Index+1 {
				spans[0].Length++
				spans[0].Offset--
				return spans, buckets, 0, 0, 0
			}
			spans = append(spans, Span{})
			copy(spans[1:], spans)
			spans[0] = Span{Offset: b.Index, Length: 1}
			if len(spans) > 1 {
				// Convert the absolute offset in the formerly
				// first span to a relative offset.
				spans[1].Offset -= b.Index + 1
			}
			return spans, buckets, 0, 0, 0
		}
		if spans[0].Offset == b.Index {
			// Just add to first bucket.
			buckets[0] += b.Count
			return spans, buckets, 0, 0, 0
		}
		// We are behind the first bucket, so set everything to the
		// first bucket and continue normally.
		iSpan, iBucket, iInSpan = 0, 0, 0
		index = spans[0].Offset
	}
	deltaIndex := b.Index - index
	for {
		remainingInSpan := int32(spans[iSpan].Length) - iInSpan
		if deltaIndex < remainingInSpan {
			// Bucket is in current span.
			iBucket += int(deltaIndex)
			iInSpan += deltaIndex
			buckets[iBucket] += b.Count
			return spans, buckets, iSpan, iBucket, iInSpan
		}
		deltaIndex -= remainingInSpan
		iBucket += int(remainingInSpan)
		iSpan++
		if iSpan == len(spans) || deltaIndex < spans[iSpan].Offset {
			// Bucket is in gap behind previous span (or there are no further spans).
			buckets = append(buckets, 0)
			copy(buckets[iBucket+1:], buckets[iBucket:])
			buckets[iBucket] = b.Count
			if deltaIndex == 0 {
				// Directly after previous span, extend previous span.
				if iSpan < len(spans) {
					spans[iSpan].Offset--
				}
				iSpan--
				iInSpan = int32(spans[iSpan].Length)
				spans[iSpan].Length++
				return spans, buckets, iSpan, iBucket, iInSpan
			}
			if iSpan < len(spans) && deltaIndex == spans[iSpan].Offset-1 {
				// Directly before next span, extend next span.
				iInSpan = 0
				spans[iSpan].Offset--
				spans[iSpan].Length++
				return spans, buckets, iSpan, iBucket, iInSpan
			}
			// No next span, or next span is not directly adjacent to new bucket.
			// Add new span.
			iInSpan = 0
			if iSpan < len(spans) {
				spans[iSpan].Offset -= deltaIndex + 1
			}
			spans = append(spans, Span{})
			copy(spans[iSpan+1:], spans[iSpan:])
			spans[iSpan] = Span{Length: 1, Offset: deltaIndex}
			return spans, buckets, iSpan, iBucket, iInSpan
		}
		// Try start of next span.
		deltaIndex -= spans[iSpan].Offset
		iInSpan = 0
	}
}
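The span "coordinates" that addBucket navigates are a run-length encoding: each span's Offset is relative to the end of the previous span (absolute for the first span), and Length counts consecutive buckets. A standalone sketch that decodes this layout into absolute bucket indices (the `span` type here mirrors Span, and `absoluteIndices` is a hypothetical helper for illustration):

```go
package main

import "fmt"

// span mirrors the Span type used above: Offset is relative to the end of
// the previous span (absolute for the first span), Length is the number of
// consecutive buckets.
type span struct {
	Offset int32
	Length uint32
}

// absoluteIndices expands a span list into the absolute bucket indices it
// encodes, i.e. the coordinate system addBucket walks with iSpan/iInSpan.
func absoluteIndices(spans []span) []int32 {
	var out []int32
	var idx int32
	for _, s := range spans {
		idx += s.Offset
		for i := uint32(0); i < s.Length; i++ {
			out = append(out, idx)
			idx++
		}
	}
	return out
}

func main() {
	// Two spans: three buckets starting at index -2, then a gap of two
	// empty buckets, then two more buckets.
	spans := []span{{Offset: -2, Length: 3}, {Offset: 2, Length: 2}}
	fmt.Println(absoluteIndices(spans)) // [-2 -1 0 3 4]
}
```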

// Compact eliminates empty buckets at the beginning and end of each span, then
// merges spans that are consecutive or at most maxEmptyBuckets apart, and
// finally splits spans that contain more consecutive empty buckets than
// maxEmptyBuckets. (The actual implementation might do something more efficient
// but with the same result.) The compaction happens "in place" in the
// receiving histogram, but a pointer to it is returned for convenience.
//
// The ideal value for maxEmptyBuckets depends on circumstances. The motivation
// to set maxEmptyBuckets > 0 is the assumption that it is less overhead to
// represent very few empty buckets explicitly within one span than cutting the
// one span into two to treat the empty buckets as a gap between the two spans,
// both in terms of storage requirement as well as in terms of encoding and
// decoding effort. However, the tradeoffs are subtle. For one, they are
// different in the exposition format vs. in a TSDB chunk vs. for the in-memory
// representation as Go types. In the TSDB, as an additional aspect, the span
// layout is only stored once per chunk, while many histograms with that same
// chunk layout are then only stored with their buckets (so that even a single
// empty bucket will be stored many times).
//
// For the Go types, an additional Span takes 8 bytes. Similarly, an additional
// bucket takes 8 bytes. Therefore, with a single separating empty bucket, both
// options have the same storage requirement, but the single-span solution is
// easier to iterate through. Still, the safest bet is to use maxEmptyBuckets==0
// and only use a larger number if you know what you are doing.
func (h *FloatHistogram) Compact(maxEmptyBuckets int) *FloatHistogram {
	h.PositiveBuckets, h.PositiveSpans = compactBuckets(
		h.PositiveBuckets, h.PositiveSpans, maxEmptyBuckets, false,
	)
	h.NegativeBuckets, h.NegativeSpans = compactBuckets(
		h.NegativeBuckets, h.NegativeSpans, maxEmptyBuckets, false,
	)
	return h
}
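One part of what the comment above describes can be sketched in isolation: merging spans whose gap is at most maxEmptyBuckets by materializing explicit zero buckets. This is a simplified, hypothetical illustration, not the real compactBuckets (which also trims empty buckets at span edges and splits overly sparse spans):

```go
package main

import "fmt"

type span struct {
	Offset int32
	Length uint32
}

// mergeCloseSpans merges a span into its predecessor when the gap between
// them is at most maxEmpty buckets, inserting explicit zero buckets for the
// gap; wider gaps keep the spans separate.
func mergeCloseSpans(spans []span, buckets []float64, maxEmpty int) ([]span, []float64) {
	if len(spans) == 0 {
		return spans, buckets
	}
	outSpans := []span{spans[0]}
	outBuckets := append([]float64{}, buckets[:int(spans[0].Length)]...)
	iBucket := int(spans[0].Length)
	for _, s := range spans[1:] {
		gap := int(s.Offset)
		if gap <= maxEmpty {
			// Fill the gap with explicit zero buckets and extend
			// the previous span to cover gap plus new buckets.
			for i := 0; i < gap; i++ {
				outBuckets = append(outBuckets, 0)
			}
			outSpans[len(outSpans)-1].Length += uint32(gap) + s.Length
		} else {
			outSpans = append(outSpans, s)
		}
		outBuckets = append(outBuckets, buckets[iBucket:iBucket+int(s.Length)]...)
		iBucket += int(s.Length)
	}
	return outSpans, outBuckets
}

func main() {
	spans := []span{{Offset: 0, Length: 2}, {Offset: 1, Length: 1}}
	buckets := []float64{3, 1, 4}
	s, b := mergeCloseSpans(spans, buckets, 1)
	fmt.Println(s, b) // one span of length 4, with a zero filling the gap
}
```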
|
||||||
|
|
||||||
|
// DetectReset returns true if the receiving histogram is missing any buckets
|
||||||
|
// that have a non-zero population in the provided previous histogram. It also
|
||||||
|
// returns true if any count (in any bucket, in the zero count, or in the count
|
||||||
|
// of observations, but NOT the sum of observations) is smaller in the receiving
|
||||||
|
// histogram compared to the previous histogram. Otherwise, it returns false.
|
||||||
|
//
|
||||||
|
// Special behavior in case the Schema or the ZeroThreshold are not the same in
|
||||||
|
// both histograms:
|
||||||
|
//
|
||||||
|
// - A decrease of the ZeroThreshold or an increase of the Schema (i.e. an
|
||||||
|
// increase of resolution) can only happen together with a reset. Thus, the
|
||||||
|
// method returns true in either case.
|
||||||
|
//
|
||||||
|
// - Upon an increase of the ZeroThreshold, the buckets in the previous
|
||||||
|
// histogram that fall within the new ZeroThreshold are added to the ZeroCount
|
||||||
|
// of the previous histogram (without mutating the provided previous
|
||||||
|
// histogram). The scenario that a populated bucket of the previous histogram
|
||||||
|
// is partially within, partially outside of the new ZeroThreshold, can only
|
||||||
|
// happen together with a counter reset and therefore shortcuts to returning
|
||||||
|
// true.
|
||||||
|
//
|
||||||
|
// - Upon a decrease of the Schema, the buckets of the previous histogram are
|
||||||
|
// merged so that they match the new, lower-resolution schema (again without
|
||||||
|
// mutating the provided previous histogram).
|
||||||
|
//
|
||||||
|
// Note that this kind of reset detection is quite expensive. Ideally, resets
|
||||||
|
// are detected at ingest time and stored in the TSDB, so that the reset
|
||||||
|
// information can be read directly from there rather than be detected each time
|
||||||
|
// again.
|
||||||
|
func (h *FloatHistogram) DetectReset(previous *FloatHistogram) bool {
|
||||||
|
if h.Count < previous.Count {
|
||||||
|
return true
|
||||||
|
	}
	if h.Schema > previous.Schema {
		return true
	}
	if h.ZeroThreshold < previous.ZeroThreshold {
		// ZeroThreshold decreased.
		return true
	}
	previousZeroCount, newThreshold := previous.zeroCountForLargerThreshold(h.ZeroThreshold)
	if newThreshold != h.ZeroThreshold {
		// ZeroThreshold is within a populated bucket in previous
		// histogram.
		return true
	}
	if h.ZeroCount < previousZeroCount {
		return true
	}
	currIt := h.floatBucketIterator(true, h.ZeroThreshold, h.Schema)
	prevIt := previous.floatBucketIterator(true, h.ZeroThreshold, h.Schema)
	if detectReset(currIt, prevIt) {
		return true
	}
	currIt = h.floatBucketIterator(false, h.ZeroThreshold, h.Schema)
	prevIt = previous.floatBucketIterator(false, h.ZeroThreshold, h.Schema)
	return detectReset(currIt, prevIt)
}

func detectReset(currIt, prevIt BucketIterator[float64]) bool {
	if !prevIt.Next() {
		return false // If no buckets in previous histogram, nothing can be reset.
	}
	prevBucket := prevIt.At()
	if !currIt.Next() {
		// No bucket in current, but at least one in previous
		// histogram. Check if any of those are non-zero, in which case
		// this is a reset.
		for {
			if prevBucket.Count != 0 {
				return true
			}
			if !prevIt.Next() {
				return false
			}
			prevBucket = prevIt.At() // Advance to the next remaining bucket.
		}
	}
	currBucket := currIt.At()
	for {
		// Forward currIt until we find the bucket corresponding to prevBucket.
		for currBucket.Index < prevBucket.Index {
			if !currIt.Next() {
				// Reached end of currIt early, therefore
				// previous histogram has a bucket that the
				// current one does not have. Unless all
				// remaining buckets in the previous histogram
				// are unpopulated, this is a reset.
				for {
					if prevBucket.Count != 0 {
						return true
					}
					if !prevIt.Next() {
						return false
					}
					prevBucket = prevIt.At() // Advance to the next remaining bucket.
				}
			}
			currBucket = currIt.At()
		}
		if currBucket.Index > prevBucket.Index {
			// Previous histogram has a bucket the current one does
			// not have. If it's populated, it's a reset.
			if prevBucket.Count != 0 {
				return true
			}
		} else {
			// We have reached corresponding buckets in both iterators.
			// We can finally compare the counts.
			if currBucket.Count < prevBucket.Count {
				return true
			}
		}
		if !prevIt.Next() {
			// Reached end of prevIt without finding offending buckets.
			return false
		}
		prevBucket = prevIt.At()
	}
}
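Once both histograms are viewed at a common schema and zero threshold, the reset check above boils down to: any populated bucket of the previous histogram that is missing or smaller in the current one signals a reset. A minimal, hypothetical sketch over plain index→count maps (not the iterator-based implementation above):

```go
package main

import "fmt"

// isReset is an illustrative simplification of detectReset: it assumes both
// histograms were already merged to the same schema and zero threshold, and
// represents them as maps from bucket index to count. A missing bucket reads
// as count 0, so "bucket vanished" and "bucket shrank" are one condition.
func isReset(curr, prev map[int32]float64) bool {
	for idx, prevCount := range prev {
		if prevCount != 0 && curr[idx] < prevCount {
			return true
		}
	}
	return false
}

func main() {
	prev := map[int32]float64{1: 3, 2: 5}
	fmt.Println(isReset(map[int32]float64{1: 3, 2: 5}, prev)) // false: nothing shrank
	fmt.Println(isReset(map[int32]float64{1: 3, 2: 4}, prev)) // true: bucket 2 shrank
	fmt.Println(isReset(map[int32]float64{1: 3}, prev))       // true: bucket 2 vanished
}
```

The real implementation walks both bucket iterators in lockstep instead of building maps, which avoids allocations on the hot path.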
|
||||||
|

// PositiveBucketIterator returns a BucketIterator to iterate over all positive
// buckets in ascending order (starting next to the zero bucket and going up).
func (h *FloatHistogram) PositiveBucketIterator() BucketIterator[float64] {
	return h.floatBucketIterator(true, 0, h.Schema)
}

// NegativeBucketIterator returns a BucketIterator to iterate over all negative
// buckets in descending order (starting next to the zero bucket and going
// down).
func (h *FloatHistogram) NegativeBucketIterator() BucketIterator[float64] {
	return h.floatBucketIterator(false, 0, h.Schema)
}

// PositiveReverseBucketIterator returns a BucketIterator to iterate over all
// positive buckets in descending order (starting at the highest bucket and
// going down towards the zero bucket).
func (h *FloatHistogram) PositiveReverseBucketIterator() BucketIterator[float64] {
	return newReverseFloatBucketIterator(h.PositiveSpans, h.PositiveBuckets, h.Schema, true)
}

// NegativeReverseBucketIterator returns a BucketIterator to iterate over all
// negative buckets in ascending order (starting at the lowest bucket and going
// up towards the zero bucket).
func (h *FloatHistogram) NegativeReverseBucketIterator() BucketIterator[float64] {
	return newReverseFloatBucketIterator(h.NegativeSpans, h.NegativeBuckets, h.Schema, false)
}

// AllBucketIterator returns a BucketIterator to iterate over all negative,
// zero, and positive buckets in ascending order (starting at the lowest bucket
// and going up). If the highest negative bucket or the lowest positive bucket
// overlap with the zero bucket, their upper or lower boundary, respectively, is
// set to the zero threshold.
func (h *FloatHistogram) AllBucketIterator() BucketIterator[float64] {
	return &allFloatBucketIterator{
		h:       h,
		negIter: h.NegativeReverseBucketIterator(),
		posIter: h.PositiveBucketIterator(),
		state:   -1,
	}
}

// zeroCountForLargerThreshold returns what the histogram's zero count would be
// if the ZeroThreshold had the provided larger (or equal) value. If the
// provided value is less than the histogram's ZeroThreshold, the method panics.
// If the largerThreshold ends up within a populated bucket of the histogram, it
// is adjusted upwards to the lower limit of that bucket (all in terms of
// absolute values) and that bucket's count is included in the returned
// count. The adjusted threshold is returned, too.
func (h *FloatHistogram) zeroCountForLargerThreshold(largerThreshold float64) (count, threshold float64) {
	// Fast path.
	if largerThreshold == h.ZeroThreshold {
		return h.ZeroCount, largerThreshold
	}
	if largerThreshold < h.ZeroThreshold {
		panic(fmt.Errorf("new threshold %f is less than old threshold %f", largerThreshold, h.ZeroThreshold))
	}
outer:
	for {
		count = h.ZeroCount
		i := h.PositiveBucketIterator()
		for i.Next() {
			b := i.At()
			if b.Lower >= largerThreshold {
				break
			}
			count += b.Count // Bucket to be merged into zero bucket.
			if b.Upper > largerThreshold {
				// New threshold ended up within a bucket. If it's
				// populated, we need to adjust largerThreshold before
				// we are done here.
				if b.Count != 0 {
					largerThreshold = b.Upper
				}
				break
			}
		}
		i = h.NegativeBucketIterator()
		for i.Next() {
			b := i.At()
			if b.Upper <= -largerThreshold {
				break
			}
			count += b.Count // Bucket to be merged into zero bucket.
			if b.Lower < -largerThreshold {
				// New threshold ended up within a bucket. If
				// it's populated, we need to adjust
				// largerThreshold and have to redo the whole
				// thing because the treatment of the positive
				// buckets is invalid now.
				if b.Count != 0 {
					largerThreshold = -b.Lower
					continue outer
				}
				break
			}
		}
		return count, largerThreshold
	}
}
|
||||||
|

// trimBucketsInZeroBucket removes all buckets that are within the zero
// bucket. It assumes that the zero threshold is at a bucket boundary and that
// the counts in the buckets to remove are already part of the zero count.
func (h *FloatHistogram) trimBucketsInZeroBucket() {
	i := h.PositiveBucketIterator()
	bucketsIdx := 0
	for i.Next() {
		b := i.At()
		if b.Lower >= h.ZeroThreshold {
			break
		}
		h.PositiveBuckets[bucketsIdx] = 0
		bucketsIdx++
	}
	i = h.NegativeBucketIterator()
	bucketsIdx = 0
	for i.Next() {
		b := i.At()
		if b.Upper <= -h.ZeroThreshold {
			break
		}
		h.NegativeBuckets[bucketsIdx] = 0
		bucketsIdx++
	}
	// We are abusing Compact to trim the buckets set to zero
	// above. Premature compacting could cause additional cost, but this
	// code path is probably rarely used anyway.
	h.Compact(0)
}

// reconcileZeroBuckets finds a zero bucket large enough to include the zero
// buckets of both histograms (the receiving histogram and the other histogram)
// with a zero threshold that is not within a populated bucket in either
// histogram. This method modifies the receiving histogram accordingly, but
// leaves the other histogram as is. Instead, it returns the zero count the
// other histogram would have if it were modified.
func (h *FloatHistogram) reconcileZeroBuckets(other *FloatHistogram) float64 {
	otherZeroCount := other.ZeroCount
	otherZeroThreshold := other.ZeroThreshold

	for otherZeroThreshold != h.ZeroThreshold {
		if h.ZeroThreshold > otherZeroThreshold {
			otherZeroCount, otherZeroThreshold = other.zeroCountForLargerThreshold(h.ZeroThreshold)
		}
		if otherZeroThreshold > h.ZeroThreshold {
			h.ZeroCount, h.ZeroThreshold = h.zeroCountForLargerThreshold(otherZeroThreshold)
			h.trimBucketsInZeroBucket()
		}
	}
	return otherZeroCount
}

// floatBucketIterator is a low-level constructor for bucket iterators.
//
// If positive is true, the returned iterator iterates through the positive
// buckets, otherwise through the negative buckets.
//
// If absoluteStartValue is < the lowest absolute value of any upper bucket
// boundary, the iterator starts with the first bucket. Otherwise, it will skip
// all buckets with an absolute value of their upper boundary ≤
// absoluteStartValue.
//
// targetSchema must be ≤ the schema of FloatHistogram (and of course within the
// legal values for schemas in general). The buckets are merged to match the
// targetSchema prior to iterating (without mutating FloatHistogram).
func (h *FloatHistogram) floatBucketIterator(
	positive bool, absoluteStartValue float64, targetSchema int32,
) *floatBucketIterator {
	if targetSchema > h.Schema {
		panic(fmt.Errorf("cannot merge from schema %d to %d", h.Schema, targetSchema))
	}
	i := &floatBucketIterator{
		baseBucketIterator: baseBucketIterator[float64, float64]{
			schema:   h.Schema,
			positive: positive,
		},
		targetSchema:       targetSchema,
		absoluteStartValue: absoluteStartValue,
	}
	if positive {
		i.spans = h.PositiveSpans
		i.buckets = h.PositiveBuckets
	} else {
		i.spans = h.NegativeSpans
		i.buckets = h.NegativeBuckets
	}
	return i
}

// newReverseFloatBucketIterator is a low-level constructor for reverse bucket iterators.
func newReverseFloatBucketIterator(
	spans []Span, buckets []float64, schema int32, positive bool,
) *reverseFloatBucketIterator {
	r := &reverseFloatBucketIterator{
		baseBucketIterator: baseBucketIterator[float64, float64]{
			schema:   schema,
			spans:    spans,
			buckets:  buckets,
			positive: positive,
		},
	}

	r.spansIdx = len(r.spans) - 1
	r.bucketsIdx = len(r.buckets) - 1
	if r.spansIdx >= 0 {
		r.idxInSpan = int32(r.spans[r.spansIdx].Length) - 1
	}
	r.currIdx = 0
	for _, s := range r.spans {
		r.currIdx += s.Offset + int32(s.Length)
	}

	return r
}

type floatBucketIterator struct {
	baseBucketIterator[float64, float64]

	targetSchema       int32   // targetSchema is the schema to merge to and must be ≤ schema.
	origIdx            int32   // The bucket index within the original schema.
	absoluteStartValue float64 // Never return buckets with an upper bound ≤ this value.
}

func (i *floatBucketIterator) Next() bool {
	if i.spansIdx >= len(i.spans) {
		return false
	}

	// Copy all of these into local variables so that we can forward to the
	// next bucket and then roll back if needed.
	origIdx, spansIdx, idxInSpan := i.origIdx, i.spansIdx, i.idxInSpan
	span := i.spans[spansIdx]
	firstPass := true
	i.currCount = 0

mergeLoop: // Merge together all buckets from the original schema that fall into one bucket in the targetSchema.
	for {
		if i.bucketsIdx == 0 {
			// Seed origIdx for the first bucket.
			origIdx = span.Offset
		} else {
			origIdx++
		}
		for idxInSpan >= span.Length {
			// We have exhausted the current span and have to find a new
			// one. We even handle pathologic spans of length 0 here.
			idxInSpan = 0
			spansIdx++
			if spansIdx >= len(i.spans) {
				if firstPass {
					return false
				}
				break mergeLoop
			}
			span = i.spans[spansIdx]
			origIdx += span.Offset
		}
		currIdx := i.targetIdx(origIdx)
		if firstPass {
			i.currIdx = currIdx
			firstPass = false
		} else if currIdx != i.currIdx {
			// Reached next bucket in targetSchema.
			// Do not actually forward to the next bucket, but break out.
			break mergeLoop
		}
		i.currCount += i.buckets[i.bucketsIdx]
		idxInSpan++
		i.bucketsIdx++
		i.origIdx, i.spansIdx, i.idxInSpan = origIdx, spansIdx, idxInSpan
		if i.schema == i.targetSchema {
			// Don't need to test the next bucket for mergeability
			// if we have no schema change anyway.
			break mergeLoop
		}
	}
	// Skip buckets before absoluteStartValue.
	// TODO(beorn7): Maybe do something more efficient than this recursive call.
	if getBound(i.currIdx, i.targetSchema) <= i.absoluteStartValue {
		return i.Next()
	}
	return true
}

// targetIdx returns the bucket index within i.targetSchema for the given bucket
// index within i.schema.
func (i *floatBucketIterator) targetIdx(idx int32) int32 {
	if i.schema == i.targetSchema {
		// Fast path for the common case. The below would yield the same
		// result, just with more effort.
		return idx
	}
	return ((idx - 1) >> (i.schema - i.targetSchema)) + 1
}
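The bit-shift in targetIdx merges 2^(schema−targetSchema) adjacent fine-grained buckets into one coarse bucket, with the −1/+1 offsets accounting for bucket indices being upper-bound based. A standalone sketch of the same formula (the free-function form is illustrative):

```go
package main

import "fmt"

// targetIdx maps a bucket index at resolution `schema` to the index of the
// enclosing bucket at the coarser resolution `targetSchema` (targetSchema must
// be <= schema), using the same formula as the method above.
func targetIdx(idx, schema, targetSchema int32) int32 {
	return ((idx - 1) >> (schema - targetSchema)) + 1
}

func main() {
	// At schema 2 there are 4 buckets per power of two; at schema 0 just
	// one. Indices 1..4 collapse into coarse bucket 1, indices 5..8 into 2.
	for idx := int32(1); idx <= 8; idx++ {
		fmt.Print(targetIdx(idx, 2, 0), " ")
	}
	fmt.Println() // 1 1 1 1 2 2 2 2
}
```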
|
||||||
|

type reverseFloatBucketIterator struct {
	baseBucketIterator[float64, float64]
	idxInSpan int32 // Changed from uint32 to allow negative values for exhaustion detection.
}

func (i *reverseFloatBucketIterator) Next() bool {
	i.currIdx--
	if i.bucketsIdx < 0 {
		return false
	}

	for i.idxInSpan < 0 {
		// We have exhausted the current span and have to find a new
		// one. We'll even handle pathologic spans of length 0.
		i.spansIdx--
		i.idxInSpan = int32(i.spans[i.spansIdx].Length) - 1
		i.currIdx -= i.spans[i.spansIdx+1].Offset
	}

	i.currCount = i.buckets[i.bucketsIdx]
	i.bucketsIdx--
	i.idxInSpan--
	return true
}

type allFloatBucketIterator struct {
	h                *FloatHistogram
	negIter, posIter BucketIterator[float64]
	// -1 means we are iterating negative buckets.
	// 0 means it is time for the zero bucket.
	// 1 means we are iterating positive buckets.
	// Anything else means iteration is over.
	state      int8
	currBucket Bucket[float64]
}

func (i *allFloatBucketIterator) Next() bool {
	switch i.state {
	case -1:
		if i.negIter.Next() {
			i.currBucket = i.negIter.At()
			if i.currBucket.Upper > -i.h.ZeroThreshold {
				i.currBucket.Upper = -i.h.ZeroThreshold
			}
			return true
		}
		i.state = 0
		return i.Next()
	case 0:
		i.state = 1
		if i.h.ZeroCount > 0 {
			i.currBucket = Bucket[float64]{
				Lower:          -i.h.ZeroThreshold,
				Upper:          i.h.ZeroThreshold,
				LowerInclusive: true,
				UpperInclusive: true,
				Count:          i.h.ZeroCount,
				// Index is irrelevant for the zero bucket.
			}
			return true
		}
		return i.Next()
	case 1:
		if i.posIter.Next() {
			i.currBucket = i.posIter.At()
			if i.currBucket.Lower < i.h.ZeroThreshold {
				i.currBucket.Lower = i.h.ZeroThreshold
			}
			return true
		}
		i.state = 42
		return false
	}

	return false
}

func (i *allFloatBucketIterator) At() Bucket[float64] {
	return i.currBucket
}
1836 model/histogram/float_histogram_test.go (Normal file)
File diff suppressed because it is too large

536 model/histogram/generic.go (Normal file)
@@ -0,0 +1,536 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package histogram

import (
	"fmt"
	"math"
	"strings"
)

// BucketCount is a type constraint for the count in a bucket, which can be
// float64 (for type FloatHistogram) or uint64 (for type Histogram).
type BucketCount interface {
	float64 | uint64
}

// internalBucketCount is used internally by Histogram and FloatHistogram. The
// difference to the BucketCount above is that Histogram internally uses deltas
// between buckets rather than absolute counts (while FloatHistogram uses
// absolute counts directly). Go type parameters don't allow type
// specialization. Therefore, where special treatment of deltas between buckets
// vs. absolute counts is important, this information has to be provided as a
// separate boolean parameter "deltaBuckets".
type internalBucketCount interface {
	float64 | int64
}
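To illustrate the delta-vs-absolute distinction the comment describes: Histogram's buckets hold differences between consecutive counts, which decode to absolute counts by a running sum. A minimal, hypothetical sketch:

```go
package main

import "fmt"

// absoluteFromDeltas decodes delta-encoded bucket counts (as stored by
// Histogram) into the absolute counts that FloatHistogram stores directly.
// (Helper name is illustrative, not part of the package API.)
func absoluteFromDeltas(deltas []int64) []int64 {
	abs := make([]int64, len(deltas))
	var cur int64
	for i, d := range deltas {
		cur += d // Each delta is relative to the previous bucket's count.
		abs[i] = cur
	}
	return abs
}

func main() {
	fmt.Println(absoluteFromDeltas([]int64{2, 1, -3, 4})) // [2 3 0 4]
}
```

Delta encoding keeps integer counts small and compressible, which is why `internalBucketCount` admits `int64` (deltas can be negative) rather than `uint64`.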
|
||||||
|

// Bucket represents a bucket with lower and upper limit and the absolute count
// of samples in the bucket. It also specifies if each limit is inclusive or
// not. (Mathematically, inclusive limits create a closed interval, and
// non-inclusive limits an open interval.)
//
// To represent cumulative buckets, Lower is set to -Inf, and the Count is then
// cumulative (including the counts of all buckets for smaller values).
type Bucket[BC BucketCount] struct {
	Lower, Upper                   float64
	LowerInclusive, UpperInclusive bool
	Count                          BC

	// Index within schema. To easily compare buckets that share the same
	// schema and sign (positive or negative). Irrelevant for the zero bucket.
	Index int32
}

// String returns a string representation of a Bucket, using the usual
// mathematical notation of '['/']' for inclusive bounds and '('/')' for
// non-inclusive bounds.
func (b Bucket[BC]) String() string {
	var sb strings.Builder
	if b.LowerInclusive {
		sb.WriteRune('[')
	} else {
		sb.WriteRune('(')
	}
	fmt.Fprintf(&sb, "%g,%g", b.Lower, b.Upper)
	if b.UpperInclusive {
		sb.WriteRune(']')
	} else {
		sb.WriteRune(')')
	}
	fmt.Fprintf(&sb, ":%v", b.Count)
	return sb.String()
}

// BucketIterator iterates over the buckets of a Histogram, returning decoded
// buckets.
type BucketIterator[BC BucketCount] interface {
	// Next advances the iterator by one.
	Next() bool
	// At returns the current bucket.
	At() Bucket[BC]
}

// baseBucketIterator provides a struct that is shared by most BucketIterator
// implementations, together with an implementation of the At method. This
// iterator can be embedded in full implementations of BucketIterator to save on
// code replication.
type baseBucketIterator[BC BucketCount, IBC internalBucketCount] struct {
	schema  int32
	spans   []Span
	buckets []IBC

	positive bool // Whether this is for positive buckets.

	spansIdx   int    // Current span within spans slice.
	idxInSpan  uint32 // Index in the current span. 0 <= idxInSpan < span.Length.
	bucketsIdx int    // Current bucket within buckets slice.

	currCount IBC   // Count in the current bucket.
	currIdx   int32 // The actual bucket index.
}

func (b baseBucketIterator[BC, IBC]) At() Bucket[BC] {
	bucket := Bucket[BC]{
		Count: BC(b.currCount),
		Index: b.currIdx,
	}
	if b.positive {
		bucket.Upper = getBound(b.currIdx, b.schema)
		bucket.Lower = getBound(b.currIdx-1, b.schema)
	} else {
		bucket.Lower = -getBound(b.currIdx, b.schema)
		bucket.Upper = -getBound(b.currIdx-1, b.schema)
	}
	bucket.LowerInclusive = bucket.Lower < 0
	bucket.UpperInclusive = bucket.Upper > 0
	return bucket
}
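The spans/buckets pair these iterators walk is a run-length encoding of a sparse array: each Span contributes Length consecutive populated buckets, starting Offset indices after where the previous span ended. A small, hypothetical sketch that expands the encoding into absolute bucket indices:

```go
package main

import "fmt"

// Span mirrors the layout used above: a gap (Offset) followed by a run of
// populated buckets (Length).
type Span struct {
	Offset int32
	Length uint32
}

// expand converts the span/bucket encoding into a map from absolute bucket
// index to count. (Illustrative helper, not part of the package API.)
func expand(spans []Span, buckets []float64) map[int32]float64 {
	out := map[int32]float64{}
	idx := int32(0) // Running absolute index.
	b := 0          // Position in the flat buckets slice.
	for _, s := range spans {
		idx += s.Offset
		for j := uint32(0); j < s.Length; j++ {
			out[idx] = buckets[b]
			idx++
			b++
		}
	}
	return out
}

func main() {
	spans := []Span{{Offset: 1, Length: 2}, {Offset: 3, Length: 1}}
	buckets := []float64{5, 2, 7}
	fmt.Println(expand(spans, buckets)) // map[1:5 2:2 6:7]
}
```

Only populated runs cost memory, which is what makes the exponential bucketing "sparse" in practice.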
|
||||||
|

// compactBuckets is a generic function used by both Histogram.Compact and
// FloatHistogram.Compact. Set deltaBuckets to true if the provided buckets are
// deltas. Set it to false if the buckets contain absolute counts.
func compactBuckets[IBC internalBucketCount](buckets []IBC, spans []Span, maxEmptyBuckets int, deltaBuckets bool) ([]IBC, []Span) {
	// Fast path: If there are no empty buckets AND no offset in any span is
	// <= maxEmptyBuckets AND no span has length 0, there is nothing to do
	// and we can return immediately. We check that first because it's cheap
	// and presumably common.
	nothingToDo := true
	var currentBucketAbsolute IBC
	for _, bucket := range buckets {
		if deltaBuckets {
			currentBucketAbsolute += bucket
		} else {
			currentBucketAbsolute = bucket
		}
		if currentBucketAbsolute == 0 {
			nothingToDo = false
			break
		}
	}
	if nothingToDo {
		for _, span := range spans {
			if int(span.Offset) <= maxEmptyBuckets || span.Length == 0 {
				nothingToDo = false
				break
			}
		}
		if nothingToDo {
			return buckets, spans
		}
	}

	var iBucket, iSpan int
	var posInSpan uint32
	currentBucketAbsolute = 0

	// Helper function.
	emptyBucketsHere := func() int {
		i := 0
		abs := currentBucketAbsolute
		for uint32(i)+posInSpan < spans[iSpan].Length && abs == 0 {
			i++
			if i+iBucket >= len(buckets) {
				break
			}
			abs = buckets[i+iBucket]
		}
		return i
	}

	// Merge spans with zero-offset to avoid special cases later.
	if len(spans) > 1 {
		for i, span := range spans[1:] {
			if span.Offset == 0 {
				spans[iSpan].Length += span.Length
				continue
			}
			iSpan++
			if i+1 != iSpan {
				spans[iSpan] = span
			}
		}
		spans = spans[:iSpan+1]
		iSpan = 0
	}

	// Merge spans with zero-length to avoid special cases later.
	for i, span := range spans {
		if span.Length == 0 {
			if i+1 < len(spans) {
				spans[i+1].Offset += span.Offset
			}
			continue
		}
		if i != iSpan {
			spans[iSpan] = span
		}
		iSpan++
	}
	spans = spans[:iSpan]
	iSpan = 0

	// Cut out empty buckets from start and end of spans, no matter
	// what. Also cut out empty buckets from the middle of a span but only
	// if there are more than maxEmptyBuckets consecutive empty buckets.
	for iBucket < len(buckets) {
		if deltaBuckets {
			currentBucketAbsolute += buckets[iBucket]
		} else {
			currentBucketAbsolute = buckets[iBucket]
		}
		if nEmpty := emptyBucketsHere(); nEmpty > 0 {
			if posInSpan > 0 &&
				nEmpty < int(spans[iSpan].Length-posInSpan) &&
				nEmpty <= maxEmptyBuckets {
				// The empty buckets are in the middle of a
				// span, and there are few enough to not bother.
				// Just fast-forward.
				iBucket += nEmpty
				if deltaBuckets {
					currentBucketAbsolute = 0
				}
				posInSpan += uint32(nEmpty)
				continue
			}
			// In all other cases, we cut out the empty buckets.
			if deltaBuckets && iBucket+nEmpty < len(buckets) {
				currentBucketAbsolute = -buckets[iBucket]
				buckets[iBucket+nEmpty] += buckets[iBucket]
			}
			buckets = append(buckets[:iBucket], buckets[iBucket+nEmpty:]...)
			if posInSpan == 0 {
				// Start of span.
				if nEmpty == int(spans[iSpan].Length) {
					// The whole span is empty.
					offset := spans[iSpan].Offset
					spans = append(spans[:iSpan], spans[iSpan+1:]...)
					if len(spans) > iSpan {
						spans[iSpan].Offset += offset + int32(nEmpty)
					}
					continue
				}
				spans[iSpan].Length -= uint32(nEmpty)
				spans[iSpan].Offset += int32(nEmpty)
				continue
			}
			// It's in the middle or in the end of the span.
			// Split the current span.
			newSpan := Span{
				Offset: int32(nEmpty),
				Length: spans[iSpan].Length - posInSpan - uint32(nEmpty),
			}
			spans[iSpan].Length = posInSpan
			// In any case, we have to split to the next span.
			iSpan++
			posInSpan = 0
			if newSpan.Length == 0 {
				// The span is empty, so we were already at the end of a span.
				// We don't have to insert the new span, just adjust the next
				// span's offset, if there is one.
				if iSpan < len(spans) {
					spans[iSpan].Offset += int32(nEmpty)
				}
				continue
			}
			// Insert the new span.
			spans = append(spans, Span{})
			if iSpan+1 < len(spans) {
				copy(spans[iSpan+1:], spans[iSpan:])
			}
			spans[iSpan] = newSpan
			continue
		}
		iBucket++
		posInSpan++
		if posInSpan >= spans[iSpan].Length {
			posInSpan = 0
			iSpan++
		}
	}
	if maxEmptyBuckets == 0 || len(buckets) == 0 {
		return buckets, spans
	}

	// Finally, check if any offsets between spans are small enough to merge
	// the spans.
	iBucket = int(spans[0].Length)
	if deltaBuckets {
		currentBucketAbsolute = 0
		for _, bucket := range buckets[:iBucket] {
			currentBucketAbsolute += bucket
		}
	}
	iSpan = 1
	for iSpan < len(spans) {
		if int(spans[iSpan].Offset) > maxEmptyBuckets {
			l := int(spans[iSpan].Length)
			if deltaBuckets {
				for _, bucket := range buckets[iBucket : iBucket+l] {
					currentBucketAbsolute += bucket
				}
			}
			iBucket += l
			iSpan++
			continue
		}
		// Merge span with previous one and insert empty buckets.
		offset := int(spans[iSpan].Offset)
		spans[iSpan-1].Length += uint32(offset) + spans[iSpan].Length
		spans = append(spans[:iSpan], spans[iSpan+1:]...)
		newBuckets := make([]IBC, len(buckets)+offset)
		copy(newBuckets, buckets[:iBucket])
		copy(newBuckets[iBucket+offset:], buckets[iBucket:])
		if deltaBuckets {
			newBuckets[iBucket] = -currentBucketAbsolute
			newBuckets[iBucket+offset] += currentBucketAbsolute
		}
		iBucket += offset
		buckets = newBuckets
		currentBucketAbsolute = buckets[iBucket]
		// Note that with many merges, it would be more efficient to
		// first record all the chunks of empty buckets to insert and
		// then do it in one go through all the buckets.
	}

	return buckets, spans
}
|
||||||
|

func getBound(idx, schema int32) float64 {
	// Here a bit of context about the behavior for the last bucket counting
	// regular numbers (called simply "last bucket" below) and the bucket
	// counting observations of ±Inf (called "inf bucket" below, with an idx
	// one higher than that of the "last bucket"):
	//
	// If we apply the usual formula to the last bucket, its upper bound
	// would be calculated as +Inf. The reason is that the max possible
	// regular float64 number (math.MaxFloat64) doesn't coincide with one of
	// the calculated bucket boundaries. So the calculated boundary has to
	// be larger than math.MaxFloat64, and the only float64 larger than
	// math.MaxFloat64 is +Inf. However, we want to count actual
	// observations of ±Inf in the inf bucket. Therefore, we have to treat
	// the upper bound of the last bucket specially and set it to
	// math.MaxFloat64. (The upper bound of the inf bucket, with its idx
	// being one higher than that of the last bucket, naturally comes out as
	// +Inf by the usual formula. So that's fine.)
	//
	// math.MaxFloat64 has a frac of 0.9999999999999999 and an exp of
	// 1024. If there were a float64 number following math.MaxFloat64, it
	// would have a frac of 1.0 and an exp of 1024, or equivalently a frac
	// of 0.5 and an exp of 1025. However, since frac must be smaller than
	// 1, and exp must be smaller than 1025, either representation overflows
	// a float64. (Which, in turn, is the reason that math.MaxFloat64 is the
	// largest possible float64. Q.E.D.) However, the formula for
	// calculating the upper bound from the idx and schema of the last
	// bucket results in precisely that. It is either frac=1.0 & exp=1024
	// (for schema < 0) or frac=0.5 & exp=1025 (for schema >= 0). (This is,
	// by the way, a power of two where the exponent itself is a power of
	// two, 2¹⁰ in fact, which coincides with a bucket boundary in all
	// schemas.) So these are the special cases we have to catch below.
	if schema < 0 {
		exp := int(idx) << -schema
		if exp == 1024 {
			// This is the last bucket before the overflow bucket
			// (for ±Inf observations). Return math.MaxFloat64 as
			// explained above.
			return math.MaxFloat64
		}
		return math.Ldexp(1, exp)
	}

	fracIdx := idx & ((1 << schema) - 1)
	frac := exponentialBounds[schema][fracIdx]
	exp := (int(idx) >> schema) + 1
	if frac == 0.5 && exp == 1025 {
		// This is the last bucket before the overflow bucket (for ±Inf
		// observations). Return math.MaxFloat64 as explained above.
		return math.MaxFloat64
	}
	return math.Ldexp(frac, exp)
}
|
||||||
|
|
||||||
|
// exponentialBounds is a precalculated table of bucket bounds in the interval
// [0.5,1) in schema 0 to 8.
var exponentialBounds = [][]float64{
	// Schema "0":
	{0.5},
	// Schema 1:
	{0.5, 0.7071067811865475},
	// Schema 2:
	{0.5, 0.5946035575013605, 0.7071067811865475, 0.8408964152537144},
	// Schema 3:
	{
		0.5, 0.5452538663326288, 0.5946035575013605, 0.6484197773255048,
		0.7071067811865475, 0.7711054127039704, 0.8408964152537144, 0.9170040432046711,
	},
	// Schema 4:
	{
		0.5, 0.5221368912137069, 0.5452538663326288, 0.5693943173783458,
		0.5946035575013605, 0.620928906036742, 0.6484197773255048, 0.6771277734684463,
		0.7071067811865475, 0.7384130729697496, 0.7711054127039704, 0.805245165974627,
		0.8408964152537144, 0.8781260801866495, 0.9170040432046711, 0.9576032806985735,
	},
	// Schema 5:
	{
		0.5, 0.5109485743270583, 0.5221368912137069, 0.5335702003384117,
		0.5452538663326288, 0.5571933712979462, 0.5693943173783458, 0.5818624293887887,
		0.5946035575013605, 0.6076236799902344, 0.620928906036742, 0.6345254785958666,
		0.6484197773255048, 0.6626183215798706, 0.6771277734684463, 0.6919549409819159,
		0.7071067811865475, 0.7225904034885232, 0.7384130729697496, 0.7545822137967112,
		0.7711054127039704, 0.7879904225539431, 0.805245165974627, 0.8228777390769823,
		0.8408964152537144, 0.8593096490612387, 0.8781260801866495, 0.8973545375015533,
		0.9170040432046711, 0.9370838170551498, 0.9576032806985735, 0.9785720620876999,
	},
	// Schema 6:
	{
		0.5, 0.5054446430258502, 0.5109485743270583, 0.5165124395106142,
		0.5221368912137069, 0.5278225891802786, 0.5335702003384117, 0.5393803988785598,
		0.5452538663326288, 0.5511912916539204, 0.5571933712979462, 0.5632608093041209,
		0.5693943173783458, 0.5755946149764913, 0.5818624293887887, 0.5881984958251406,
		0.5946035575013605, 0.6010783657263515, 0.6076236799902344, 0.6142402680534349,
		0.620928906036742, 0.6276903785123455, 0.6345254785958666, 0.6414350080393891,
		0.6484197773255048, 0.6554806057623822, 0.6626183215798706, 0.6698337620266515,
		0.6771277734684463, 0.6845012114872953, 0.6919549409819159, 0.6994898362691555,
		0.7071067811865475, 0.7148066691959849, 0.7225904034885232, 0.7304588970903234,
		0.7384130729697496, 0.7464538641456323, 0.7545822137967112, 0.762799075372269,
		0.7711054127039704, 0.7795022001189185, 0.7879904225539431, 0.7965710756711334,
		0.805245165974627, 0.8140137109286738, 0.8228777390769823, 0.8318382901633681,
		0.8408964152537144, 0.8500531768592616, 0.8593096490612387, 0.8686669176368529,
		0.8781260801866495, 0.8876882462632604, 0.8973545375015533, 0.9071260877501991,
		0.9170040432046711, 0.9269895625416926, 0.9370838170551498, 0.9472879907934827,
		0.9576032806985735, 0.9680308967461471, 0.9785720620876999, 0.9892280131939752,
	},
	// Schema 7:
	{
		0.5, 0.5027149505564014, 0.5054446430258502, 0.5081891574554764,
		0.5109485743270583, 0.5137229745593818, 0.5165124395106142, 0.5193170509806894,
		0.5221368912137069, 0.5249720429003435, 0.5278225891802786, 0.5306886136446309,
		0.5335702003384117, 0.5364674337629877, 0.5393803988785598, 0.5423091811066545,
		0.5452538663326288, 0.5482145409081883, 0.5511912916539204, 0.5541842058618393,
		0.5571933712979462, 0.5602188762048033, 0.5632608093041209, 0.5663192597993595,
		0.5693943173783458, 0.572486072215902, 0.5755946149764913, 0.5787200368168754,
		0.5818624293887887, 0.585021884841625, 0.5881984958251406, 0.5913923554921704,
		0.5946035575013605, 0.5978321960199137, 0.6010783657263515, 0.6043421618132907,
		0.6076236799902344, 0.6109230164863786, 0.6142402680534349, 0.6175755319684665,
		0.620928906036742, 0.6243004885946023, 0.6276903785123455, 0.6310986751971253,
		0.6345254785958666, 0.637970889198196, 0.6414350080393891, 0.6449179367033329,
		0.6484197773255048, 0.6519406325959679, 0.6554806057623822, 0.659039800633032,
		0.6626183215798706, 0.6662162735415805, 0.6698337620266515, 0.6734708931164728,
		0.6771277734684463, 0.6808045103191123, 0.6845012114872953, 0.688217985377265,
		0.6919549409819159, 0.6957121878859629, 0.6994898362691555, 0.7032879969095076,
		0.7071067811865475, 0.7109463010845827, 0.7148066691959849, 0.718687998724491,
		0.7225904034885232, 0.7265139979245261, 0.7304588970903234, 0.7344252166684908,
		0.7384130729697496, 0.7424225829363761, 0.7464538641456323, 0.7505070348132126,
		0.7545822137967112, 0.7586795205991071, 0.762799075372269, 0.7669409989204777,
		0.7711054127039704, 0.7752924388424999, 0.7795022001189185, 0.7837348199827764,
		0.7879904225539431, 0.7922691326262467, 0.7965710756711334, 0.8008963778413465,
		0.805245165974627, 0.8096175675974316, 0.8140137109286738, 0.8184337248834821,
		0.8228777390769823, 0.8273458838280969, 0.8318382901633681, 0.8363550898207981,
		0.8408964152537144, 0.8454623996346523, 0.8500531768592616, 0.8546688815502312,
		0.8593096490612387, 0.8639756154809185, 0.8686669176368529, 0.8733836930995842,
		0.8781260801866495, 0.8828942179666361, 0.8876882462632604, 0.8925083056594671,
		0.8973545375015533, 0.9022270839033115, 0.9071260877501991, 0.9120516927035263,
		0.9170040432046711, 0.9219832844793128, 0.9269895625416926, 0.9320230241988943,
		0.9370838170551498, 0.9421720895161669, 0.9472879907934827, 0.9524316709088368,
		0.9576032806985735, 0.9628029718180622, 0.9680308967461471, 0.9732872087896164,
		0.9785720620876999, 0.9838856116165875, 0.9892280131939752, 0.9945994234836328,
	},
	// Schema 8:
	{
		0.5, 0.5013556375251013, 0.5027149505564014, 0.5040779490592088,
		0.5054446430258502, 0.5068150424757447, 0.5081891574554764, 0.509566998038869,
		0.5109485743270583, 0.5123338964485679, 0.5137229745593818, 0.5151158188430205,
		0.5165124395106142, 0.5179128468009786, 0.5193170509806894, 0.520725062344158,
		0.5221368912137069, 0.5235525479396449, 0.5249720429003435, 0.526395386502313,
		0.5278225891802786, 0.5292536613972564, 0.5306886136446309, 0.5321274564422321,
		0.5335702003384117, 0.5350168559101208, 0.5364674337629877, 0.5379219445313954,
		0.5393803988785598, 0.5408428074966075, 0.5423091811066545, 0.5437795304588847,
		0.5452538663326288, 0.5467321995364429, 0.5482145409081883, 0.549700901315111,
		0.5511912916539204, 0.5526857228508706, 0.5541842058618393, 0.5556867516724088,
		0.5571933712979462, 0.5587040757836845, 0.5602188762048033, 0.5617377836665098,
		0.5632608093041209, 0.564787964283144, 0.5663192597993595, 0.5678547070789026,
		0.5693943173783458, 0.5709381019847808, 0.572486072215902, 0.5740382394200894,
		0.5755946149764913, 0.5771552102951081, 0.5787200368168754, 0.5802891060137493,
		0.5818624293887887, 0.5834400184762408, 0.585021884841625, 0.5866080400818185,
		0.5881984958251406, 0.5897932637314379, 0.5913923554921704, 0.5929957828304968,
		0.5946035575013605, 0.5962156912915756, 0.5978321960199137, 0.5994530835371903,
		0.6010783657263515, 0.6027080545025619, 0.6043421618132907, 0.6059806996384005,
		0.6076236799902344, 0.6092711149137041, 0.6109230164863786, 0.6125793968185725,
		0.6142402680534349, 0.6159056423670379, 0.6175755319684665, 0.6192499490999082,
		0.620928906036742, 0.622612415087629, 0.6243004885946023, 0.6259931389331581,
		0.6276903785123455, 0.6293922197748583, 0.6310986751971253, 0.6328097572894031,
		0.6345254785958666, 0.6362458516947014, 0.637970889198196, 0.6397006037528346,
		0.6414350080393891, 0.6431741147730128, 0.6449179367033329, 0.6466664866145447,
		0.6484197773255048, 0.6501778216898253, 0.6519406325959679, 0.6537082229673385,
		0.6554806057623822, 0.6572577939746774, 0.659039800633032, 0.6608266388015788,
		0.6626183215798706, 0.6644148621029772, 0.6662162735415805, 0.6680225691020727,
		0.6698337620266515, 0.6716498655934177, 0.6734708931164728, 0.6752968579460171,
		0.6771277734684463, 0.6789636531064505, 0.6808045103191123, 0.6826503586020058,
		0.6845012114872953, 0.6863570825438342, 0.688217985377265, 0.690083933630119,
		0.6919549409819159, 0.6938310211492645, 0.6957121878859629, 0.6975984549830999,
		0.6994898362691555, 0.7013863456101023, 0.7032879969095076, 0.7051948041086352,
		0.7071067811865475, 0.7090239421602076, 0.7109463010845827, 0.7128738720527471,
		0.7148066691959849, 0.7167447066838943, 0.718687998724491, 0.7206365595643126,
		0.7225904034885232, 0.7245495448210174, 0.7265139979245261, 0.7284837772007218,
		0.7304588970903234, 0.7324393720732029, 0.7344252166684908, 0.7364164454346837,
		0.7384130729697496, 0.7404151139112358, 0.7424225829363761, 0.7444354947621984,
		0.7464538641456323, 0.7484777058836176, 0.7505070348132126, 0.7525418658117031,
		0.7545822137967112, 0.7566280937263048, 0.7586795205991071, 0.7607365094544071,
		0.762799075372269, 0.7648672334736434, 0.7669409989204777, 0.7690203869158282,
		0.7711054127039704, 0.7731960915705107, 0.7752924388424999, 0.7773944698885442,
		0.7795022001189185, 0.7816156449856788, 0.7837348199827764, 0.7858597406461707,
		0.7879904225539431, 0.7901268813264122, 0.7922691326262467, 0.7944171921585818,
		0.7965710756711334, 0.7987307989543135, 0.8008963778413465, 0.8030678282083853,
		0.805245165974627, 0.8074284071024302, 0.8096175675974316, 0.8118126635086642,
		0.8140137109286738, 0.8162207259936375, 0.8184337248834821, 0.820652723822003,
		0.8228777390769823, 0.8251087869603088, 0.8273458838280969, 0.8295890460808079,
		0.8318382901633681, 0.8340936325652911, 0.8363550898207981, 0.8386226785089391,
		0.8408964152537144, 0.8431763167241966, 0.8454623996346523, 0.8477546807446661,
		0.8500531768592616, 0.8523579048290255, 0.8546688815502312, 0.8569861239649629,
		0.8593096490612387, 0.8616394738731368, 0.8639756154809185, 0.8663180910111553,
		0.8686669176368529, 0.871022112577578, 0.8733836930995842, 0.8757516765159389,
		0.8781260801866495, 0.8805069215187917, 0.8828942179666361, 0.8852879870317771,
		0.8876882462632604, 0.890095013257712, 0.8925083056594671, 0.8949281411607002,
		0.8973545375015533, 0.8997875124702672, 0.9022270839033115, 0.9046732696855155,
		0.9071260877501991, 0.909585556079304, 0.9120516927035263, 0.9145245157024483,
		0.9170040432046711, 0.9194902933879467, 0.9219832844793128, 0.9244830347552253,
		0.9269895625416926, 0.92950288621441, 0.9320230241988943, 0.9345499949706191,
		0.9370838170551498, 0.93962450902828, 0.9421720895161669, 0.9447265771954693,
		0.9472879907934827, 0.9498563490882775, 0.9524316709088368, 0.9550139751351947,
		0.9576032806985735, 0.9601996065815236, 0.9628029718180622, 0.9654133954938133,
		0.9680308967461471, 0.9706554947643201, 0.9732872087896164, 0.9759260581154889,
		0.9785720620876999, 0.9812252401044634, 0.9838856116165875, 0.9865531961276168,
		0.9892280131939752, 0.9919100824251095, 0.9945994234836328, 0.9972960560854698,
	},
}
112	model/histogram/generic_test.go	Normal file

@@ -0,0 +1,112 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package histogram

import (
	"math"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestGetBound(t *testing.T) {
	scenarios := []struct {
		idx    int32
		schema int32
		want   float64
	}{
		{
			idx:    -1,
			schema: -1,
			want:   0.25,
		},
		{
			idx:    0,
			schema: -1,
			want:   1,
		},
		{
			idx:    1,
			schema: -1,
			want:   4,
		},
		{
			idx:    512,
			schema: -1,
			want:   math.MaxFloat64,
		},
		{
			idx:    513,
			schema: -1,
			want:   math.Inf(+1),
		},
		{
			idx:    -1,
			schema: 0,
			want:   0.5,
		},
		{
			idx:    0,
			schema: 0,
			want:   1,
		},
		{
			idx:    1,
			schema: 0,
			want:   2,
		},
		{
			idx:    1024,
			schema: 0,
			want:   math.MaxFloat64,
		},
		{
			idx:    1025,
			schema: 0,
			want:   math.Inf(+1),
		},
		{
			idx:    -1,
			schema: 2,
			want:   0.8408964152537144,
		},
		{
			idx:    0,
			schema: 2,
			want:   1,
		},
		{
			idx:    1,
			schema: 2,
			want:   1.189207115002721,
		},
		{
			idx:    4096,
			schema: 2,
			want:   math.MaxFloat64,
		},
		{
			idx:    4097,
			schema: 2,
			want:   math.Inf(+1),
		},
	}

	for _, s := range scenarios {
		got := getBound(s.idx, s.schema)
		if s.want != got {
			require.Equal(t, s.want, got, "idx %d, schema %d", s.idx, s.schema)
		}
	}
}
448	model/histogram/histogram.go	Normal file

@@ -0,0 +1,448 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package histogram

import (
	"fmt"
	"math"
	"strings"
)

// Histogram encodes a sparse, high-resolution histogram. See the design
// document for full details:
// https://docs.google.com/document/d/1cLNv3aufPZb3fNfaJgdaRBZsInZKKIHo9E6HinJVbpM/edit#
//
// The most tricky bit is how bucket indices represent real bucket boundaries.
// An example for schema 0 (by which each bucket is twice as wide as the
// previous bucket):
//
//	Bucket boundaries →              [-2,-1)  [-1,-0.5) [-0.5,-0.25) ... [-0.001,0.001] ... (0.25,0.5] (0.5,1]  (1,2] ....
//	                                    ↑        ↑           ↑                  ↑                ↑         ↑      ↑
//	Zero bucket (width e.g. 0.001) →    |        |           |                  ZB               |         |      |
//	Positive bucket indices →           |        |           |                          ...     -1         0      1    2    3
//	Negative bucket indices →  3   2    1        0          -1       ...
//
// Which bucket indices are actually used is determined by the spans.
type Histogram struct {
	// Currently valid schema numbers are -4 <= n <= 8. They are all for
	// base-2 bucket schemas, where 1 is a bucket boundary in each case, and
	// then each power of two is divided into 2^n logarithmic buckets. Or
	// in other words, each bucket boundary is the previous boundary times
	// 2^(2^-n).
	Schema int32
	// Width of the zero bucket.
	ZeroThreshold float64
	// Observations falling into the zero bucket.
	ZeroCount uint64
	// Total number of observations.
	Count uint64
	// Sum of observations. This is also used as the stale marker.
	Sum float64
	// Spans for positive and negative buckets (see Span below).
	PositiveSpans, NegativeSpans []Span
	// Observation counts in buckets. The first element is an absolute
	// count. All following ones are deltas relative to the previous
	// element.
	PositiveBuckets, NegativeBuckets []int64
}

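The delta encoding of PositiveBuckets/NegativeBuckets (first element absolute, the rest deltas) is decoded with a running sum. A minimal standalone sketch, not package code:

```go
package main

import "fmt"

func main() {
	// PositiveBuckets stores the first bucket count absolutely and every
	// following element as a delta to its predecessor, so counts that
	// change little between neighboring buckets encode compactly.
	deltas := []int64{5, -2, 1} // encodes absolute counts 5, 3, 4

	var current int64
	counts := make([]int64, len(deltas))
	for i, d := range deltas {
		current += d // running sum recovers the absolute count
		counts[i] = current
	}
	fmt.Println(counts) // [5 3 4]
}
```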
// A Span defines a continuous sequence of buckets.
type Span struct {
	// Gap to previous span (always positive), or starting index for the 1st
	// span (which can be negative).
	Offset int32
	// Length of the span.
	Length uint32
}

// Copy returns a deep copy of the Histogram.
func (h *Histogram) Copy() *Histogram {
	c := *h

	if len(h.PositiveSpans) != 0 {
		c.PositiveSpans = make([]Span, len(h.PositiveSpans))
		copy(c.PositiveSpans, h.PositiveSpans)
	}
	if len(h.NegativeSpans) != 0 {
		c.NegativeSpans = make([]Span, len(h.NegativeSpans))
		copy(c.NegativeSpans, h.NegativeSpans)
	}
	if len(h.PositiveBuckets) != 0 {
		c.PositiveBuckets = make([]int64, len(h.PositiveBuckets))
		copy(c.PositiveBuckets, h.PositiveBuckets)
	}
	if len(h.NegativeBuckets) != 0 {
		c.NegativeBuckets = make([]int64, len(h.NegativeBuckets))
		copy(c.NegativeBuckets, h.NegativeBuckets)
	}

	return &c
}

// String returns a string representation of the Histogram.
func (h *Histogram) String() string {
	var sb strings.Builder
	fmt.Fprintf(&sb, "{count:%d, sum:%g", h.Count, h.Sum)

	var nBuckets []Bucket[uint64]
	for it := h.NegativeBucketIterator(); it.Next(); {
		bucket := it.At()
		if bucket.Count != 0 {
			nBuckets = append(nBuckets, it.At())
		}
	}
	for i := len(nBuckets) - 1; i >= 0; i-- {
		fmt.Fprintf(&sb, ", %s", nBuckets[i].String())
	}

	if h.ZeroCount != 0 {
		fmt.Fprintf(&sb, ", %s", h.ZeroBucket().String())
	}

	for it := h.PositiveBucketIterator(); it.Next(); {
		bucket := it.At()
		if bucket.Count != 0 {
			fmt.Fprintf(&sb, ", %s", bucket.String())
		}
	}

	sb.WriteRune('}')
	return sb.String()
}

// ZeroBucket returns the zero bucket.
func (h *Histogram) ZeroBucket() Bucket[uint64] {
	return Bucket[uint64]{
		Lower:          -h.ZeroThreshold,
		Upper:          h.ZeroThreshold,
		LowerInclusive: true,
		UpperInclusive: true,
		Count:          h.ZeroCount,
	}
}

// PositiveBucketIterator returns a BucketIterator to iterate over all positive
// buckets in ascending order (starting next to the zero bucket and going up).
func (h *Histogram) PositiveBucketIterator() BucketIterator[uint64] {
	return newRegularBucketIterator(h.PositiveSpans, h.PositiveBuckets, h.Schema, true)
}

// NegativeBucketIterator returns a BucketIterator to iterate over all negative
// buckets in descending order (starting next to the zero bucket and going down).
func (h *Histogram) NegativeBucketIterator() BucketIterator[uint64] {
	return newRegularBucketIterator(h.NegativeSpans, h.NegativeBuckets, h.Schema, false)
}

// CumulativeBucketIterator returns a BucketIterator to iterate over a
// cumulative view of the buckets. This method currently only supports
// Histograms without negative buckets and panics if the Histogram has negative
// buckets. It is currently only used for testing.
func (h *Histogram) CumulativeBucketIterator() BucketIterator[uint64] {
	if len(h.NegativeBuckets) > 0 {
		panic("CumulativeBucketIterator called on Histogram with negative buckets")
	}
	return &cumulativeBucketIterator{h: h, posSpansIdx: -1}
}

// Equals returns true if the given histogram matches exactly.
// Exact match is when there are no new buckets (even empty) and no missing buckets,
// and all the bucket values match. Spans can have different empty length spans in between,
// but they must represent the same bucket layout to match.
func (h *Histogram) Equals(h2 *Histogram) bool {
	if h2 == nil {
		return false
	}

	if h.Schema != h2.Schema || h.ZeroThreshold != h2.ZeroThreshold ||
		h.ZeroCount != h2.ZeroCount || h.Count != h2.Count || h.Sum != h2.Sum {
		return false
	}

	if !spansMatch(h.PositiveSpans, h2.PositiveSpans) {
		return false
	}
	if !spansMatch(h.NegativeSpans, h2.NegativeSpans) {
		return false
	}

	if !bucketsMatch(h.PositiveBuckets, h2.PositiveBuckets) {
		return false
	}
	if !bucketsMatch(h.NegativeBuckets, h2.NegativeBuckets) {
		return false
	}

	return true
}

// spansMatch returns true if both spans represent the same bucket layout
// after combining zero length spans with the next non-zero length span.
func spansMatch(s1, s2 []Span) bool {
	if len(s1) == 0 && len(s2) == 0 {
		return true
	}

	s1idx, s2idx := 0, 0
	for {
		if s1idx >= len(s1) {
			return allEmptySpans(s2[s2idx:])
		}
		if s2idx >= len(s2) {
			return allEmptySpans(s1[s1idx:])
		}

		currS1, currS2 := s1[s1idx], s2[s2idx]
		s1idx++
		s2idx++
		if currS1.Length == 0 {
			// This span is zero length, so we add consecutive such spans
			// until we find a non-zero span.
			for ; s1idx < len(s1) && s1[s1idx].Length == 0; s1idx++ {
				currS1.Offset += s1[s1idx].Offset
			}
			if s1idx < len(s1) {
				currS1.Offset += s1[s1idx].Offset
				currS1.Length = s1[s1idx].Length
				s1idx++
			}
		}
		if currS2.Length == 0 {
			// This span is zero length, so we add consecutive such spans
			// until we find a non-zero span.
			for ; s2idx < len(s2) && s2[s2idx].Length == 0; s2idx++ {
				currS2.Offset += s2[s2idx].Offset
			}
			if s2idx < len(s2) {
				currS2.Offset += s2[s2idx].Offset
				currS2.Length = s2[s2idx].Length
				s2idx++
			}
		}

		if currS1.Length == 0 && currS2.Length == 0 {
			// The last spans of both sets are zero length. Previous spans match.
			return true
		}

		if currS1.Offset != currS2.Offset || currS1.Length != currS2.Length {
			return false
		}
	}
}

func allEmptySpans(s []Span) bool {
	for _, ss := range s {
		if ss.Length > 0 {
			return false
		}
	}
	return true
}

func bucketsMatch(b1, b2 []int64) bool {
	if len(b1) != len(b2) {
		return false
	}
	for i, b := range b1 {
		if b != b2[i] {
			return false
		}
	}
	return true
}

// Compact works like FloatHistogram.Compact. See there for detailed
// explanations.
func (h *Histogram) Compact(maxEmptyBuckets int) *Histogram {
	h.PositiveBuckets, h.PositiveSpans = compactBuckets(
		h.PositiveBuckets, h.PositiveSpans, maxEmptyBuckets, true,
	)
	h.NegativeBuckets, h.NegativeSpans = compactBuckets(
		h.NegativeBuckets, h.NegativeSpans, maxEmptyBuckets, true,
	)
	return h
}

// ToFloat returns a FloatHistogram representation of the Histogram. It is a
// deep copy (e.g. spans are not shared).
func (h *Histogram) ToFloat() *FloatHistogram {
	var (
		positiveSpans, negativeSpans     []Span
		positiveBuckets, negativeBuckets []float64
	)
	if len(h.PositiveSpans) != 0 {
		positiveSpans = make([]Span, len(h.PositiveSpans))
		copy(positiveSpans, h.PositiveSpans)
	}
	if len(h.NegativeSpans) != 0 {
		negativeSpans = make([]Span, len(h.NegativeSpans))
		copy(negativeSpans, h.NegativeSpans)
	}
	if len(h.PositiveBuckets) != 0 {
		positiveBuckets = make([]float64, len(h.PositiveBuckets))
		var current float64
		for i, b := range h.PositiveBuckets {
			current += float64(b)
			positiveBuckets[i] = current
		}
	}
	if len(h.NegativeBuckets) != 0 {
		negativeBuckets = make([]float64, len(h.NegativeBuckets))
		var current float64
		for i, b := range h.NegativeBuckets {
			current += float64(b)
			negativeBuckets[i] = current
		}
	}

	return &FloatHistogram{
		Schema:          h.Schema,
		ZeroThreshold:   h.ZeroThreshold,
		ZeroCount:       float64(h.ZeroCount),
		Count:           float64(h.Count),
		Sum:             h.Sum,
		PositiveSpans:   positiveSpans,
		NegativeSpans:   negativeSpans,
		PositiveBuckets: positiveBuckets,
		NegativeBuckets: negativeBuckets,
	}
}

type regularBucketIterator struct {
	baseBucketIterator[uint64, int64]
}

func newRegularBucketIterator(spans []Span, buckets []int64, schema int32, positive bool) *regularBucketIterator {
	i := baseBucketIterator[uint64, int64]{
		schema:   schema,
		spans:    spans,
		buckets:  buckets,
		positive: positive,
	}
	return &regularBucketIterator{i}
}

func (r *regularBucketIterator) Next() bool {
	if r.spansIdx >= len(r.spans) {
		return false
	}
	span := r.spans[r.spansIdx]
	// Seed currIdx for the first bucket.
	if r.bucketsIdx == 0 {
		r.currIdx = span.Offset
	} else {
		r.currIdx++
	}
	for r.idxInSpan >= span.Length {
		// We have exhausted the current span and have to find a new
		// one. We'll even handle pathologic spans of length 0.
		r.idxInSpan = 0
		r.spansIdx++
		if r.spansIdx >= len(r.spans) {
			return false
		}
		span = r.spans[r.spansIdx]
		r.currIdx += span.Offset
	}

	r.currCount += r.buckets[r.bucketsIdx]
	r.idxInSpan++
	r.bucketsIdx++
	return true
}

type cumulativeBucketIterator struct {
	h *Histogram

	posSpansIdx   int    // Index in h.PositiveSpans we are in. -1 means 0 bucket.
	posBucketsIdx int    // Index in h.PositiveBuckets.
	idxInSpan     uint32 // Index in the current span. 0 <= idxInSpan < span.Length.

	initialized         bool
	currIdx             int32   // The actual bucket index after decoding from spans.
	currUpper           float64 // The upper boundary of the current bucket.
	currCount           int64   // Current non-cumulative count for the current bucket. Does not apply for empty bucket.
	currCumulativeCount uint64  // Current "cumulative" count for the current bucket.

	// Between 2 spans there could be some empty buckets which
	// still need to be counted for cumulative buckets.
	// When we hit the end of a span, we use this to iterate
	// through the empty buckets.
	emptyBucketCount int32
}

func (c *cumulativeBucketIterator) Next() bool {
	if c.posSpansIdx == -1 {
		// Zero bucket.
		c.posSpansIdx++
		if c.h.ZeroCount == 0 {
			return c.Next()
		}

		c.currUpper = c.h.ZeroThreshold
		c.currCount = int64(c.h.ZeroCount)
		c.currCumulativeCount = uint64(c.currCount)
		return true
	}

	if c.posSpansIdx >= len(c.h.PositiveSpans) {
		return false
	}

	if c.emptyBucketCount > 0 {
		// We are traversing through empty buckets at the moment.
		c.currUpper = getBound(c.currIdx, c.h.Schema)
		c.currIdx++
		c.emptyBucketCount--
		return true
	}

	span := c.h.PositiveSpans[c.posSpansIdx]
	if c.posSpansIdx == 0 && !c.initialized {
		// Initializing.
		c.currIdx = span.Offset
		// The first bucket is an absolute value and not a delta with Zero bucket.
		c.currCount = 0
		c.initialized = true
	}

	c.currCount += c.h.PositiveBuckets[c.posBucketsIdx]
	c.currCumulativeCount += uint64(c.currCount)
	c.currUpper = getBound(c.currIdx, c.h.Schema)

	c.posBucketsIdx++
	c.idxInSpan++
	c.currIdx++
	if c.idxInSpan >= span.Length {
		// Move to the next span. This one is done.
		c.posSpansIdx++
		c.idxInSpan = 0
		if c.posSpansIdx < len(c.h.PositiveSpans) {
			c.emptyBucketCount = c.h.PositiveSpans[c.posSpansIdx].Offset
		}
	}

	return true
}

func (c *cumulativeBucketIterator) At() Bucket[uint64] {
|
||||||
|
return Bucket[uint64]{
|
||||||
|
Upper: c.currUpper,
|
||||||
|
Lower: math.Inf(-1),
|
||||||
|
UpperInclusive: true,
|
||||||
|
LowerInclusive: true,
|
||||||
|
Count: c.currCumulativeCount,
|
||||||
|
Index: c.currIdx - 1,
|
||||||
|
}
|
||||||
|
}
782	model/histogram/histogram_test.go	Normal file

@@ -0,0 +1,782 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package histogram

import (
	"fmt"
	"math"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestHistogramString(t *testing.T) {
	cases := []struct {
		histogram      Histogram
		expectedString string
	}{
		{
			histogram: Histogram{
				Schema: 0,
			},
			expectedString: "{count:0, sum:0}",
		},
		{
			histogram: Histogram{
				Schema:        0,
				Count:         9,
				Sum:           -3.1415,
				ZeroCount:     12,
				ZeroThreshold: 0.001,
				NegativeSpans: []Span{
					{Offset: 0, Length: 5},
					{Offset: 1, Length: 1},
				},
				NegativeBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedString: "{count:9, sum:-3.1415, [-64,-32):1, [-16,-8):1, [-8,-4):2, [-4,-2):1, [-2,-1):3, [-1,-0.5):1, [-0.001,0.001]:12}",
		},
		{
			histogram: Histogram{
				Schema: 0,
				Count:  19,
				Sum:    2.7,
				PositiveSpans: []Span{
					{Offset: 0, Length: 4},
					{Offset: 0, Length: 0},
					{Offset: 0, Length: 3},
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
				NegativeSpans: []Span{
					{Offset: 0, Length: 5},
					{Offset: 1, Length: 0},
					{Offset: 0, Length: 1},
				},
				NegativeBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedString: "{count:19, sum:2.7, [-64,-32):1, [-16,-8):1, [-8,-4):2, [-4,-2):1, [-2,-1):3, [-1,-0.5):1, (0.5,1]:1, (1,2]:3, (2,4]:1, (4,8]:2, (8,16]:1, (16,32]:1, (32,64]:1}",
		},
	}

	for i, c := range cases {
		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
			actualString := c.histogram.String()
			require.Equal(t, c.expectedString, actualString)
		})
	}
}

func TestCumulativeBucketIterator(t *testing.T) {
	cases := []struct {
		histogram       Histogram
		expectedBuckets []Bucket[uint64]
	}{
		{
			histogram: Histogram{
				Schema: 0,
				PositiveSpans: []Span{
					{Offset: 0, Length: 2},
					{Offset: 1, Length: 2},
				},
				PositiveBuckets: []int64{1, 1, -1, 0},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 1, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 2, Count: 3, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 4, Count: 3, LowerInclusive: true, UpperInclusive: true, Index: 2},
				{Lower: math.Inf(-1), Upper: 8, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: 3},
				{Lower: math.Inf(-1), Upper: 16, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: 4},
			},
		},
		{
			histogram: Histogram{
				Schema: 0,
				PositiveSpans: []Span{
					{Offset: 0, Length: 5},
					{Offset: 1, Length: 1},
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 1, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 2, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 4, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: 2},
				{Lower: math.Inf(-1), Upper: 8, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 3},
				{Lower: math.Inf(-1), Upper: 16, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 4},
				{Lower: math.Inf(-1), Upper: 32, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 5},
				{Lower: math.Inf(-1), Upper: 64, Count: 9, LowerInclusive: true, UpperInclusive: true, Index: 6},
			},
		},
		{
			histogram: Histogram{
				Schema: 0,
				PositiveSpans: []Span{
					{Offset: 0, Length: 7},
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 1, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 2, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 4, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: 2},
				{Lower: math.Inf(-1), Upper: 8, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 3},
				{Lower: math.Inf(-1), Upper: 16, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 4},
				{Lower: math.Inf(-1), Upper: 32, Count: 9, LowerInclusive: true, UpperInclusive: true, Index: 5},
				{Lower: math.Inf(-1), Upper: 64, Count: 10, LowerInclusive: true, UpperInclusive: true, Index: 6},
			},
		},
		{
			histogram: Histogram{
				Schema: 3,
				PositiveSpans: []Span{
					{Offset: -5, Length: 2}, // -5 -4
					{Offset: 2, Length: 3},  // -1 0 1
					{Offset: 2, Length: 2},  // 4 5
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 3},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 0.6484197773255048, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: -5},
				{Lower: math.Inf(-1), Upper: 0.7071067811865475, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: -4},
				{Lower: math.Inf(-1), Upper: 0.7711054127039704, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: -3},
				{Lower: math.Inf(-1), Upper: 0.8408964152537144, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: -2},
				{Lower: math.Inf(-1), Upper: 0.9170040432046711, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: -1},
				{Lower: math.Inf(-1), Upper: 1, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 1.0905077326652577, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 1.189207115002721, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 2},
				{Lower: math.Inf(-1), Upper: 1.2968395546510096, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 3},
				{Lower: math.Inf(-1), Upper: 1.414213562373095, Count: 9, LowerInclusive: true, UpperInclusive: true, Index: 4},
				{Lower: math.Inf(-1), Upper: 1.5422108254079407, Count: 13, LowerInclusive: true, UpperInclusive: true, Index: 5},
			},
		},
		{
			histogram: Histogram{
				Schema: -2,
				PositiveSpans: []Span{
					{Offset: -2, Length: 4}, // -2 -1 0 1
					{Offset: 2, Length: 2},  // 4 5
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 0.00390625, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: -2},
				{Lower: math.Inf(-1), Upper: 0.0625, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: -1},
				{Lower: math.Inf(-1), Upper: 1, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 16, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 256, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 2},
				{Lower: math.Inf(-1), Upper: 4096, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 3},
				{Lower: math.Inf(-1), Upper: 65536, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 4},
				{Lower: math.Inf(-1), Upper: 1048576, Count: 9, LowerInclusive: true, UpperInclusive: true, Index: 5},
			},
		},
		{
			histogram: Histogram{
				Schema: -1,
				PositiveSpans: []Span{
					{Offset: -2, Length: 5}, // -2 -1 0 1 2
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1},
			},
			expectedBuckets: []Bucket[uint64]{
				{Lower: math.Inf(-1), Upper: 0.0625, Count: 1, LowerInclusive: true, UpperInclusive: true, Index: -2},
				{Lower: math.Inf(-1), Upper: 0.25, Count: 4, LowerInclusive: true, UpperInclusive: true, Index: -1},
				{Lower: math.Inf(-1), Upper: 1, Count: 5, LowerInclusive: true, UpperInclusive: true, Index: 0},
				{Lower: math.Inf(-1), Upper: 4, Count: 7, LowerInclusive: true, UpperInclusive: true, Index: 1},
				{Lower: math.Inf(-1), Upper: 16, Count: 8, LowerInclusive: true, UpperInclusive: true, Index: 2},
			},
		},
	}

	for i, c := range cases {
		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
			it := c.histogram.CumulativeBucketIterator()
			actualBuckets := make([]Bucket[uint64], 0, len(c.expectedBuckets))
			for it.Next() {
				actualBuckets = append(actualBuckets, it.At())
			}
			require.Equal(t, c.expectedBuckets, actualBuckets)
		})
	}
}

func TestRegularBucketIterator(t *testing.T) {
	cases := []struct {
		histogram               Histogram
		expectedPositiveBuckets []Bucket[uint64]
		expectedNegativeBuckets []Bucket[uint64]
	}{
		{
			histogram: Histogram{
				Schema: 0,
			},
			expectedPositiveBuckets: []Bucket[uint64]{},
			expectedNegativeBuckets: []Bucket[uint64]{},
		},
		{
			histogram: Histogram{
				Schema: 0,
				PositiveSpans: []Span{
					{Offset: 0, Length: 2},
					{Offset: 1, Length: 2},
				},
				PositiveBuckets: []int64{1, 1, -1, 0},
			},
			expectedPositiveBuckets: []Bucket[uint64]{
				{Lower: 0.5, Upper: 1, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 0},
				{Lower: 1, Upper: 2, Count: 2, LowerInclusive: false, UpperInclusive: true, Index: 1},
				{Lower: 4, Upper: 8, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 3},
				{Lower: 8, Upper: 16, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 4},
			},
			expectedNegativeBuckets: []Bucket[uint64]{},
		},
		{
			histogram: Histogram{
				Schema: 0,
				NegativeSpans: []Span{
					{Offset: 0, Length: 5},
					{Offset: 1, Length: 1},
				},
				NegativeBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedPositiveBuckets: []Bucket[uint64]{},
			expectedNegativeBuckets: []Bucket[uint64]{
				{Lower: -1, Upper: -0.5, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 0},
				{Lower: -2, Upper: -1, Count: 3, LowerInclusive: true, UpperInclusive: false, Index: 1},
				{Lower: -4, Upper: -2, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 2},
				{Lower: -8, Upper: -4, Count: 2, LowerInclusive: true, UpperInclusive: false, Index: 3},
				{Lower: -16, Upper: -8, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 4},
				{Lower: -64, Upper: -32, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 6},
			},
		},
		{
			histogram: Histogram{
				Schema: 0,
				PositiveSpans: []Span{
					{Offset: 0, Length: 4},
					{Offset: 0, Length: 0},
					{Offset: 0, Length: 3},
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
				NegativeSpans: []Span{
					{Offset: 0, Length: 5},
					{Offset: 1, Length: 0},
					{Offset: 0, Length: 1},
				},
				NegativeBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedPositiveBuckets: []Bucket[uint64]{
				{Lower: 0.5, Upper: 1, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 0},
				{Lower: 1, Upper: 2, Count: 3, LowerInclusive: false, UpperInclusive: true, Index: 1},
				{Lower: 2, Upper: 4, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 2},
				{Lower: 4, Upper: 8, Count: 2, LowerInclusive: false, UpperInclusive: true, Index: 3},
				{Lower: 8, Upper: 16, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 4},
				{Lower: 16, Upper: 32, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 5},
				{Lower: 32, Upper: 64, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 6},
			},
			expectedNegativeBuckets: []Bucket[uint64]{
				{Lower: -1, Upper: -0.5, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 0},
				{Lower: -2, Upper: -1, Count: 3, LowerInclusive: true, UpperInclusive: false, Index: 1},
				{Lower: -4, Upper: -2, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 2},
				{Lower: -8, Upper: -4, Count: 2, LowerInclusive: true, UpperInclusive: false, Index: 3},
				{Lower: -16, Upper: -8, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 4},
				{Lower: -64, Upper: -32, Count: 1, LowerInclusive: true, UpperInclusive: false, Index: 6},
			},
		},
		{
			histogram: Histogram{
				Schema: 3,
				PositiveSpans: []Span{
					{Offset: -5, Length: 2}, // -5 -4
					{Offset: 2, Length: 3},  // -1 0 1
					{Offset: 2, Length: 2},  // 4 5
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 3},
			},
			expectedPositiveBuckets: []Bucket[uint64]{
				{Lower: 0.5946035575013605, Upper: 0.6484197773255048, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: -5},
				{Lower: 0.6484197773255048, Upper: 0.7071067811865475, Count: 3, LowerInclusive: false, UpperInclusive: true, Index: -4},
				{Lower: 0.8408964152537144, Upper: 0.9170040432046711, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: -1},
				{Lower: 0.9170040432046711, Upper: 1, Count: 2, LowerInclusive: false, UpperInclusive: true, Index: 0},
				{Lower: 1, Upper: 1.0905077326652577, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 1},
				{Lower: 1.2968395546510096, Upper: 1.414213562373095, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 4},
				{Lower: 1.414213562373095, Upper: 1.5422108254079407, Count: 4, LowerInclusive: false, UpperInclusive: true, Index: 5},
			},
			expectedNegativeBuckets: []Bucket[uint64]{},
		},
		{
			histogram: Histogram{
				Schema: -2,
				PositiveSpans: []Span{
					{Offset: -2, Length: 4}, // -2 -1 0 1
					{Offset: 2, Length: 2},  // 4 5
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1, 0},
			},
			expectedPositiveBuckets: []Bucket[uint64]{
				{Lower: 0.000244140625, Upper: 0.00390625, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: -2},
				{Lower: 0.00390625, Upper: 0.0625, Count: 3, LowerInclusive: false, UpperInclusive: true, Index: -1},
				{Lower: 0.0625, Upper: 1, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 0},
				{Lower: 1, Upper: 16, Count: 2, LowerInclusive: false, UpperInclusive: true, Index: 1},
				{Lower: 4096, Upper: 65536, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 4},
				{Lower: 65536, Upper: 1048576, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 5},
			},
			expectedNegativeBuckets: []Bucket[uint64]{},
		},
		{
			histogram: Histogram{
				Schema: -1,
				PositiveSpans: []Span{
					{Offset: -2, Length: 5}, // -2 -1 0 1 2
				},
				PositiveBuckets: []int64{1, 2, -2, 1, -1},
			},
			expectedPositiveBuckets: []Bucket[uint64]{
				{Lower: 0.015625, Upper: 0.0625, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: -2},
				{Lower: 0.0625, Upper: 0.25, Count: 3, LowerInclusive: false, UpperInclusive: true, Index: -1},
				{Lower: 0.25, Upper: 1, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 0},
				{Lower: 1, Upper: 4, Count: 2, LowerInclusive: false, UpperInclusive: true, Index: 1},
				{Lower: 4, Upper: 16, Count: 1, LowerInclusive: false, UpperInclusive: true, Index: 2},
			},
			expectedNegativeBuckets: []Bucket[uint64]{},
		},
	}

	for i, c := range cases {
		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
			it := c.histogram.PositiveBucketIterator()
			actualPositiveBuckets := make([]Bucket[uint64], 0, len(c.expectedPositiveBuckets))
			for it.Next() {
				actualPositiveBuckets = append(actualPositiveBuckets, it.At())
			}
			require.Equal(t, c.expectedPositiveBuckets, actualPositiveBuckets)
			it = c.histogram.NegativeBucketIterator()
			actualNegativeBuckets := make([]Bucket[uint64], 0, len(c.expectedNegativeBuckets))
			for it.Next() {
				actualNegativeBuckets = append(actualNegativeBuckets, it.At())
			}
			require.Equal(t, c.expectedNegativeBuckets, actualNegativeBuckets)
		})
	}
}

func TestHistogramToFloat(t *testing.T) {
	h := Histogram{
		Schema:        3,
		Count:         61,
		Sum:           2.7,
		ZeroThreshold: 0.1,
		ZeroCount:     42,
		PositiveSpans: []Span{
			{Offset: 0, Length: 4},
			{Offset: 0, Length: 0},
			{Offset: 0, Length: 3},
		},
		PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
		NegativeSpans: []Span{
			{Offset: 0, Length: 5},
			{Offset: 1, Length: 0},
			{Offset: 0, Length: 1},
		},
		NegativeBuckets: []int64{1, 2, -2, 1, -1, 0},
	}
	fh := h.ToFloat()

	require.Equal(t, h.String(), fh.String())
}

func TestHistogramMatches(t *testing.T) {
	h1 := Histogram{
		Schema:        3,
		Count:         61,
		Sum:           2.7,
		ZeroThreshold: 0.1,
		ZeroCount:     42,
		PositiveSpans: []Span{
			{Offset: 0, Length: 4},
			{Offset: 10, Length: 3},
		},
		PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
		NegativeSpans: []Span{
			{Offset: 0, Length: 4},
			{Offset: 10, Length: 3},
		},
		NegativeBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
	}

	h2 := h1.Copy()
	require.True(t, h1.Equals(h2))

	// Changed spans but same layout.
	h2.PositiveSpans = append(h2.PositiveSpans, Span{Offset: 5})
	h2.NegativeSpans = append(h2.NegativeSpans, Span{Offset: 2})
	require.True(t, h1.Equals(h2))
	require.True(t, h2.Equals(&h1))
	// Adding empty spans in between.
	h2.PositiveSpans[1].Offset = 6
	h2.PositiveSpans = []Span{
		h2.PositiveSpans[0],
		{Offset: 1},
		{Offset: 3},
		h2.PositiveSpans[1],
		h2.PositiveSpans[2],
	}
	h2.NegativeSpans[1].Offset = 5
	h2.NegativeSpans = []Span{
		h2.NegativeSpans[0],
		{Offset: 2},
		{Offset: 3},
		h2.NegativeSpans[1],
		h2.NegativeSpans[2],
	}
	require.True(t, h1.Equals(h2))
	require.True(t, h2.Equals(&h1))

	// All mismatches.
	require.False(t, h1.Equals(nil))

	h2.Schema = 1
	require.False(t, h1.Equals(h2))

	h2 = h1.Copy()
	h2.Count++
	require.False(t, h1.Equals(h2))

	h2 = h1.Copy()
	h2.Sum++
	require.False(t, h1.Equals(h2))

	h2 = h1.Copy()
	h2.ZeroThreshold++
	require.False(t, h1.Equals(h2))

	h2 = h1.Copy()
	h2.ZeroCount++
	require.False(t, h1.Equals(h2))

	// Changing the value of buckets.
	h2 = h1.Copy()
	h2.PositiveBuckets[len(h2.PositiveBuckets)-1]++
	require.False(t, h1.Equals(h2))
	h2 = h1.Copy()
	h2.NegativeBuckets[len(h2.NegativeBuckets)-1]++
	require.False(t, h1.Equals(h2))

	// Changing the bucket layout.
	h2 = h1.Copy()
	h2.PositiveSpans[1].Offset++
	require.False(t, h1.Equals(h2))
	h2 = h1.Copy()
	h2.NegativeSpans[1].Offset++
	require.False(t, h1.Equals(h2))

	// Adding an empty bucket.
	h2 = h1.Copy()
	h2.PositiveSpans[0].Offset--
	h2.PositiveSpans[0].Length++
	h2.PositiveBuckets = append([]int64{0}, h2.PositiveBuckets...)
	require.False(t, h1.Equals(h2))
	h2 = h1.Copy()
	h2.NegativeSpans[0].Offset--
	h2.NegativeSpans[0].Length++
	h2.NegativeBuckets = append([]int64{0}, h2.NegativeBuckets...)
	require.False(t, h1.Equals(h2))

	// Adding a new bucket.
	h2 = h1.Copy()
	h2.PositiveSpans = append(h2.PositiveSpans, Span{
		Offset: 1,
		Length: 1,
	})
	h2.PositiveBuckets = append(h2.PositiveBuckets, 1)
	require.False(t, h1.Equals(h2))
	h2 = h1.Copy()
	h2.NegativeSpans = append(h2.NegativeSpans, Span{
		Offset: 1,
		Length: 1,
	})
	h2.NegativeBuckets = append(h2.NegativeBuckets, 1)
	require.False(t, h1.Equals(h2))
}

func TestHistogramCompact(t *testing.T) {
	cases := []struct {
		name            string
		in              *Histogram
		maxEmptyBuckets int
		expected        *Histogram
	}{
		{
			"empty histogram",
			&Histogram{},
			0,
			&Histogram{},
		},
		{
			"nothing should happen",
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {2, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42},
				NegativeSpans:   []Span{{3, 2}, {3, 2}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {2, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42},
				NegativeSpans:   []Span{{3, 2}, {3, 2}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000},
			},
		},
		{
			"eliminate zero offsets",
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {0, 3}, {0, 1}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {0, 2}, {2, 1}, {0, 1}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000, 3, 4},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 5}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 4}, {2, 2}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000, 3, 4},
			},
		},
		{
			"eliminate zero length",
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {2, 0}, {3, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {0, 0}, {2, 0}, {1, 4}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000, 3, 4},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {5, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 4}},
				NegativeBuckets: []int64{5, 3, 1.234e5, 1000, 3, 4},
			},
		},
		{
			"eliminate multiple zero length spans",
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {2, 0}, {2, 0}, {2, 0}, {3, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {9, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
			},
		},
		{
			"cut empty buckets at start or end",
			&Histogram{
				PositiveSpans:   []Span{{-4, 4}, {5, 3}},
				PositiveBuckets: []int64{0, 0, 1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, 3, 4, -9},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {5, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 4}},
				NegativeBuckets: []int64{5, 3, -4, -2, 3, 4},
			},
		},
		{
			"cut empty buckets at start and end",
			&Histogram{
				PositiveSpans:   []Span{{-4, 4}, {5, 6}},
				PositiveBuckets: []int64{0, 0, 1, 3, -3, 42, 3, -46, 0, 0},
				NegativeSpans:   []Span{{-2, 4}, {3, 5}},
				NegativeBuckets: []int64{0, 0, 5, 3, -4, -2, 3, 4, -9},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {5, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 4}},
				NegativeBuckets: []int64{5, 3, -4, -2, 3, 4},
			},
		},
		{
			"cut empty buckets at start or end of spans, even in the middle",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 6}},
				PositiveBuckets: []int64{0, 0, 1, 3, -4, 0, 1, 42, 3, -46, 0, 0},
				NegativeSpans:   []Span{{0, 2}, {2, 6}},
				NegativeBuckets: []int64{5, 3, -8, 4, -2, 3, 4, -9},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 2}, {5, 3}},
				PositiveBuckets: []int64{1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 4}},
				NegativeBuckets: []int64{5, 3, -4, -2, 3, 4},
			},
		},
		{
			"cut empty buckets at start or end but merge spans due to maxEmptyBuckets",
			&Histogram{
				PositiveSpans:   []Span{{-4, 4}, {5, 3}},
				PositiveBuckets: []int64{0, 0, 1, 3, -3, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, 3, 4, -9},
			},
			10,
			&Histogram{
				PositiveSpans:   []Span{{-2, 10}},
				PositiveBuckets: []int64{1, 3, -4, 0, 0, 0, 0, 1, 42, 3},
				NegativeSpans:   []Span{{0, 9}},
				NegativeBuckets: []int64{5, 3, -8, 0, 0, 4, -2, 3, 4},
			},
		},
		{
			"cut empty buckets from the middle of a span",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 1, -1, 0, 3, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {2, 1}, {3, 3}},
				PositiveBuckets: []int64{1, 2, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 2}, {1, 2}},
				NegativeBuckets: []int64{5, 3, -4, -2, 1, 4},
			},
		},
		{
			"cut out a span containing only empty buckets",
			&Histogram{
				PositiveSpans:   []Span{{-4, 3}, {2, 2}, {3, 4}},
				PositiveBuckets: []int64{0, 0, 1, -1, 0, 3, -2, 42, 3},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {7, 4}},
				PositiveBuckets: []int64{1, 2, -2, 42, 3},
			},
		},
		{
			"cut empty buckets from the middle of a span, avoiding some due to maxEmptyBuckets",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 1, -1, 0, 3, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
			1,
			&Histogram{
				PositiveSpans:   []Span{{-2, 1}, {2, 1}, {3, 3}},
				PositiveBuckets: []int64{1, 2, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
		},
		{
			"avoiding all cutting of empty buckets from the middle of a chunk due to maxEmptyBuckets",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 1, -1, 0, 3, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
			2,
			&Histogram{
				PositiveSpans:   []Span{{-2, 4}, {3, 3}},
				PositiveBuckets: []int64{1, -1, 0, 3, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
		},
		{
			"everything merged into one span due to maxEmptyBuckets",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 1, -1, 0, 3, -2, 42, 3},
				NegativeSpans:   []Span{{0, 2}, {3, 5}},
				NegativeBuckets: []int64{5, 3, -4, -2, -2, 3, 4},
			},
			3,
			&Histogram{
				PositiveSpans:   []Span{{-2, 10}},
				PositiveBuckets: []int64{1, -1, 0, 3, -3, 0, 0, 1, 42, 3},
				NegativeSpans:   []Span{{0, 10}},
				NegativeBuckets: []int64{5, 3, -8, 0, 0, 4, -2, -2, 3, 4},
			},
		},
		{
			"only empty buckets and maxEmptyBuckets greater than zero",
			&Histogram{
				PositiveSpans:   []Span{{-4, 6}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 0, 0, 0, 0, 0, 0, 0},
				NegativeSpans:   []Span{{0, 7}},
				NegativeBuckets: []int64{0, 0, 0, 0, 0, 0, 0},
			},
			3,
			&Histogram{
				PositiveSpans:   []Span{},
				PositiveBuckets: []int64{},
				NegativeSpans:   []Span{},
				NegativeBuckets: []int64{},
			},
		},
		{
			"multiple spans of only empty buckets",
			&Histogram{
				PositiveSpans:   []Span{{-10, 2}, {2, 1}, {3, 3}},
				PositiveBuckets: []int64{0, 0, 0, 0, 2, 3},
				NegativeSpans:   []Span{{-10, 2}, {2, 1}, {3, 3}},
				NegativeBuckets: []int64{2, 3, -5, 0, 0, 0},
			},
			0,
			&Histogram{
				PositiveSpans:   []Span{{-1, 2}},
				PositiveBuckets: []int64{2, 3},
				NegativeSpans:   []Span{{-10, 2}},
				NegativeBuckets: []int64{2, 3},
			},
		},
	}

	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			require.Equal(t, c.expected, c.in.Compact(c.maxEmptyBuckets))
			// Compact has happened in place, too.
			require.Equal(t, c.expected, c.in)
		})
	}
}
|
|
@@ -45,6 +45,10 @@ const (
 	Keep Action = "keep"
 	// Drop drops targets for which the input does match the regex.
 	Drop Action = "drop"
+	// KeepEqual drops targets for which the input does not match the target.
+	KeepEqual Action = "keepequal"
+	// DropEqual drops targets for which the input does match the target.
+	DropEqual Action = "dropequal"
 	// HashMod sets a label to the modulus of a hash of labels.
 	HashMod Action = "hashmod"
 	// LabelMap copies labels to other labelnames based on a regex.
@@ -66,7 +70,7 @@ func (a *Action) UnmarshalYAML(unmarshal func(interface{}) error) error {
 		return err
 	}
 	switch act := Action(strings.ToLower(s)); act {
-	case Replace, Keep, Drop, HashMod, LabelMap, LabelDrop, LabelKeep, Lowercase, Uppercase:
+	case Replace, Keep, Drop, HashMod, LabelMap, LabelDrop, LabelKeep, Lowercase, Uppercase, KeepEqual, DropEqual:
 		*a = act
 		return nil
 	}
@@ -109,13 +113,13 @@ func (c *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	if c.Modulus == 0 && c.Action == HashMod {
 		return fmt.Errorf("relabel configuration for hashmod requires non-zero modulus")
 	}
-	if (c.Action == Replace || c.Action == HashMod || c.Action == Lowercase || c.Action == Uppercase) && c.TargetLabel == "" {
+	if (c.Action == Replace || c.Action == HashMod || c.Action == Lowercase || c.Action == Uppercase || c.Action == KeepEqual || c.Action == DropEqual) && c.TargetLabel == "" {
 		return fmt.Errorf("relabel configuration for %s action requires 'target_label' value", c.Action)
 	}
-	if (c.Action == Replace || c.Action == Lowercase || c.Action == Uppercase) && !relabelTarget.MatchString(c.TargetLabel) {
+	if (c.Action == Replace || c.Action == Lowercase || c.Action == Uppercase || c.Action == KeepEqual || c.Action == DropEqual) && !relabelTarget.MatchString(c.TargetLabel) {
 		return fmt.Errorf("%q is invalid 'target_label' for %s action", c.TargetLabel, c.Action)
 	}
-	if (c.Action == Lowercase || c.Action == Uppercase) && c.Replacement != DefaultRelabelConfig.Replacement {
+	if (c.Action == Lowercase || c.Action == Uppercase || c.Action == KeepEqual || c.Action == DropEqual) && c.Replacement != DefaultRelabelConfig.Replacement {
 		return fmt.Errorf("'replacement' can not be set for %s action", c.Action)
 	}
 	if c.Action == LabelMap && !relabelTarget.MatchString(c.Replacement) {
@@ -125,6 +129,15 @@ func (c *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
 		return fmt.Errorf("%q is invalid 'target_label' for %s action", c.TargetLabel, c.Action)
 	}
 
+	if c.Action == DropEqual || c.Action == KeepEqual {
+		if c.Regex != DefaultRelabelConfig.Regex ||
+			c.Modulus != DefaultRelabelConfig.Modulus ||
+			c.Separator != DefaultRelabelConfig.Separator ||
+			c.Replacement != DefaultRelabelConfig.Replacement {
+			return fmt.Errorf("%s action requires only 'source_labels' and `target_label`, and no other fields", c.Action)
+		}
+	}
+
 	if c.Action == LabelDrop || c.Action == LabelKeep {
 		if c.SourceLabels != nil ||
 			c.TargetLabel != DefaultRelabelConfig.TargetLabel ||
@@ -225,6 +238,14 @@ func relabel(lset labels.Labels, cfg *Config, lb *labels.Builder) labels.Labels
 		if !cfg.Regex.MatchString(val) {
 			return nil
 		}
+	case DropEqual:
+		if lset.Get(cfg.TargetLabel) == val {
+			return nil
+		}
+	case KeepEqual:
+		if lset.Get(cfg.TargetLabel) != val {
+			return nil
+		}
 	case Replace:
 		indexes := cfg.Regex.FindStringSubmatchIndex(val)
 		// If there is no match no replacement must take place.
@@ -451,6 +451,74 @@ func TestRelabel(t *testing.T) {
 				"foo_uppercase": "BAR123FOO",
 			}),
 		},
+		{
+			input: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+			relabel: []*Config{
+				{
+					SourceLabels: model.LabelNames{"__tmp_port"},
+					Action:       KeepEqual,
+					TargetLabel:  "__port1",
+				},
+			},
+			output: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+		},
+		{
+			input: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+			relabel: []*Config{
+				{
+					SourceLabels: model.LabelNames{"__tmp_port"},
+					Action:       DropEqual,
+					TargetLabel:  "__port1",
+				},
+			},
+			output: nil,
+		},
+		{
+			input: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+			relabel: []*Config{
+				{
+					SourceLabels: model.LabelNames{"__tmp_port"},
+					Action:       DropEqual,
+					TargetLabel:  "__port2",
+				},
+			},
+			output: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+		},
+		{
+			input: labels.FromMap(map[string]string{
+				"__tmp_port": "1234",
+				"__port1":    "1234",
+				"__port2":    "5678",
+			}),
+			relabel: []*Config{
+				{
+					SourceLabels: model.LabelNames{"__tmp_port"},
+					Action:       KeepEqual,
+					TargetLabel:  "__port2",
+				},
+			},
+			output: nil,
+		},
 	}
 
 	for _, test := range tests {
@@ -17,16 +17,23 @@ import (
 	"mime"
 
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 )
 
 // Parser parses samples from a byte slice of samples in the official
 // Prometheus and OpenMetrics text exposition formats.
 type Parser interface {
-	// Series returns the bytes of the series, the timestamp if set, and the value
-	// of the current sample.
+	// Series returns the bytes of a series with a simple float64 as a
+	// value, the timestamp if set, and the value of the current sample.
 	Series() ([]byte, *int64, float64)
 
+	// Histogram returns the bytes of a series with a sparse histogram as a
+	// value, the timestamp if set, and the histogram in the current sample.
+	// Depending on the parsed input, the function returns an (integer) Histogram
+	// or a FloatHistogram, with the respective other return value being nil.
+	Histogram() ([]byte, *int64, *histogram.Histogram, *histogram.FloatHistogram)
+
 	// Help returns the metric name and help text in the current entry.
 	// Must only be called after Next returned a help entry.
 	// The returned byte slices become invalid after the next call to Next.
@@ -70,22 +77,30 @@ func New(b []byte, contentType string) (Parser, error) {
 	}
 
 	mediaType, _, err := mime.ParseMediaType(contentType)
-	if err == nil && mediaType == "application/openmetrics-text" {
-		return NewOpenMetricsParser(b), nil
+	if err != nil {
+		return NewPromParser(b), err
+	}
+	switch mediaType {
+	case "application/openmetrics-text":
+		return NewOpenMetricsParser(b), nil
+	case "application/vnd.google.protobuf":
+		return NewProtobufParser(b), nil
+	default:
+		return NewPromParser(b), nil
 	}
-	return NewPromParser(b), err
 }
 
 // Entry represents the type of a parsed entry.
 type Entry int
 
 const (
 	EntryInvalid Entry = -1
 	EntryType    Entry = 0
 	EntryHelp    Entry = 1
-	EntrySeries  Entry = 2
+	EntrySeries  Entry = 2 // A series with a simple float64 as value.
 	EntryComment Entry = 3
 	EntryUnit    Entry = 4
+	EntryHistogram Entry = 5 // A series with a sparse histogram as a value.
 )
 
 // MetricType represents metric type values.
@@ -27,6 +27,7 @@ import (
 	"unicode/utf8"
 
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/value"
 )
@@ -112,6 +113,12 @@ func (p *OpenMetricsParser) Series() ([]byte, *int64, float64) {
 	return p.series, nil, p.val
 }
 
+// Histogram always returns (nil, nil, nil, nil) because OpenMetrics does not support
+// sparse histograms.
+func (p *OpenMetricsParser) Histogram() ([]byte, *int64, *histogram.Histogram, *histogram.FloatHistogram) {
+	return nil, nil, nil, nil
+}
+
 // Help returns the metric name and help text in the current entry.
 // Must only be called after Next returned a help entry.
 // The returned byte slices become invalid after the next call to Next.
@@ -237,9 +237,7 @@ foo_total 17.0 1520879607.789 # {xx="yy"} 5`
 		p.Metric(&res)
 		found := p.Exemplar(&e)
 		require.Equal(t, exp[i].m, string(m))
-		if e.HasTs {
-			require.Equal(t, exp[i].t, ts)
-		}
+		require.Equal(t, exp[i].t, ts)
 		require.Equal(t, exp[i].v, v)
 		require.Equal(t, exp[i].lset, res)
 		if exp[i].e == nil {
@@ -28,6 +28,7 @@ import (
 	"unsafe"
 
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/value"
 )
@@ -167,6 +168,12 @@ func (p *PromParser) Series() ([]byte, *int64, float64) {
 	return p.series, nil, p.val
 }
 
+// Histogram always returns (nil, nil, nil, nil) because the Prometheus text format
+// does not support sparse histograms.
+func (p *PromParser) Histogram() ([]byte, *int64, *histogram.Histogram, *histogram.FloatHistogram) {
+	return nil, nil, nil, nil
+}
+
 // Help returns the metric name and help text in the current entry.
 // Must only be called after Next returned a help entry.
 // The returned byte slices become invalid after the next call to Next.
518
model/textparse/protobufparse.go
Normal file
@@ -0,0 +1,518 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package textparse

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"math"
	"sort"
	"strings"
	"unicode/utf8"

	"github.com/gogo/protobuf/proto"
	"github.com/pkg/errors"
	"github.com/prometheus/common/model"

	"github.com/prometheus/prometheus/model/exemplar"
	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"

	dto "github.com/prometheus/prometheus/prompb/io/prometheus/client"
)

// ProtobufParser is a very inefficient way of unmarshaling the old Prometheus
// protobuf format and then presenting it as if it were parsed by a
// Prometheus-2-style text parser. This is only done so that we can easily plug
// in the protobuf format into Prometheus 2. For future use (with the final
// format that will be used for native histograms), we have to revisit the
// parsing. A lot of the efficiency tricks of the Prometheus-2-style parsing
// could be used in a similar fashion (byte-slice pointers into the raw
// payload), which requires some hand-coded protobuf handling. But the current
// parsers all expect the full series name (metric name plus label pairs) as one
// string, which is not how things are represented in the protobuf format. If
// the re-arrangement work is actually causing problems (which has to be seen),
// that expectation needs to be changed.
type ProtobufParser struct {
	in        []byte // The input to parse.
	inPos     int    // Position within the input.
	metricPos int    // Position within Metric slice.
	// fieldPos is the position within a Summary or (legacy) Histogram. -2
	// is the count. -1 is the sum. Otherwise it is the index within
	// quantiles/buckets.
	fieldPos   int
	fieldsDone bool // true if no more fields of a Summary or (legacy) Histogram to be processed.
	// state is marked by the entry we are processing. EntryInvalid implies
	// that we have to decode the next MetricFamily.
	state Entry

	mf *dto.MetricFamily

	// The following are just shenanigans to satisfy the Parser interface.
	metricBytes *bytes.Buffer // A somewhat fluid representation of the current metric.
}

// NewProtobufParser returns a parser for the payload in the byte slice.
func NewProtobufParser(b []byte) Parser {
	return &ProtobufParser{
		in:          b,
		state:       EntryInvalid,
		mf:          &dto.MetricFamily{},
		metricBytes: &bytes.Buffer{},
	}
}
// Series returns the bytes of a series with a simple float64 as a
// value, the timestamp if set, and the value of the current sample.
func (p *ProtobufParser) Series() ([]byte, *int64, float64) {
	var (
		m  = p.mf.GetMetric()[p.metricPos]
		ts = m.GetTimestampMs()
		v  float64
	)
	switch p.mf.GetType() {
	case dto.MetricType_COUNTER:
		v = m.GetCounter().GetValue()
	case dto.MetricType_GAUGE:
		v = m.GetGauge().GetValue()
	case dto.MetricType_UNTYPED:
		v = m.GetUntyped().GetValue()
	case dto.MetricType_SUMMARY:
		s := m.GetSummary()
		switch p.fieldPos {
		case -2:
			v = float64(s.GetSampleCount())
		case -1:
			v = s.GetSampleSum()
			// Need to detect summaries without quantile here.
			if len(s.GetQuantile()) == 0 {
				p.fieldsDone = true
			}
		default:
			v = s.GetQuantile()[p.fieldPos].GetValue()
		}
	case dto.MetricType_HISTOGRAM:
		// This should only happen for a legacy histogram.
		h := m.GetHistogram()
		switch p.fieldPos {
		case -2:
			v = float64(h.GetSampleCount())
		case -1:
			v = h.GetSampleSum()
		default:
			bb := h.GetBucket()
			if p.fieldPos >= len(bb) {
				v = float64(h.GetSampleCount())
			} else {
				v = float64(bb[p.fieldPos].GetCumulativeCount())
			}
		}
	default:
		panic("encountered unexpected metric type, this is a bug")
	}
	if ts != 0 {
		return p.metricBytes.Bytes(), &ts, v
	}
	// Nasty hack: Assume that ts==0 means no timestamp. That's not true in
	// general, but proto3 has no distinction between unset and
	// default. Need to avoid in the final format.
	return p.metricBytes.Bytes(), nil, v
}
// Histogram returns the bytes of a series with a native histogram as a value,
// the timestamp if set, and the native histogram in the current sample.
//
// The Compact method is called before returning the Histogram (or FloatHistogram).
//
// If the SampleCountFloat or the ZeroCountFloat in the proto message is > 0,
// the histogram is parsed and returned as a FloatHistogram and nil is returned
// as the (integer) Histogram return value. Otherwise, it is parsed and returned
// as an (integer) Histogram and nil is returned as the FloatHistogram return
// value.
func (p *ProtobufParser) Histogram() ([]byte, *int64, *histogram.Histogram, *histogram.FloatHistogram) {
	var (
		m  = p.mf.GetMetric()[p.metricPos]
		ts = m.GetTimestampMs()
		h  = m.GetHistogram()
	)
	if h.GetSampleCountFloat() > 0 || h.GetZeroCountFloat() > 0 {
		// It is a float histogram.
		fh := histogram.FloatHistogram{
			Count:           h.GetSampleCountFloat(),
			Sum:             h.GetSampleSum(),
			ZeroThreshold:   h.GetZeroThreshold(),
			ZeroCount:       h.GetZeroCountFloat(),
			Schema:          h.GetSchema(),
			PositiveSpans:   make([]histogram.Span, len(h.GetPositiveSpan())),
			PositiveBuckets: h.GetPositiveCount(),
			NegativeSpans:   make([]histogram.Span, len(h.GetNegativeSpan())),
			NegativeBuckets: h.GetNegativeCount(),
		}
		for i, span := range h.GetPositiveSpan() {
			fh.PositiveSpans[i].Offset = span.GetOffset()
			fh.PositiveSpans[i].Length = span.GetLength()
		}
		for i, span := range h.GetNegativeSpan() {
			fh.NegativeSpans[i].Offset = span.GetOffset()
			fh.NegativeSpans[i].Length = span.GetLength()
		}
		fh.Compact(0)
		if ts != 0 {
			return p.metricBytes.Bytes(), &ts, nil, &fh
		}
		// Nasty hack: Assume that ts==0 means no timestamp. That's not true in
		// general, but proto3 has no distinction between unset and
		// default. Need to avoid in the final format.
		return p.metricBytes.Bytes(), nil, nil, &fh
	}

	sh := histogram.Histogram{
		Count:           h.GetSampleCount(),
		Sum:             h.GetSampleSum(),
		ZeroThreshold:   h.GetZeroThreshold(),
		ZeroCount:       h.GetZeroCount(),
		Schema:          h.GetSchema(),
		PositiveSpans:   make([]histogram.Span, len(h.GetPositiveSpan())),
		PositiveBuckets: h.GetPositiveDelta(),
		NegativeSpans:   make([]histogram.Span, len(h.GetNegativeSpan())),
		NegativeBuckets: h.GetNegativeDelta(),
	}
	for i, span := range h.GetPositiveSpan() {
		sh.PositiveSpans[i].Offset = span.GetOffset()
		sh.PositiveSpans[i].Length = span.GetLength()
	}
	for i, span := range h.GetNegativeSpan() {
		sh.NegativeSpans[i].Offset = span.GetOffset()
		sh.NegativeSpans[i].Length = span.GetLength()
	}
	sh.Compact(0)
	if ts != 0 {
		return p.metricBytes.Bytes(), &ts, &sh, nil
	}
	return p.metricBytes.Bytes(), nil, &sh, nil
}
// Help returns the metric name and help text in the current entry.
// Must only be called after Next returned a help entry.
// The returned byte slices become invalid after the next call to Next.
func (p *ProtobufParser) Help() ([]byte, []byte) {
	return p.metricBytes.Bytes(), []byte(p.mf.GetHelp())
}

// Type returns the metric name and type in the current entry.
// Must only be called after Next returned a type entry.
// The returned byte slices become invalid after the next call to Next.
func (p *ProtobufParser) Type() ([]byte, MetricType) {
	n := p.metricBytes.Bytes()
	switch p.mf.GetType() {
	case dto.MetricType_COUNTER:
		return n, MetricTypeCounter
	case dto.MetricType_GAUGE:
		return n, MetricTypeGauge
	case dto.MetricType_HISTOGRAM:
		return n, MetricTypeHistogram
	case dto.MetricType_SUMMARY:
		return n, MetricTypeSummary
	}
	return n, MetricTypeUnknown
}

// Unit always returns (nil, nil) because units aren't supported by the protobuf
// format.
func (p *ProtobufParser) Unit() ([]byte, []byte) {
	return nil, nil
}

// Comment always returns nil because comments aren't supported by the protobuf
// format.
func (p *ProtobufParser) Comment() []byte {
	return nil
}

// Metric writes the labels of the current sample into the passed labels.
// It returns the string from which the metric was parsed.
func (p *ProtobufParser) Metric(l *labels.Labels) string {
	*l = append(*l, labels.Label{
		Name:  labels.MetricName,
		Value: p.getMagicName(),
	})

	for _, lp := range p.mf.GetMetric()[p.metricPos].GetLabel() {
		*l = append(*l, labels.Label{
			Name:  lp.GetName(),
			Value: lp.GetValue(),
		})
	}
	if needed, name, value := p.getMagicLabel(); needed {
		*l = append(*l, labels.Label{Name: name, Value: value})
	}

	// Sort labels to maintain the sorted labels invariant.
	sort.Sort(*l)

	return p.metricBytes.String()
}

// Exemplar writes the exemplar of the current sample into the passed
// exemplar. It returns whether an exemplar exists or not. In case of a native
// histogram, the legacy bucket section is still used for exemplars. To ingest
// all exemplars, call the Exemplar method repeatedly until it returns false.
func (p *ProtobufParser) Exemplar(ex *exemplar.Exemplar) bool {
	m := p.mf.GetMetric()[p.metricPos]
	var exProto *dto.Exemplar
	switch p.mf.GetType() {
	case dto.MetricType_COUNTER:
		exProto = m.GetCounter().GetExemplar()
	case dto.MetricType_HISTOGRAM:
		bb := m.GetHistogram().GetBucket()
		if p.fieldPos < 0 {
			if p.state == EntrySeries {
				return false // At _count or _sum.
			}
			p.fieldPos = 0 // Start at 1st bucket for native histograms.
		}
		for p.fieldPos < len(bb) {
			exProto = bb[p.fieldPos].GetExemplar()
			if p.state == EntrySeries {
				break
			}
			p.fieldPos++
			if exProto != nil {
				break
			}
		}
	default:
		return false
	}
	if exProto == nil {
		return false
	}
	ex.Value = exProto.GetValue()
	if ts := exProto.GetTimestamp(); ts != nil {
		ex.HasTs = true
		ex.Ts = ts.GetSeconds()*1000 + int64(ts.GetNanos()/1_000_000)
	}
	for _, lp := range exProto.GetLabel() {
		ex.Labels = append(ex.Labels, labels.Label{
			Name:  lp.GetName(),
			Value: lp.GetValue(),
		})
	}
	return true
}
// Next advances the parser to the next "sample" (emulating the behavior of a
|
||||||
|
// text format parser). It returns (EntryInvalid, io.EOF) if no samples were
|
||||||
|
// read.
|
||||||
|
func (p *ProtobufParser) Next() (Entry, error) {
|
||||||
|
switch p.state {
|
||||||
|
case EntryInvalid:
|
||||||
|
p.metricPos = 0
|
||||||
|
p.fieldPos = -2
|
||||||
|
n, err := readDelimited(p.in[p.inPos:], p.mf)
|
||||||
|
p.inPos += n
|
||||||
|
if err != nil {
|
||||||
|
return p.state, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Skip empty metric families.
|
||||||
|
if len(p.mf.GetMetric()) == 0 {
|
||||||
|
return p.Next()
|
||||||
|
}
|
||||||
|
|
||||||
|
// We are at the beginning of a metric family. Put only the name
|
||||||
|
// into metricBytes and validate only name and help for now.
|
||||||
|
name := p.mf.GetName()
|
||||||
|
if !model.IsValidMetricName(model.LabelValue(name)) {
|
||||||
|
return EntryInvalid, errors.Errorf("invalid metric name: %s", name)
|
||||||
|
}
|
||||||
|
if help := p.mf.GetHelp(); !utf8.ValidString(help) {
|
||||||
|
return EntryInvalid, errors.Errorf("invalid help for metric %q: %s", name, help)
|
||||||
|
}
|
||||||
|
p.metricBytes.Reset()
|
||||||
|
p.metricBytes.WriteString(name)
|
||||||
|
|
||||||
|
p.state = EntryHelp
|
||||||
|
case EntryHelp:
|
||||||
|
p.state = EntryType
|
||||||
|
case EntryType:
|
||||||
|
if p.mf.GetType() == dto.MetricType_HISTOGRAM &&
|
||||||
|
isNativeHistogram(p.mf.GetMetric()[0].GetHistogram()) {
|
||||||
|
p.state = EntryHistogram
|
||||||
|
} else {
|
||||||
|
p.state = EntrySeries
|
||||||
|
}
|
||||||
|
if err := p.updateMetricBytes(); err != nil {
|
||||||
|
return EntryInvalid, err
|
||||||
|
}
|
||||||
|
case EntryHistogram, EntrySeries:
|
||||||
|
if p.state == EntrySeries && !p.fieldsDone &&
|
||||||
|
(p.mf.GetType() == dto.MetricType_SUMMARY || p.mf.GetType() == dto.MetricType_HISTOGRAM) {
|
||||||
|
p.fieldPos++
|
||||||
|
} else {
|
||||||
|
p.metricPos++
|
||||||
|
p.fieldPos = -2
|
||||||
|
p.fieldsDone = false
|
||||||
|
}
|
||||||
|
if p.metricPos >= len(p.mf.GetMetric()) {
|
||||||
|
p.state = EntryInvalid
|
||||||
|
return p.Next()
|
||||||
|
}
|
||||||
|
if err := p.updateMetricBytes(); err != nil {
|
||||||
|
return EntryInvalid, err
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
return EntryInvalid, errors.Errorf("invalid protobuf parsing state: %d", p.state)
|
||||||
|
}
|
||||||
|
return p.state, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *ProtobufParser) updateMetricBytes() error {
	b := p.metricBytes
	b.Reset()
	b.WriteString(p.getMagicName())
	for _, lp := range p.mf.GetMetric()[p.metricPos].GetLabel() {
		b.WriteByte(model.SeparatorByte)
		n := lp.GetName()
		if !model.LabelName(n).IsValid() {
			return errors.Errorf("invalid label name: %s", n)
		}
		b.WriteString(n)
		b.WriteByte(model.SeparatorByte)
		v := lp.GetValue()
		if !utf8.ValidString(v) {
			return errors.Errorf("invalid label value: %s", v)
		}
		b.WriteString(v)
	}
	if needed, n, v := p.getMagicLabel(); needed {
		b.WriteByte(model.SeparatorByte)
		b.WriteString(n)
		b.WriteByte(model.SeparatorByte)
		b.WriteString(v)
	}
	return nil
}

// getMagicName usually just returns p.mf.GetName() but adds a magic suffix
// ("_count", "_sum", "_bucket") if needed according to the current parser
// state.
func (p *ProtobufParser) getMagicName() string {
	t := p.mf.GetType()
	if p.state == EntryHistogram || (t != dto.MetricType_HISTOGRAM && t != dto.MetricType_SUMMARY) {
		return p.mf.GetName()
	}
	if p.fieldPos == -2 {
		return p.mf.GetName() + "_count"
	}
	if p.fieldPos == -1 {
		return p.mf.GetName() + "_sum"
	}
	if t == dto.MetricType_HISTOGRAM {
		return p.mf.GetName() + "_bucket"
	}
	return p.mf.GetName()
}

// getMagicLabel returns whether a magic label ("quantile" or "le") is needed
// and, if so, its name and value. It also sets p.fieldsDone if applicable.
func (p *ProtobufParser) getMagicLabel() (bool, string, string) {
	if p.state == EntryHistogram || p.fieldPos < 0 {
		return false, "", ""
	}
	switch p.mf.GetType() {
	case dto.MetricType_SUMMARY:
		qq := p.mf.GetMetric()[p.metricPos].GetSummary().GetQuantile()
		q := qq[p.fieldPos]
		p.fieldsDone = p.fieldPos == len(qq)-1
		return true, model.QuantileLabel, formatOpenMetricsFloat(q.GetQuantile())
	case dto.MetricType_HISTOGRAM:
		bb := p.mf.GetMetric()[p.metricPos].GetHistogram().GetBucket()
		if p.fieldPos >= len(bb) {
			p.fieldsDone = true
			return true, model.BucketLabel, "+Inf"
		}
		b := bb[p.fieldPos]
		p.fieldsDone = math.IsInf(b.GetUpperBound(), +1)
		return true, model.BucketLabel, formatOpenMetricsFloat(b.GetUpperBound())
	}
	return false, "", ""
}

var errInvalidVarint = errors.New("protobufparse: invalid varint encountered")

// readDelimited is essentially doing what the function of the same name in
// github.com/matttproud/golang_protobuf_extensions/pbutil is doing, but it is
// specific to a MetricFamily, utilizes the more efficient gogo-protobuf
// unmarshaling, and acts on a byte slice directly without any additional
// staging buffers.
func readDelimited(b []byte, mf *dto.MetricFamily) (n int, err error) {
	if len(b) == 0 {
		return 0, io.EOF
	}
	messageLength, varIntLength := proto.DecodeVarint(b)
	if varIntLength == 0 || varIntLength > binary.MaxVarintLen32 {
		return 0, errInvalidVarint
	}
	totalLength := varIntLength + int(messageLength)
	if totalLength > len(b) {
		return 0, errors.Errorf("protobufparse: insufficient length of buffer, expected at least %d bytes, got %d bytes", totalLength, len(b))
	}
	mf.Reset()
	return totalLength, mf.Unmarshal(b[varIntLength:totalLength])
}

// formatOpenMetricsFloat works like the usual Go string formatting of a float
// but appends ".0" if the resulting number would otherwise contain neither a
// "." nor an "e".
func formatOpenMetricsFloat(f float64) string {
	// A few common cases hardcoded.
	switch {
	case f == 1:
		return "1.0"
	case f == 0:
		return "0.0"
	case f == -1:
		return "-1.0"
	case math.IsNaN(f):
		return "NaN"
	case math.IsInf(f, +1):
		return "+Inf"
	case math.IsInf(f, -1):
		return "-Inf"
	}
	s := fmt.Sprint(f)
	if strings.ContainsAny(s, "e.") {
		return s
	}
	return s + ".0"
}

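As an aside, the suffix rule above is easy to check in isolation. The following is a small standalone sketch; `openMetricsFloat` is a hypothetical copy of the core logic (without the hardcoded fast paths), not the parser's actual entry point:

```go
package main

import (
	"fmt"
	"strings"
)

// openMetricsFloat mirrors the core rule of formatOpenMetricsFloat:
// default Go formatting, plus a ".0" suffix when the result contains
// neither "." nor "e".
func openMetricsFloat(f float64) string {
	s := fmt.Sprint(f)
	if strings.ContainsAny(s, "e.") {
		return s
	}
	return s + ".0"
}

func main() {
	fmt.Println(openMetricsFloat(1))       // 1.0
	fmt.Println(openMetricsFloat(0.00038)) // 0.00038 (already has a ".")
	fmt.Println(openMetricsFloat(250000))  // 250000.0
}
```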
// isNativeHistogram returns false iff the provided histogram has no sparse
// buckets, a zero threshold of 0, and a zero count of 0. In principle, this
// could still be meant to be a native histogram (with a zero threshold of 0 and
// no observations yet), but for now, we'll treat this case as a conventional
// histogram.
//
// TODO(beorn7): In the final format, there should be an unambiguous way of
// deciding if a histogram should be ingested as a conventional one or a native
// one.
func isNativeHistogram(h *dto.Histogram) bool {
	return len(h.GetNegativeDelta()) > 0 ||
		len(h.GetPositiveDelta()) > 0 ||
		h.GetZeroCount() > 0 ||
		h.GetZeroThreshold() > 0
}
681
model/textparse/protobufparse_test.go
Normal file
@@ -0,0 +1,681 @@
// Copyright 2021 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package textparse

import (
	"bytes"
	"encoding/binary"
	"io"
	"testing"

	"github.com/gogo/protobuf/proto"
	"github.com/stretchr/testify/require"

	"github.com/prometheus/prometheus/model/exemplar"
	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"

	dto "github.com/prometheus/prometheus/prompb/io/prometheus/client"
)

func TestProtobufParse(t *testing.T) {
	textMetricFamilies := []string{
		`name: "go_build_info"
help: "Build information about the main Go module."
type: GAUGE
metric: <
  label: <
    name: "checksum"
    value: ""
  >
  label: <
    name: "path"
    value: "github.com/prometheus/client_golang"
  >
  label: <
    name: "version"
    value: "(devel)"
  >
  gauge: <
    value: 1
  >
>

`,
		`name: "go_memstats_alloc_bytes_total"
help: "Total number of bytes allocated, even if freed."
type: COUNTER
metric: <
  counter: <
    value: 1.546544e+06
    exemplar: <
      label: <
        name: "dummyID"
        value: "42"
      >
      value: 12
      timestamp: <
        seconds: 1625851151
        nanos: 233181499
      >
    >
  >
>

`,
		`name: "something_untyped"
help: "Just to test the untyped type."
type: UNTYPED
metric: <
  untyped: <
    value: 42
  >
  timestamp_ms: 1234567
>

`,
		`name: "test_histogram"
help: "Test histogram with many buckets removed to keep it manageable in size."
type: HISTOGRAM
metric: <
  histogram: <
    sample_count: 175
    sample_sum: 0.0008280461746287094
    bucket: <
      cumulative_count: 2
      upper_bound: -0.0004899999999999998
    >
    bucket: <
      cumulative_count: 4
      upper_bound: -0.0003899999999999998
      exemplar: <
        label: <
          name: "dummyID"
          value: "59727"
        >
        value: -0.00039
        timestamp: <
          seconds: 1625851155
          nanos: 146848499
        >
      >
    >
    bucket: <
      cumulative_count: 16
      upper_bound: -0.0002899999999999998
      exemplar: <
        label: <
          name: "dummyID"
          value: "5617"
        >
        value: -0.00029
      >
    >
    schema: 3
    zero_threshold: 2.938735877055719e-39
    zero_count: 2
    negative_span: <
      offset: -162
      length: 1
    >
    negative_span: <
      offset: 23
      length: 4
    >
    negative_delta: 1
    negative_delta: 3
    negative_delta: -2
    negative_delta: -1
    negative_delta: 1
    positive_span: <
      offset: -161
      length: 1
    >
    positive_span: <
      offset: 8
      length: 3
    >
    positive_delta: 1
    positive_delta: 2
    positive_delta: -1
    positive_delta: -1
  >
  timestamp_ms: 1234568
>

`,
		`name: "test_float_histogram"
help: "Test float histogram with many buckets removed to keep it manageable in size."
type: HISTOGRAM
metric: <
  histogram: <
    sample_count: 175
    sample_count_float: 175.0
    sample_sum: 0.0008280461746287094
    bucket: <
      cumulative_count_float: 2.0
      upper_bound: -0.0004899999999999998
    >
    bucket: <
      cumulative_count_float: 4.0
      upper_bound: -0.0003899999999999998
      exemplar: <
        label: <
          name: "dummyID"
          value: "59727"
        >
        value: -0.00039
        timestamp: <
          seconds: 1625851155
          nanos: 146848499
        >
      >
    >
    bucket: <
      cumulative_count_float: 16
      upper_bound: -0.0002899999999999998
      exemplar: <
        label: <
          name: "dummyID"
          value: "5617"
        >
        value: -0.00029
      >
    >
    schema: 3
    zero_threshold: 2.938735877055719e-39
    zero_count_float: 2.0
    negative_span: <
      offset: -162
      length: 1
    >
    negative_span: <
      offset: 23
      length: 4
    >
    negative_count: 1.0
    negative_count: 3.0
    negative_count: -2.0
    negative_count: -1.0
    negative_count: 1.0
    positive_span: <
      offset: -161
      length: 1
    >
    positive_span: <
      offset: 8
      length: 3
    >
    positive_count: 1.0
    positive_count: 2.0
    positive_count: -1.0
    positive_count: -1.0
  >
  timestamp_ms: 1234568
>

`,
		`name: "test_histogram2"
help: "Similar histogram as before but now without sparse buckets."
type: HISTOGRAM
metric: <
  histogram: <
    sample_count: 175
    sample_sum: 0.000828
    bucket: <
      cumulative_count: 2
      upper_bound: -0.00048
    >
    bucket: <
      cumulative_count: 4
      upper_bound: -0.00038
      exemplar: <
        label: <
          name: "dummyID"
          value: "59727"
        >
        value: -0.00038
        timestamp: <
          seconds: 1625851153
          nanos: 146848499
        >
      >
    >
    bucket: <
      cumulative_count: 16
      upper_bound: 1
      exemplar: <
        label: <
          name: "dummyID"
          value: "5617"
        >
        value: -0.000295
      >
    >
    schema: 0
    zero_threshold: 0
  >
>

`,
		`name: "rpc_durations_seconds"
help: "RPC latency distributions."
type: SUMMARY
metric: <
  label: <
    name: "service"
    value: "exponential"
  >
  summary: <
    sample_count: 262
    sample_sum: 0.00025551262820703587
    quantile: <
      quantile: 0.5
      value: 6.442786329648548e-07
    >
    quantile: <
      quantile: 0.9
      value: 1.9435742936658396e-06
    >
    quantile: <
      quantile: 0.99
      value: 4.0471608667037015e-06
    >
  >
>
`,
		`name: "without_quantiles"
help: "A summary without quantiles."
type: SUMMARY
metric: <
  summary: <
    sample_count: 42
    sample_sum: 1.234
  >
>
`,
	}

	varintBuf := make([]byte, binary.MaxVarintLen32)
	inputBuf := &bytes.Buffer{}

	for _, tmf := range textMetricFamilies {
		pb := &dto.MetricFamily{}
		// From text to proto message.
		require.NoError(t, proto.UnmarshalText(tmf, pb))
		// From proto message to binary protobuf.
		protoBuf, err := proto.Marshal(pb)
		require.NoError(t, err)

		// Write first length, then binary protobuf.
		varintLength := binary.PutUvarint(varintBuf, uint64(len(protoBuf)))
		inputBuf.Write(varintBuf[:varintLength])
		inputBuf.Write(protoBuf)
	}

	exp := []struct {
		lset    labels.Labels
		m       string
		t       int64
		v       float64
		typ     MetricType
		help    string
		unit    string
		comment string
		shs     *histogram.Histogram
		fhs     *histogram.FloatHistogram
		e       []exemplar.Exemplar
	}{
		{
			m:    "go_build_info",
			help: "Build information about the main Go module.",
		},
		{
			m:   "go_build_info",
			typ: MetricTypeGauge,
		},
		{
			m: "go_build_info\xFFchecksum\xFF\xFFpath\xFFgithub.com/prometheus/client_golang\xFFversion\xFF(devel)",
			v: 1,
			lset: labels.FromStrings(
				"__name__", "go_build_info",
				"checksum", "",
				"path", "github.com/prometheus/client_golang",
				"version", "(devel)",
			),
		},
		{
			m:    "go_memstats_alloc_bytes_total",
			help: "Total number of bytes allocated, even if freed.",
		},
		{
			m:   "go_memstats_alloc_bytes_total",
			typ: MetricTypeCounter,
		},
		{
			m: "go_memstats_alloc_bytes_total",
			v: 1.546544e+06,
			lset: labels.FromStrings(
				"__name__", "go_memstats_alloc_bytes_total",
			),
			e: []exemplar.Exemplar{
				{Labels: labels.FromStrings("dummyID", "42"), Value: 12, HasTs: true, Ts: 1625851151233},
			},
		},
		{
			m:    "something_untyped",
			help: "Just to test the untyped type.",
		},
		{
			m:   "something_untyped",
			typ: MetricTypeUnknown,
		},
		{
			m: "something_untyped",
			t: 1234567,
			v: 42,
			lset: labels.FromStrings(
				"__name__", "something_untyped",
			),
		},
		{
			m:    "test_histogram",
			help: "Test histogram with many buckets removed to keep it manageable in size.",
		},
		{
			m:   "test_histogram",
			typ: MetricTypeHistogram,
		},
		{
			m: "test_histogram",
			t: 1234568,
			shs: &histogram.Histogram{
				Count:         175,
				ZeroCount:     2,
				Sum:           0.0008280461746287094,
				ZeroThreshold: 2.938735877055719e-39,
				Schema:        3,
				PositiveSpans: []histogram.Span{
					{Offset: -161, Length: 1},
					{Offset: 8, Length: 3},
				},
				NegativeSpans: []histogram.Span{
					{Offset: -162, Length: 1},
					{Offset: 23, Length: 4},
				},
				PositiveBuckets: []int64{1, 2, -1, -1},
				NegativeBuckets: []int64{1, 3, -2, -1, 1},
			},
			lset: labels.FromStrings(
				"__name__", "test_histogram",
			),
			e: []exemplar.Exemplar{
				{Labels: labels.FromStrings("dummyID", "59727"), Value: -0.00039, HasTs: true, Ts: 1625851155146},
				{Labels: labels.FromStrings("dummyID", "5617"), Value: -0.00029, HasTs: false},
			},
		},
		{
			m:    "test_float_histogram",
			help: "Test float histogram with many buckets removed to keep it manageable in size.",
		},
		{
			m:   "test_float_histogram",
			typ: MetricTypeHistogram,
		},
		{
			m: "test_float_histogram",
			t: 1234568,
			fhs: &histogram.FloatHistogram{
				Count:         175.0,
				ZeroCount:     2.0,
				Sum:           0.0008280461746287094,
				ZeroThreshold: 2.938735877055719e-39,
				Schema:        3,
				PositiveSpans: []histogram.Span{
					{Offset: -161, Length: 1},
					{Offset: 8, Length: 3},
				},
				NegativeSpans: []histogram.Span{
					{Offset: -162, Length: 1},
					{Offset: 23, Length: 4},
				},
				PositiveBuckets: []float64{1.0, 2.0, -1.0, -1.0},
				NegativeBuckets: []float64{1.0, 3.0, -2.0, -1.0, 1.0},
			},
			lset: labels.FromStrings(
				"__name__", "test_float_histogram",
			),
			e: []exemplar.Exemplar{
				{Labels: labels.FromStrings("dummyID", "59727"), Value: -0.00039, HasTs: true, Ts: 1625851155146},
				{Labels: labels.FromStrings("dummyID", "5617"), Value: -0.00029, HasTs: false},
			},
		},
		{
			m:    "test_histogram2",
			help: "Similar histogram as before but now without sparse buckets.",
		},
		{
			m:   "test_histogram2",
			typ: MetricTypeHistogram,
		},
		{
			m: "test_histogram2_count",
			v: 175,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_count",
			),
		},
		{
			m: "test_histogram2_sum",
			v: 0.000828,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_sum",
			),
		},
		{
			m: "test_histogram2_bucket\xffle\xff-0.00048",
			v: 2,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_bucket",
				"le", "-0.00048",
			),
		},
		{
			m: "test_histogram2_bucket\xffle\xff-0.00038",
			v: 4,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_bucket",
				"le", "-0.00038",
			),
			e: []exemplar.Exemplar{
				{Labels: labels.FromStrings("dummyID", "59727"), Value: -0.00038, HasTs: true, Ts: 1625851153146},
			},
		},
		{
			m: "test_histogram2_bucket\xffle\xff1.0",
			v: 16,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_bucket",
				"le", "1.0",
			),
			e: []exemplar.Exemplar{
				{Labels: labels.FromStrings("dummyID", "5617"), Value: -0.000295, HasTs: false},
			},
		},
		{
			m: "test_histogram2_bucket\xffle\xff+Inf",
			v: 175,
			lset: labels.FromStrings(
				"__name__", "test_histogram2_bucket",
				"le", "+Inf",
			),
		},
		{
			m:    "rpc_durations_seconds",
			help: "RPC latency distributions.",
		},
		{
			m:   "rpc_durations_seconds",
			typ: MetricTypeSummary,
		},
		{
			m: "rpc_durations_seconds_count\xffservice\xffexponential",
			v: 262,
			lset: labels.FromStrings(
				"__name__", "rpc_durations_seconds_count",
				"service", "exponential",
			),
		},
		{
			m: "rpc_durations_seconds_sum\xffservice\xffexponential",
			v: 0.00025551262820703587,
			lset: labels.FromStrings(
				"__name__", "rpc_durations_seconds_sum",
				"service", "exponential",
			),
		},
		{
			m: "rpc_durations_seconds\xffservice\xffexponential\xffquantile\xff0.5",
			v: 6.442786329648548e-07,
			lset: labels.FromStrings(
				"__name__", "rpc_durations_seconds",
				"quantile", "0.5",
				"service", "exponential",
			),
		},
		{
			m: "rpc_durations_seconds\xffservice\xffexponential\xffquantile\xff0.9",
			v: 1.9435742936658396e-06,
			lset: labels.FromStrings(
				"__name__", "rpc_durations_seconds",
				"quantile", "0.9",
				"service", "exponential",
			),
		},
		{
			m: "rpc_durations_seconds\xffservice\xffexponential\xffquantile\xff0.99",
			v: 4.0471608667037015e-06,
			lset: labels.FromStrings(
				"__name__", "rpc_durations_seconds",
				"quantile", "0.99",
				"service", "exponential",
			),
		},
		{
			m:    "without_quantiles",
			help: "A summary without quantiles.",
		},
		{
			m:   "without_quantiles",
			typ: MetricTypeSummary,
		},
		{
			m: "without_quantiles_count",
			v: 42,
			lset: labels.FromStrings(
				"__name__", "without_quantiles_count",
			),
		},
		{
			m: "without_quantiles_sum",
			v: 1.234,
			lset: labels.FromStrings(
				"__name__", "without_quantiles_sum",
			),
		},
	}

	p := NewProtobufParser(inputBuf.Bytes())
	i := 0

	var res labels.Labels

	for {
		et, err := p.Next()
		if err == io.EOF {
			break
		}
		require.NoError(t, err)

		switch et {
		case EntrySeries:
			m, ts, v := p.Series()

			var e exemplar.Exemplar
			p.Metric(&res)
			found := p.Exemplar(&e)
			require.Equal(t, exp[i].m, string(m))
			if ts != nil {
				require.Equal(t, exp[i].t, *ts)
			} else {
				require.Equal(t, exp[i].t, int64(0))
			}
			require.Equal(t, exp[i].v, v)
			require.Equal(t, exp[i].lset, res)
			if len(exp[i].e) == 0 {
				require.Equal(t, false, found)
			} else {
				require.Equal(t, true, found)
				require.Equal(t, exp[i].e[0], e)
			}
			res = res[:0]

		case EntryHistogram:
			m, ts, shs, fhs := p.Histogram()
			p.Metric(&res)
			require.Equal(t, exp[i].m, string(m))
			if ts != nil {
				require.Equal(t, exp[i].t, *ts)
			} else {
				require.Equal(t, exp[i].t, int64(0))
			}
			require.Equal(t, exp[i].lset, res)
			res = res[:0]
			require.Equal(t, exp[i].m, string(m))
			if shs != nil {
				require.Equal(t, exp[i].shs, shs)
			} else {
				require.Equal(t, exp[i].fhs, fhs)
			}
			j := 0
			for e := (exemplar.Exemplar{}); p.Exemplar(&e); j++ {
				require.Equal(t, exp[i].e[j], e)
				e = exemplar.Exemplar{}
			}
			require.Equal(t, len(exp[i].e), j, "not enough exemplars found")

		case EntryType:
			m, typ := p.Type()
			require.Equal(t, exp[i].m, string(m))
			require.Equal(t, exp[i].typ, typ)

		case EntryHelp:
			m, h := p.Help()
			require.Equal(t, exp[i].m, string(m))
			require.Equal(t, exp[i].help, string(h))

		case EntryUnit:
			m, u := p.Unit()
			require.Equal(t, exp[i].m, string(m))
			require.Equal(t, exp[i].unit, string(u))

		case EntryComment:
			require.Equal(t, exp[i].comment, string(p.Comment()))
		}

		i++
	}
	require.Equal(t, len(exp), i)
}
@@ -13,6 +13,7 @@
 - github.com/prometheus/prometheus/discovery/moby
 - github.com/prometheus/prometheus/discovery/nomad
 - github.com/prometheus/prometheus/discovery/openstack
+- github.com/prometheus/prometheus/discovery/ovhcloud
 - github.com/prometheus/prometheus/discovery/puppetdb
 - github.com/prometheus/prometheus/discovery/scaleway
 - github.com/prometheus/prometheus/discovery/triton
@@ -61,6 +61,9 @@ import (
 	// Register openstack plugin.
 	_ "github.com/prometheus/prometheus/discovery/openstack"
 
+	// Register ovhcloud plugin.
+	_ "github.com/prometheus/prometheus/discovery/ovhcloud"
+
 	// Register puppetdb plugin.
 	_ "github.com/prometheus/prometheus/discovery/puppetdb"
 
@@ -5,14 +5,17 @@ lint:
     ENUM_VALUE_PREFIX:
       - remote.proto
       - types.proto
+      - io/prometheus/client/metrics.proto
     ENUM_ZERO_VALUE_SUFFIX:
       - remote.proto
      - types.proto
+      - io/prometheus/client/metrics.proto
     PACKAGE_DIRECTORY_MATCH:
       - remote.proto
       - types.proto
     PACKAGE_VERSION_SUFFIX:
       - remote.proto
       - types.proto
+      - io/prometheus/client/metrics.proto
 deps:
   - buf.build/gogo/protobuf
@@ -13,5 +13,22 @@
 
 package prompb
 
+import (
+	"sync"
+)
+
 func (m Sample) T() int64   { return m.Timestamp }
 func (m Sample) V() float64 { return m.Value }
+
+func (r *ChunkedReadResponse) PooledMarshal(p *sync.Pool) ([]byte, error) {
+	size := r.Size()
+	data, ok := p.Get().(*[]byte)
+	if ok && cap(*data) >= size {
+		n, err := r.MarshalToSizedBuffer((*data)[:size])
+		if err != nil {
+			return nil, err
+		}
+		return (*data)[:n], nil
+	}
+	return r.Marshal()
+}
3994
prompb/io/prometheus/client/metrics.pb.go
Normal file
File diff suppressed because it is too large
Load diff
146
prompb/io/prometheus/client/metrics.proto
Normal file
@@ -0,0 +1,146 @@
// Copyright 2013 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This is copied and lightly edited from
// github.com/prometheus/client_model/io/prometheus/client/metrics.proto
// and finally converted to proto3 syntax to make it usable for the
// gogo-protobuf approach taken within prometheus/prometheus.

syntax = "proto3";

package io.prometheus.client;
option go_package = "io_prometheus_client";

import "google/protobuf/timestamp.proto";

message LabelPair {
  string name  = 1;
  string value = 2;
}

enum MetricType {
  // COUNTER must use the Metric field "counter".
  COUNTER = 0;
  // GAUGE must use the Metric field "gauge".
  GAUGE = 1;
  // SUMMARY must use the Metric field "summary".
  SUMMARY = 2;
  // UNTYPED must use the Metric field "untyped".
  UNTYPED = 3;
  // HISTOGRAM must use the Metric field "histogram".
  HISTOGRAM = 4;
  // GAUGE_HISTOGRAM must use the Metric field "histogram".
  GAUGE_HISTOGRAM = 5;
}

message Gauge {
  double value = 1;
}

message Counter {
  double value      = 1;
  Exemplar exemplar = 2;
}

message Quantile {
  double quantile = 1;
  double value    = 2;
}

message Summary {
  uint64 sample_count        = 1;
  double sample_sum          = 2;
  repeated Quantile quantile = 3;
}

message Untyped {
  double value = 1;
}

message Histogram {
  uint64 sample_count       = 1;
  double sample_count_float = 4; // Overrides sample_count if > 0.
  double sample_sum         = 2;
  // Buckets for the conventional histogram.
  repeated Bucket bucket = 3; // Ordered in increasing order of upper_bound, +Inf bucket is optional.

  // Everything below here is for native histograms (also known as sparse histograms).
  // Native histograms are an experimental feature without stability guarantees.

  // schema defines the bucket schema. Currently, valid numbers are -4 <= n <= 8.
  // They are all for base-2 bucket schemas, where 1 is a bucket boundary in each case, and
  // then each power of two is divided into 2^n logarithmic buckets.
  // Or in other words, each bucket boundary is the previous boundary times 2^(2^-n).
  // In the future, more bucket schemas may be added using numbers < -4 or > 8.
  sint32 schema           = 5;
  double zero_threshold   = 6; // Breadth of the zero bucket.
  uint64 zero_count       = 7; // Count in zero bucket.
  double zero_count_float = 8; // Overrides zero_count if > 0.

  // Negative buckets for the native histogram.
  repeated BucketSpan negative_span = 9;
  // Use either "negative_delta" or "negative_count", the former for
  // regular histograms with integer counts, the latter for float
  // histograms.
  repeated sint64 negative_delta = 10; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
  repeated double negative_count = 11; // Absolute count of each bucket.

  // Positive buckets for the native histogram.
  repeated BucketSpan positive_span = 12;
  // Use either "positive_delta" or "positive_count", the former for
  // regular histograms with integer counts, the latter for float
  // histograms.
  repeated sint64 positive_delta = 13; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
  repeated double positive_count = 14; // Absolute count of each bucket.
}

message Bucket {
|
||||||
|
uint64 cumulative_count = 1; // Cumulative in increasing order.
|
||||||
|
double cumulative_count_float = 4; // Overrides cumulative_count if > 0.
|
||||||
|
double upper_bound = 2; // Inclusive.
|
||||||
|
Exemplar exemplar = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
// A BucketSpan defines a number of consecutive buckets in a native
|
||||||
|
// histogram with their offset. Logically, it would be more
|
||||||
|
// straightforward to include the bucket counts in the Span. However,
|
||||||
|
// the protobuf representation is more compact in the way the data is
|
||||||
|
// structured here (with all the buckets in a single array separate
|
||||||
|
// from the Spans).
|
||||||
|
message BucketSpan {
|
||||||
|
sint32 offset = 1; // Gap to previous span, or starting point for 1st span (which can be negative).
|
||||||
|
uint32 length = 2; // Length of consecutive buckets.
|
||||||
|
}
|
||||||
|
|
||||||
|
message Exemplar {
|
||||||
|
repeated LabelPair label = 1;
|
||||||
|
double value = 2;
|
||||||
|
google.protobuf.Timestamp timestamp = 3; // OpenMetrics-style.
|
||||||
|
}
|
||||||
|
|
||||||
|
message Metric {
|
||||||
|
repeated LabelPair label = 1;
|
||||||
|
Gauge gauge = 2;
|
||||||
|
Counter counter = 3;
|
||||||
|
Summary summary = 4;
|
||||||
|
Untyped untyped = 5;
|
||||||
|
Histogram histogram = 7;
|
||||||
|
int64 timestamp_ms = 6;
|
||||||
|
}
|
||||||
|
|
||||||
|
message MetricFamily {
|
||||||
|
string name = 1;
|
||||||
|
string help = 2;
|
||||||
|
MetricType type = 3;
|
||||||
|
repeated Metric metric = 4;
|
||||||
|
}
|
|
@@ -34,8 +34,10 @@ const (
 	// Content-Type: "application/x-protobuf"
 	// Content-Encoding: "snappy"
 	ReadRequest_SAMPLES ReadRequest_ResponseType = 0
-	// Server will stream a delimited ChunkedReadResponse message that contains XOR encoded chunks for a single series.
-	// Each message is following varint size and fixed size bigendian uint32 for CRC32 Castagnoli checksum.
+	// Server will stream a delimited ChunkedReadResponse message that
+	// contains XOR or HISTOGRAM(!) encoded chunks for a single series.
+	// Each message is following varint size and fixed size bigendian
+	// uint32 for CRC32 Castagnoli checksum.
 	//
 	// Response headers:
 	// Content-Type: "application/x-streamed-protobuf; proto=prometheus.ChunkedReadResponse"
@@ -39,8 +39,10 @@ message ReadRequest {
     // Content-Type: "application/x-protobuf"
     // Content-Encoding: "snappy"
     SAMPLES = 0;
-    // Server will stream a delimited ChunkedReadResponse message that contains XOR encoded chunks for a single series.
-    // Each message is following varint size and fixed size bigendian uint32 for CRC32 Castagnoli checksum.
+    // Server will stream a delimited ChunkedReadResponse message that
+    // contains XOR or HISTOGRAM(!) encoded chunks for a single series.
+    // Each message is following varint size and fixed size bigendian
+    // uint32 for CRC32 Castagnoli checksum.
     //
     // Response headers:
     // Content-Type: "application/x-streamed-protobuf; proto=prometheus.ChunkedReadResponse"
prompb/types.pb.go (1534 changed lines)
File diff suppressed because it is too large.
@@ -54,13 +54,79 @@ message Exemplar {
   int64 timestamp = 3;
 }
 
+// A native histogram, also known as a sparse histogram.
+// Original design doc:
+// https://docs.google.com/document/d/1cLNv3aufPZb3fNfaJgdaRBZsInZKKIHo9E6HinJVbpM/edit
+// The appendix of this design doc also explains the concept of float
+// histograms. This Histogram message can represent both, the usual
+// integer histogram as well as a float histogram.
+message Histogram {
+  enum ResetHint {
+    UNKNOWN = 0; // Need to test for a counter reset explicitly.
+    YES = 1;     // This is the 1st histogram after a counter reset.
+    NO = 2;      // There was no counter reset between this and the previous Histogram.
+    GAUGE = 3;   // This is a gauge histogram where counter resets don't happen.
+  }
+
+  oneof count { // Count of observations in the histogram.
+    uint64 count_int = 1;
+    double count_float = 2;
+  }
+  double sum = 3; // Sum of observations in the histogram.
+  // The schema defines the bucket schema. Currently, valid numbers
+  // are -4 <= n <= 8. They are all for base-2 bucket schemas, where 1
+  // is a bucket boundary in each case, and then each power of two is
+  // divided into 2^n logarithmic buckets. Or in other words, each
+  // bucket boundary is the previous boundary times 2^(2^-n). In the
+  // future, more bucket schemas may be added using numbers < -4 or >
+  // 8.
+  sint32 schema = 4;
+  double zero_threshold = 5; // Breadth of the zero bucket.
+  oneof zero_count { // Count in zero bucket.
+    uint64 zero_count_int = 6;
+    double zero_count_float = 7;
+  }
+
+  // Negative Buckets.
+  repeated BucketSpan negative_spans = 8;
+  // Use either "negative_deltas" or "negative_counts", the former for
+  // regular histograms with integer counts, the latter for float
+  // histograms.
+  repeated sint64 negative_deltas = 9; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
+  repeated double negative_counts = 10; // Absolute count of each bucket.
+
+  // Positive Buckets.
+  repeated BucketSpan positive_spans = 11;
+  // Use either "positive_deltas" or "positive_counts", the former for
+  // regular histograms with integer counts, the latter for float
+  // histograms.
+  repeated sint64 positive_deltas = 12; // Count delta of each bucket compared to previous one (or to zero for 1st bucket).
+  repeated double positive_counts = 13; // Absolute count of each bucket.
+
+  ResetHint reset_hint = 14;
+  // timestamp is in ms format, see model/timestamp/timestamp.go for
+  // conversion from time.Time to Prometheus timestamp.
+  int64 timestamp = 15;
+}
+
+// A BucketSpan defines a number of consecutive buckets with their
+// offset. Logically, it would be more straightforward to include the
+// bucket counts in the Span. However, the protobuf representation is
+// more compact in the way the data is structured here (with all the
+// buckets in a single array separate from the Spans).
+message BucketSpan {
+  sint32 offset = 1; // Gap to previous span, or starting point for 1st span (which can be negative).
+  uint32 length = 2; // Length of consecutive buckets.
+}
+
 // TimeSeries represents samples and labels for a single time series.
 message TimeSeries {
   // For a timeseries to be valid, and for the samples and exemplars
   // to be ingested by the remote system properly, the labels field is required.
   repeated Label labels = 1 [(gogoproto.nullable) = false];
   repeated Sample samples = 2 [(gogoproto.nullable) = false];
   repeated Exemplar exemplars = 3 [(gogoproto.nullable) = false];
+  repeated Histogram histograms = 4 [(gogoproto.nullable) = false];
 }
 
 message Label {
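The span/delta encoding used for native histogram buckets above can be unpacked in a few lines. This is an illustrative sketch only: the `Span` struct and `expandDeltas` helper are hypothetical, and the absolute bucket indexes shown are illustrative of the offset/length mechanics rather than the exact internal index convention.

```go
package main

import "fmt"

// Span mirrors the BucketSpan message: an offset (gap to the previous
// span, or the starting index for the first span) and a run length of
// consecutive buckets.
type Span struct {
	Offset int32
	Length uint32
}

// expandDeltas unpacks the delta encoding: each delta is a bucket's
// count relative to the previous bucket (or to zero for the first),
// and the spans place the resulting counts at absolute indexes.
func expandDeltas(spans []Span, deltas []int64) map[int32]int64 {
	buckets := map[int32]int64{}
	var count int64
	var idx int32
	d := 0
	for _, s := range spans {
		idx += s.Offset
		for i := uint32(0); i < s.Length; i++ {
			count += deltas[d]
			d++
			buckets[idx] = count
			idx++
		}
	}
	return buckets
}

func main() {
	// Same shape as the PositiveSpans/PositiveBuckets used in the
	// tests further down: two spans of length 2 with a one-bucket gap.
	spans := []Span{{Offset: 0, Length: 2}, {Offset: 1, Length: 2}}
	fmt.Println(expandDeltas(spans, []int64{2, 1, -2, 3})) // map[0:2 1:3 3:1 4:4]
}
```

This is why the message comment calls the flat-array layout more compact: empty bucket runs cost one span entry instead of one zero per bucket.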
@@ -103,8 +169,9 @@ message Chunk {
 
   // We require this to match chunkenc.Encoding.
   enum Encoding {
     UNKNOWN = 0;
     XOR = 1;
+    HISTOGRAM = 2;
   }
   Encoding type = 3;
   bytes data = 4;
promql/engine.go (211 changed lines)
@@ -37,11 +37,13 @@ import (
 	"go.opentelemetry.io/otel/trace"
 	"golang.org/x/exp/slices"
 
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/timestamp"
 	"github.com/prometheus/prometheus/model/value"
 	"github.com/prometheus/prometheus/promql/parser"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/util/stats"
 )
@@ -198,7 +200,6 @@ func (q *query) Exec(ctx context.Context) *Result {
 
 	// Exec query.
 	res, warnings, err := q.ng.exec(ctx, q)
 
 	return &Result{Err: err, Value: res, Warnings: warnings}
 }
@@ -677,7 +678,7 @@ func (ng *Engine) execEvalStmt(ctx context.Context, query *query, s *parser.Eval
 		for i, s := range mat {
 			// Point might have a different timestamp, force it to the evaluation
 			// timestamp as that is when we ran the evaluation.
-			vector[i] = Sample{Metric: s.Metric, Point: Point{V: s.Points[0].V, T: start}}
+			vector[i] = Sample{Metric: s.Metric, Point: Point{V: s.Points[0].V, H: s.Points[0].H, T: start}}
 		}
 		return vector, warnings, nil
 	case parser.ValueTypeScalar:
@@ -981,8 +982,10 @@ func (ev *evaluator) recover(expr parser.Expr, ws *storage.Warnings, errp *error
 	case errWithWarnings:
 		*errp = err.err
 		*ws = append(*ws, err.warnings...)
+	case error:
+		*errp = err
 	default:
-		*errp = e.(error)
+		*errp = fmt.Errorf("%v", err)
 	}
 }
@@ -1011,7 +1014,7 @@ type EvalNodeHelper struct {
 	// Caches.
 	// DropMetricName and label_*.
 	Dmn map[uint64]labels.Labels
-	// funcHistogramQuantile.
+	// funcHistogramQuantile for conventional histograms.
 	signatureToMetricWithBuckets map[string]*metricWithBuckets
 	// label_replace.
 	regex *regexp.Regexp
@@ -1428,7 +1431,7 @@ func (ev *evaluator) eval(expr parser.Expr) (parser.Value, storage.Warnings) {
 				ev.samplesStats.IncrementSamplesAtStep(step, int64(len(points)))
 				enh.Out = outVec[:0]
 				if len(outVec) > 0 {
-					ss.Points = append(ss.Points, Point{V: outVec[0].Point.V, T: ts})
+					ss.Points = append(ss.Points, Point{V: outVec[0].Point.V, H: outVec[0].Point.H, T: ts})
 				}
 				// Only buffer stepRange milliseconds from the second step on.
 				it.ReduceDelta(stepRange)
@@ -1581,10 +1584,10 @@ func (ev *evaluator) eval(expr parser.Expr) (parser.Value, storage.Warnings) {
 
 		for ts, step := ev.startTimestamp, -1; ts <= ev.endTimestamp; ts += ev.interval {
 			step++
-			_, v, ok := ev.vectorSelectorSingle(it, e, ts)
+			_, v, h, ok := ev.vectorSelectorSingle(it, e, ts)
 			if ok {
 				if ev.currentSamples < ev.maxSamples {
-					ss.Points = append(ss.Points, Point{V: v, T: ts})
+					ss.Points = append(ss.Points, Point{V: v, H: h, T: ts})
 					ev.samplesStats.IncrementSamplesAtStep(step, 1)
 					ev.currentSamples++
 				} else {
@@ -1694,6 +1697,7 @@ func (ev *evaluator) eval(expr parser.Expr) (parser.Value, storage.Warnings) {
 					mat[i].Points = append(mat[i].Points, Point{
 						T: ts,
 						V: mat[i].Points[0].V,
+						H: mat[i].Points[0].H,
 					})
 					ev.currentSamples++
 					if ev.currentSamples > ev.maxSamples {
@@ -1719,11 +1723,11 @@ func (ev *evaluator) vectorSelector(node *parser.VectorSelector, ts int64) (Vect
 	for i, s := range node.Series {
 		it.Reset(s.Iterator())
 
-		t, v, ok := ev.vectorSelectorSingle(it, node, ts)
+		t, v, h, ok := ev.vectorSelectorSingle(it, node, ts)
 		if ok {
 			vec = append(vec, Sample{
 				Metric: node.Series[i].Labels(),
-				Point:  Point{V: v, T: t},
+				Point:  Point{V: v, H: h, T: t},
 			})
 
 			ev.currentSamples++
@@ -1738,33 +1742,39 @@ func (ev *evaluator) vectorSelector(node *parser.VectorSelector, ts int64) (Vect
 	return vec, ws
 }
 
-// vectorSelectorSingle evaluates a instant vector for the iterator of one time series.
-func (ev *evaluator) vectorSelectorSingle(it *storage.MemoizedSeriesIterator, node *parser.VectorSelector, ts int64) (int64, float64, bool) {
+// vectorSelectorSingle evaluates an instant vector for the iterator of one time series.
+func (ev *evaluator) vectorSelectorSingle(it *storage.MemoizedSeriesIterator, node *parser.VectorSelector, ts int64) (
+	int64, float64, *histogram.FloatHistogram, bool,
+) {
 	refTime := ts - durationMilliseconds(node.Offset)
 	var t int64
 	var v float64
+	var h *histogram.FloatHistogram
 
-	ok := it.Seek(refTime)
-	if !ok {
+	valueType := it.Seek(refTime)
+	switch valueType {
+	case chunkenc.ValNone:
 		if it.Err() != nil {
 			ev.error(it.Err())
 		}
-	}
-
-	if ok {
+	case chunkenc.ValFloat:
 		t, v = it.At()
+	case chunkenc.ValHistogram, chunkenc.ValFloatHistogram:
+		t, h = it.AtFloatHistogram()
+	default:
+		panic(fmt.Errorf("unknown value type %v", valueType))
 	}
-
-	if !ok || t > refTime {
-		t, v, ok = it.PeekPrev()
+	if valueType == chunkenc.ValNone || t > refTime {
+		var ok bool
+		t, v, _, h, ok = it.PeekPrev()
 		if !ok || t < refTime-durationMilliseconds(ev.lookbackDelta) {
-			return 0, 0, false
+			return 0, 0, nil, false
 		}
 	}
-	if value.IsStaleNaN(v) {
-		return 0, 0, false
+	if value.IsStaleNaN(v) || (h != nil && value.IsStaleNaN(h.Sum)) {
+		return 0, 0, nil, false
 	}
-	return t, v, true
+	return t, v, h, true
 }
 
 var pointPool = sync.Pool{}
@@ -1849,30 +1859,59 @@ func (ev *evaluator) matrixIterSlice(it *storage.BufferedSeriesIterator, mint, m
 		out = out[:0]
 	}
 
-	ok := it.Seek(maxt)
-	if !ok {
+	soughtValueType := it.Seek(maxt)
+	if soughtValueType == chunkenc.ValNone {
 		if it.Err() != nil {
 			ev.error(it.Err())
 		}
 	}
 
 	buf := it.Buffer()
-	for buf.Next() {
-		t, v := buf.At()
-		if value.IsStaleNaN(v) {
-			continue
-		}
-		// Values in the buffer are guaranteed to be smaller than maxt.
-		if t >= mint {
-			if ev.currentSamples >= ev.maxSamples {
-				ev.error(ErrTooManySamples(env))
-			}
-			ev.currentSamples++
-			out = append(out, Point{T: t, V: v})
-		}
-	}
-	// The seeked sample might also be in the range.
-	if ok {
+loop:
+	for {
+		switch buf.Next() {
+		case chunkenc.ValNone:
+			break loop
+		case chunkenc.ValFloatHistogram, chunkenc.ValHistogram:
+			t, h := buf.AtFloatHistogram()
+			if value.IsStaleNaN(h.Sum) {
+				continue loop
+			}
+			// Values in the buffer are guaranteed to be smaller than maxt.
+			if t >= mint {
+				if ev.currentSamples >= ev.maxSamples {
+					ev.error(ErrTooManySamples(env))
+				}
+				ev.currentSamples++
+				out = append(out, Point{T: t, H: h})
+			}
+		case chunkenc.ValFloat:
+			t, v := buf.At()
+			if value.IsStaleNaN(v) {
+				continue loop
+			}
+			// Values in the buffer are guaranteed to be smaller than maxt.
+			if t >= mint {
+				if ev.currentSamples >= ev.maxSamples {
+					ev.error(ErrTooManySamples(env))
+				}
+				ev.currentSamples++
+				out = append(out, Point{T: t, V: v})
+			}
+		}
+	}
+	// The sought sample might also be in the range.
+	switch soughtValueType {
+	case chunkenc.ValFloatHistogram, chunkenc.ValHistogram:
+		t, h := it.AtFloatHistogram()
+		if t == maxt && !value.IsStaleNaN(h.Sum) {
+			if ev.currentSamples >= ev.maxSamples {
+				ev.error(ErrTooManySamples(env))
+			}
+			out = append(out, Point{T: t, H: h})
+			ev.currentSamples++
+		}
+	case chunkenc.ValFloat:
 		t, v := it.At()
 		if t == maxt && !value.IsStaleNaN(v) {
 			if ev.currentSamples >= ev.maxSamples {
|
||||||
|
|
||||||
// Account for potentially swapped sidedness.
|
// Account for potentially swapped sidedness.
|
||||||
vl, vr := ls.V, rs.V
|
vl, vr := ls.V, rs.V
|
||||||
|
hl, hr := ls.H, rs.H
|
||||||
if matching.Card == parser.CardOneToMany {
|
if matching.Card == parser.CardOneToMany {
|
||||||
vl, vr = vr, vl
|
vl, vr = vr, vl
|
||||||
|
hl, hr = hr, hl
|
||||||
}
|
}
|
||||||
value, keep := vectorElemBinop(op, vl, vr)
|
value, histogramValue, keep := vectorElemBinop(op, vl, vr, hl, hr)
|
||||||
if returnBool {
|
if returnBool {
|
||||||
if keep {
|
if keep {
|
||||||
value = 1.0
|
value = 1.0
|
||||||
|
@@ -2068,10 +2109,13 @@ func (ev *evaluator) VectorBinop(op parser.ItemType, lhs, rhs Vector, matching *
 			insertedSigs[insertSig] = struct{}{}
 		}
 
-		enh.Out = append(enh.Out, Sample{
-			Metric: metric,
-			Point:  Point{V: value},
-		})
+		if (hl != nil && hr != nil) || (hl == nil && hr == nil) {
+			// Both lhs and rhs are of same type.
+			enh.Out = append(enh.Out, Sample{
+				Metric: metric,
+				Point:  Point{V: value, H: histogramValue},
+			})
+		}
 	}
 	return enh.Out
 }
@@ -2149,7 +2193,7 @@ func (ev *evaluator) VectorscalarBinop(op parser.ItemType, lhs Vector, rhs Scala
 		if swap {
 			lv, rv = rv, lv
 		}
-		value, keep := vectorElemBinop(op, lv, rv)
+		value, _, keep := vectorElemBinop(op, lv, rv, nil, nil)
 		// Catch cases where the scalar is the LHS in a scalar-vector comparison operation.
 		// We want to always keep the vector element value as the output value, even if it's on the RHS.
 		if op.IsComparisonOperator() && swap {
@@ -2212,45 +2256,56 @@ func scalarBinop(op parser.ItemType, lhs, rhs float64) float64 {
 }
 
 // vectorElemBinop evaluates a binary operation between two Vector elements.
-func vectorElemBinop(op parser.ItemType, lhs, rhs float64) (float64, bool) {
+func vectorElemBinop(op parser.ItemType, lhs, rhs float64, hlhs, hrhs *histogram.FloatHistogram) (float64, *histogram.FloatHistogram, bool) {
 	switch op {
 	case parser.ADD:
-		return lhs + rhs, true
+		if hlhs != nil && hrhs != nil {
+			// The histogram being added must have the larger schema
+			// code (i.e. the higher resolution).
+			if hrhs.Schema >= hlhs.Schema {
+				return 0, hlhs.Copy().Add(hrhs), true
+			}
+			return 0, hrhs.Copy().Add(hlhs), true
+		}
+		return lhs + rhs, nil, true
 	case parser.SUB:
-		return lhs - rhs, true
+		return lhs - rhs, nil, true
 	case parser.MUL:
-		return lhs * rhs, true
+		return lhs * rhs, nil, true
 	case parser.DIV:
-		return lhs / rhs, true
+		return lhs / rhs, nil, true
 	case parser.POW:
-		return math.Pow(lhs, rhs), true
+		return math.Pow(lhs, rhs), nil, true
 	case parser.MOD:
-		return math.Mod(lhs, rhs), true
+		return math.Mod(lhs, rhs), nil, true
 	case parser.EQLC:
-		return lhs, lhs == rhs
+		return lhs, nil, lhs == rhs
 	case parser.NEQ:
-		return lhs, lhs != rhs
+		return lhs, nil, lhs != rhs
 	case parser.GTR:
-		return lhs, lhs > rhs
+		return lhs, nil, lhs > rhs
 	case parser.LSS:
-		return lhs, lhs < rhs
+		return lhs, nil, lhs < rhs
 	case parser.GTE:
-		return lhs, lhs >= rhs
+		return lhs, nil, lhs >= rhs
 	case parser.LTE:
-		return lhs, lhs <= rhs
+		return lhs, nil, lhs <= rhs
 	case parser.ATAN2:
-		return math.Atan2(lhs, rhs), true
+		return math.Atan2(lhs, rhs), nil, true
 	}
 	panic(fmt.Errorf("operator %q not allowed for operations between Vectors", op))
 }
 
 type groupedAggregation struct {
-	labels      labels.Labels
-	value       float64
-	mean        float64
-	groupCount  int
-	heap        vectorByValueHeap
-	reverseHeap vectorByReverseValueHeap
+	hasFloat       bool // Has at least 1 float64 sample aggregated.
+	hasHistogram   bool // Has at least 1 histogram sample aggregated.
+	labels         labels.Labels
+	value          float64
+	histogramValue *histogram.FloatHistogram
+	mean           float64
+	groupCount     int
+	heap           vectorByValueHeap
+	reverseHeap    vectorByReverseValueHeap
 }
 
 // aggregation evaluates an aggregation operation on a Vector. The provided grouping labels
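The ADD branch above always keeps the result at the coarser resolution: the operand with the smaller schema is copied and the finer-schema operand added into it, since a higher-resolution histogram can be merged down into a lower-resolution one but not the reverse. A stand-in sketch of just that branch, where `hist` and `addHistograms` are hypothetical stand-ins rather than `histogram.FloatHistogram` and its methods:

```go
package main

import "fmt"

// hist is a minimal stand-in for a float histogram: just enough state
// to demonstrate the schema rule from vectorElemBinop.
type hist struct {
	Schema int32   // Bucket resolution; larger means finer buckets.
	Count  float64 // Total observation count.
}

func (h *hist) Copy() *hist       { c := *h; return &c }
func (h *hist) Add(o *hist) *hist { h.Count += o.Count; return h }

// addHistograms copies the coarser-schema operand and merges the
// finer-schema operand into it, mirroring the ADD branch above.
func addHistograms(hlhs, hrhs *hist) *hist {
	if hrhs.Schema >= hlhs.Schema {
		return hlhs.Copy().Add(hrhs)
	}
	return hrhs.Copy().Add(hlhs)
}

func main() {
	a := &hist{Schema: 0, Count: 3}
	b := &hist{Schema: 3, Count: 2}
	sum := addHistograms(a, b)
	fmt.Println(sum.Schema, sum.Count) // 0 5
}
```

The copy also keeps the operation side-effect free, so neither input sample is mutated; the SUM aggregation further down applies the same schema rule in place on its accumulated group value.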
@@ -2330,6 +2385,12 @@ func (ev *evaluator) aggregation(op parser.ItemType, grouping []string, without
 				mean:       s.V,
 				groupCount: 1,
 			}
+			if s.H == nil {
+				newAgg.hasFloat = true
+			} else if op == parser.SUM {
+				newAgg.histogramValue = s.H.Copy()
+				newAgg.hasHistogram = true
+			}
 
 			result[groupingKey] = newAgg
 			orderedResult = append(orderedResult, newAgg)
@@ -2364,7 +2425,26 @@ func (ev *evaluator) aggregation(op parser.ItemType, grouping []string, without
 
 		switch op {
 		case parser.SUM:
-			group.value += s.V
+			if s.H != nil {
+				group.hasHistogram = true
+				if group.histogramValue != nil {
+					// The histogram being added must have
+					// an equal or larger schema.
+					if s.H.Schema >= group.histogramValue.Schema {
+						group.histogramValue.Add(s.H)
+					} else {
+						h := s.H.Copy()
+						h.Add(group.histogramValue)
+						group.histogramValue = h
+					}
+				}
+				// Otherwise the aggregation contained floats
+				// previously and will be invalid anyway. No
+				// point in copying the histogram in that case.
+			} else {
+				group.hasFloat = true
+				group.value += s.V
+			}
 
 		case parser.AVG:
 			group.groupCount++
@@ -2498,13 +2578,18 @@ func (ev *evaluator) aggregation(op parser.ItemType, grouping []string, without
 		case parser.QUANTILE:
 			aggr.value = quantile(q, aggr.heap)
 
+		case parser.SUM:
+			if aggr.hasFloat && aggr.hasHistogram {
+				// We cannot aggregate histogram sample with a float64 sample.
+				continue
+			}
 		default:
 			// For other aggregations, we already have the right value.
 		}
 
 		enh.Out = append(enh.Out, Sample{
 			Metric: aggr.labels,
-			Point:  Point{V: aggr.value},
+			Point:  Point{V: aggr.value, H: aggr.histogramValue},
 		})
 	}
 	return enh.Out
@@ -17,6 +17,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"math"
 	"os"
 	"sort"
 	"testing"
@@ -29,10 +30,12 @@ import (
 	"github.com/stretchr/testify/require"
 	"go.uber.org/goleak"
 
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/timestamp"
 	"github.com/prometheus/prometheus/promql/parser"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb"
 )
 
 func TestMain(m *testing.M) {
@ -3147,6 +3150,911 @@ func TestRangeQuery(t *testing.T) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestSparseHistogramRate(t *testing.T) {
|
||||||
|
// TODO(beorn7): Integrate histograms into the PromQL testing framework
|
||||||
|
// and write more tests there.
|
||||||
|
test, err := NewTest(t, "")
|
||||||
|
require.NoError(t, err)
|
||||||
|
defer test.Close()
|
||||||
|
|
||||||
|
seriesName := "sparse_histogram_series"
|
||||||
|
lbls := labels.FromStrings("__name__", seriesName)
|
||||||
|
|
||||||
|
app := test.Storage().Appender(context.TODO())
|
||||||
|
for i, h := range tsdb.GenerateTestHistograms(100) {
|
||||||
|
_, err := app.AppendHistogram(0, lbls, int64(i)*int64(15*time.Second/time.Millisecond), h)
|
||||||
|
require.NoError(t, err)
|
||||||
|
}
|
||||||
|
require.NoError(t, app.Commit())
|
||||||
|
|
||||||
|
require.NoError(t, test.Run())
|
||||||
|
engine := test.QueryEngine()
|
||||||
|
|
||||||
|
queryString := fmt.Sprintf("rate(%s[1m])", seriesName)
|
||||||
|
qry, err := engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(int64(5*time.Minute/time.Millisecond)))
|
||||||
|
require.NoError(t, err)
|
||||||
|
res := qry.Exec(test.Context())
|
||||||
|
require.NoError(t, res.Err)
|
||||||
|
vector, err := res.Vector()
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Len(t, vector, 1)
|
||||||
|
actualHistogram := vector[0].H
|
||||||
|
expectedHistogram := &histogram.FloatHistogram{
|
||||||
|
Schema: 1,
|
||||||
|
ZeroThreshold: 0.001,
|
||||||
|
ZeroCount: 1. / 15.,
|
||||||
|
Count: 4. / 15.,
|
||||||
|
Sum: 1.226666666666667,
|
||||||
|
PositiveSpans: []histogram.Span{{Offset: 0, Length: 2}, {Offset: 1, Length: 2}},
|
||||||
|
PositiveBuckets: []float64{1. / 15., 1. / 15., 1. / 15., 1. / 15.},
|
||||||
|
}
|
||||||
|
require.Equal(t, expectedHistogram, actualHistogram)
|
||||||
|
}
|
||||||
|
|
||||||
+func TestSparseHistogram_HistogramCountAndSum(t *testing.T) {
+	// TODO(codesome): Integrate histograms into the PromQL testing framework
+	// and write more tests there.
+	h := &histogram.Histogram{
+		Count:         24,
+		ZeroCount:     4,
+		ZeroThreshold: 0.001,
+		Sum:           100,
+		Schema:        0,
+		PositiveSpans: []histogram.Span{
+			{Offset: 0, Length: 2},
+			{Offset: 1, Length: 2},
+		},
+		PositiveBuckets: []int64{2, 1, -2, 3},
+		NegativeSpans: []histogram.Span{
+			{Offset: 0, Length: 2},
+			{Offset: 1, Length: 2},
+		},
+		NegativeBuckets: []int64{2, 1, -2, 3},
+	}
+
+	test, err := NewTest(t, "")
+	require.NoError(t, err)
+	t.Cleanup(test.Close)
+
+	seriesName := "sparse_histogram_series"
+	lbls := labels.FromStrings("__name__", seriesName)
+	engine := test.QueryEngine()
+
+	ts := int64(10 * time.Minute / time.Millisecond)
+	app := test.Storage().Appender(context.TODO())
+	_, err = app.AppendHistogram(0, lbls, ts, h)
+	require.NoError(t, err)
+	require.NoError(t, app.Commit())
+
+	queryString := fmt.Sprintf("histogram_count(%s)", seriesName)
+	qry, err := engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(ts))
+	require.NoError(t, err)
+
+	res := qry.Exec(test.Context())
+	require.NoError(t, res.Err)
+
+	vector, err := res.Vector()
+	require.NoError(t, err)
+
+	require.Len(t, vector, 1)
+	require.Nil(t, vector[0].H)
+	require.Equal(t, float64(h.Count), vector[0].V)
+
+	queryString = fmt.Sprintf("histogram_sum(%s)", seriesName)
+	qry, err = engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(ts))
+	require.NoError(t, err)
+
+	res = qry.Exec(test.Context())
+	require.NoError(t, res.Err)
+
+	vector, err = res.Vector()
+	require.NoError(t, err)
+
+	require.Len(t, vector, 1)
+	require.Nil(t, vector[0].H)
+	require.Equal(t, h.Sum, vector[0].V)
+}
+
+func TestSparseHistogram_HistogramQuantile(t *testing.T) {
+	// TODO(codesome): Integrate histograms into the PromQL testing framework
+	// and write more tests there.
+	type subCase struct {
+		quantile string
+		value    float64
+	}
+
+	cases := []struct {
+		text string
+		// Histogram to test.
+		h *histogram.Histogram
+		// Different quantiles to test for this histogram.
+		subCases []subCase
+	}{
+		{
+			text: "all positive buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         12,
+				ZeroCount:     2,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				PositiveSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				PositiveBuckets: []int64{2, 1, -2, 3},
+			},
+			subCases: []subCase{
+				{
+					quantile: "1.0001",
+					value:    math.Inf(1),
+				},
+				{
+					quantile: "1",
+					value:    16,
+				},
+				{
+					quantile: "0.99",
+					value:    15.759999999999998,
+				},
+				{
+					quantile: "0.9",
+					value:    13.600000000000001,
+				},
+				{
+					quantile: "0.6",
+					value:    4.799999999999997,
+				},
+				{
+					quantile: "0.5",
+					value:    1.6666666666666665,
+				},
+				{ // Zero bucket.
+					quantile: "0.1",
+					value:    0.0006000000000000001,
+				},
+				{
+					quantile: "0",
+					value:    0,
+				},
+				{
+					quantile: "-1",
+					value:    math.Inf(-1),
+				},
+			},
+		},
+		{
+			text: "all negative buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         12,
+				ZeroCount:     2,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				NegativeSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				NegativeBuckets: []int64{2, 1, -2, 3},
+			},
+			subCases: []subCase{
+				{
+					quantile: "1.0001",
+					value:    math.Inf(1),
+				},
+				{ // Zero bucket.
+					quantile: "1",
+					value:    0,
+				},
+				{ // Zero bucket.
+					quantile: "0.99",
+					value:    -6.000000000000048e-05,
+				},
+				{ // Zero bucket.
+					quantile: "0.9",
+					value:    -0.0005999999999999996,
+				},
+				{
+					quantile: "0.5",
+					value:    -1.6666666666666667,
+				},
+				{
+					quantile: "0.1",
+					value:    -13.6,
+				},
+				{
+					quantile: "0",
+					value:    -16,
+				},
+				{
+					quantile: "-1",
+					value:    math.Inf(-1),
+				},
+			},
+		},
+		{
+			text: "both positive and negative buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         24,
+				ZeroCount:     4,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				PositiveSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				PositiveBuckets: []int64{2, 1, -2, 3},
+				NegativeSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				NegativeBuckets: []int64{2, 1, -2, 3},
+			},
+			subCases: []subCase{
+				{
+					quantile: "1.0001",
+					value:    math.Inf(1),
+				},
+				{
+					quantile: "1",
+					value:    16,
+				},
+				{
+					quantile: "0.99",
+					value:    15.519999999999996,
+				},
+				{
+					quantile: "0.9",
+					value:    11.200000000000003,
+				},
+				{
+					quantile: "0.7",
+					value:    1.2666666666666657,
+				},
+				{ // Zero bucket.
+					quantile: "0.55",
+					value:    0.0006000000000000005,
+				},
+				{ // Zero bucket.
+					quantile: "0.5",
+					value:    0,
+				},
+				{ // Zero bucket.
+					quantile: "0.45",
+					value:    -0.0005999999999999996,
+				},
+				{
+					quantile: "0.3",
+					value:    -1.266666666666667,
+				},
+				{
+					quantile: "0.1",
+					value:    -11.2,
+				},
+				{
+					quantile: "0.01",
+					value:    -15.52,
+				},
+				{
+					quantile: "0",
+					value:    -16,
+				},
+				{
+					quantile: "-1",
+					value:    math.Inf(-1),
+				},
+			},
+		},
+	}
+
+	test, err := NewTest(t, "")
+	require.NoError(t, err)
+	t.Cleanup(test.Close)
+	for i, c := range cases {
+		t.Run(c.text, func(t *testing.T) {
+			seriesName := "sparse_histogram_series"
+			lbls := labels.FromStrings("__name__", seriesName)
+			engine := test.QueryEngine()
+
+			ts := int64(i+1) * int64(10*time.Minute/time.Millisecond)
+			app := test.Storage().Appender(context.TODO())
+			_, err = app.AppendHistogram(0, lbls, ts, c.h)
+			require.NoError(t, err)
+			require.NoError(t, app.Commit())
+
+			for j, sc := range c.subCases {
+				t.Run(fmt.Sprintf("%d %s", j, sc.quantile), func(t *testing.T) {
+					queryString := fmt.Sprintf("histogram_quantile(%s, %s)", sc.quantile, seriesName)
+					qry, err := engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(ts))
+					require.NoError(t, err)
+
+					res := qry.Exec(test.Context())
+					require.NoError(t, res.Err)
+
+					vector, err := res.Vector()
+					require.NoError(t, err)
+
+					require.Len(t, vector, 1)
+					require.Nil(t, vector[0].H)
+					require.True(t, almostEqual(sc.value, vector[0].V))
+				})
+			}
+		})
+	}
+}
+
+func TestSparseHistogram_HistogramFraction(t *testing.T) {
+	// TODO(codesome): Integrate histograms into the PromQL testing framework
+	// and write more tests there.
+	type subCase struct {
+		lower, upper string
+		value        float64
+	}
+
+	invariantCases := []subCase{
+		{
+			lower: "42",
+			upper: "3.1415",
+			value: 0,
+		},
+		{
+			lower: "0",
+			upper: "0",
+			value: 0,
+		},
+		{
+			lower: "0.000001",
+			upper: "0.000001",
+			value: 0,
+		},
+		{
+			lower: "42",
+			upper: "42",
+			value: 0,
+		},
+		{
+			lower: "-3.1",
+			upper: "-3.1",
+			value: 0,
+		},
+		{
+			lower: "3.1415",
+			upper: "NaN",
+			value: math.NaN(),
+		},
+		{
+			lower: "NaN",
+			upper: "42",
+			value: math.NaN(),
+		},
+		{
+			lower: "NaN",
+			upper: "NaN",
+			value: math.NaN(),
+		},
+		{
+			lower: "-Inf",
+			upper: "+Inf",
+			value: 1,
+		},
+	}
+
+	cases := []struct {
+		text string
+		// Histogram to test.
+		h *histogram.Histogram
+		// Different ranges to test for this histogram.
+		subCases []subCase
+	}{
+		{
+			text: "empty histogram",
+			h:    &histogram.Histogram{},
+			subCases: []subCase{
+				{
+					lower: "3.1415",
+					upper: "42",
+					value: math.NaN(),
+				},
+			},
+		},
+		{
+			text: "all positive buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         12,
+				ZeroCount:     2,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				PositiveSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				PositiveBuckets: []int64{2, 1, -2, 3}, // Abs: 2, 3, 1, 4
+			},
+			subCases: append([]subCase{
+				{
+					lower: "0",
+					upper: "+Inf",
+					value: 1,
+				},
+				{
+					lower: "-Inf",
+					upper: "0",
+					value: 0,
+				},
+				{
+					lower: "-0.001",
+					upper: "0",
+					value: 0,
+				},
+				{
+					lower: "0",
+					upper: "0.001",
+					value: 2. / 12.,
+				},
+				{
+					lower: "0",
+					upper: "0.0005",
+					value: 1. / 12.,
+				},
+				{
+					lower: "0.001",
+					upper: "inf",
+					value: 10. / 12.,
+				},
+				{
+					lower: "-inf",
+					upper: "-0.001",
+					value: 0,
+				},
+				{
+					lower: "1",
+					upper: "2",
+					value: 3. / 12.,
+				},
+				{
+					lower: "1.5",
+					upper: "2",
+					value: 1.5 / 12.,
+				},
+				{
+					lower: "1",
+					upper: "8",
+					value: 4. / 12.,
+				},
+				{
+					lower: "1",
+					upper: "6",
+					value: 3.5 / 12.,
+				},
+				{
+					lower: "1.5",
+					upper: "6",
+					value: 2. / 12.,
+				},
+				{
+					lower: "-2",
+					upper: "-1",
+					value: 0,
+				},
+				{
+					lower: "-2",
+					upper: "-1.5",
+					value: 0,
+				},
+				{
+					lower: "-8",
+					upper: "-1",
+					value: 0,
+				},
+				{
+					lower: "-6",
+					upper: "-1",
+					value: 0,
+				},
+				{
+					lower: "-6",
+					upper: "-1.5",
+					value: 0,
+				},
+			}, invariantCases...),
+		},
+		{
+			text: "all negative buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         12,
+				ZeroCount:     2,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				NegativeSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				NegativeBuckets: []int64{2, 1, -2, 3},
+			},
+			subCases: append([]subCase{
+				{
+					lower: "0",
+					upper: "+Inf",
+					value: 0,
+				},
+				{
+					lower: "-Inf",
+					upper: "0",
+					value: 1,
+				},
+				{
+					lower: "-0.001",
+					upper: "0",
+					value: 2. / 12.,
+				},
+				{
+					lower: "0",
+					upper: "0.001",
+					value: 0,
+				},
+				{
+					lower: "-0.0005",
+					upper: "0",
+					value: 1. / 12.,
+				},
+				{
+					lower: "0.001",
+					upper: "inf",
+					value: 0,
+				},
+				{
+					lower: "-inf",
+					upper: "-0.001",
+					value: 10. / 12.,
+				},
+				{
+					lower: "1",
+					upper: "2",
+					value: 0,
+				},
+				{
+					lower: "1.5",
+					upper: "2",
+					value: 0,
+				},
+				{
+					lower: "1",
+					upper: "8",
+					value: 0,
+				},
+				{
+					lower: "1",
+					upper: "6",
+					value: 0,
+				},
+				{
+					lower: "1.5",
+					upper: "6",
+					value: 0,
+				},
+				{
+					lower: "-2",
+					upper: "-1",
+					value: 3. / 12.,
+				},
+				{
+					lower: "-2",
+					upper: "-1.5",
+					value: 1.5 / 12.,
+				},
+				{
+					lower: "-8",
+					upper: "-1",
+					value: 4. / 12.,
+				},
+				{
+					lower: "-6",
+					upper: "-1",
+					value: 3.5 / 12.,
+				},
+				{
+					lower: "-6",
+					upper: "-1.5",
+					value: 2. / 12.,
+				},
+			}, invariantCases...),
+		},
+		{
+			text: "both positive and negative buckets with zero bucket",
+			h: &histogram.Histogram{
+				Count:         24,
+				ZeroCount:     4,
+				ZeroThreshold: 0.001,
+				Sum:           100, // Does not matter.
+				Schema:        0,
+				PositiveSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				PositiveBuckets: []int64{2, 1, -2, 3},
+				NegativeSpans: []histogram.Span{
+					{Offset: 0, Length: 2},
+					{Offset: 1, Length: 2},
+				},
+				NegativeBuckets: []int64{2, 1, -2, 3},
+			},
+			subCases: append([]subCase{
+				{
+					lower: "0",
+					upper: "+Inf",
+					value: 0.5,
+				},
+				{
+					lower: "-Inf",
+					upper: "0",
+					value: 0.5,
+				},
+				{
+					lower: "-0.001",
+					upper: "0",
+					value: 2. / 24,
+				},
+				{
+					lower: "0",
+					upper: "0.001",
+					value: 2. / 24.,
+				},
+				{
+					lower: "-0.0005",
+					upper: "0.0005",
+					value: 2. / 24.,
+				},
+				{
+					lower: "0.001",
+					upper: "inf",
+					value: 10. / 24.,
+				},
+				{
+					lower: "-inf",
+					upper: "-0.001",
+					value: 10. / 24.,
+				},
+				{
+					lower: "1",
+					upper: "2",
+					value: 3. / 24.,
+				},
+				{
+					lower: "1.5",
+					upper: "2",
+					value: 1.5 / 24.,
+				},
+				{
+					lower: "1",
+					upper: "8",
+					value: 4. / 24.,
+				},
+				{
+					lower: "1",
+					upper: "6",
+					value: 3.5 / 24.,
+				},
+				{
+					lower: "1.5",
+					upper: "6",
+					value: 2. / 24.,
+				},
+				{
+					lower: "-2",
+					upper: "-1",
+					value: 3. / 24.,
+				},
+				{
+					lower: "-2",
+					upper: "-1.5",
+					value: 1.5 / 24.,
+				},
+				{
+					lower: "-8",
+					upper: "-1",
+					value: 4. / 24.,
+				},
+				{
+					lower: "-6",
+					upper: "-1",
+					value: 3.5 / 24.,
+				},
+				{
+					lower: "-6",
+					upper: "-1.5",
+					value: 2. / 24.,
+				},
+			}, invariantCases...),
+		},
+	}
+
+	for i, c := range cases {
+		t.Run(c.text, func(t *testing.T) {
+			test, err := NewTest(t, "")
+			require.NoError(t, err)
+			t.Cleanup(test.Close)
+
+			seriesName := "sparse_histogram_series"
+			lbls := labels.FromStrings("__name__", seriesName)
+			engine := test.QueryEngine()
+
+			ts := int64(i+1) * int64(10*time.Minute/time.Millisecond)
+			app := test.Storage().Appender(context.TODO())
+			_, err = app.AppendHistogram(0, lbls, ts, c.h)
+			require.NoError(t, err)
+			require.NoError(t, app.Commit())
+
+			for j, sc := range c.subCases {
+				t.Run(fmt.Sprintf("%d %s %s", j, sc.lower, sc.upper), func(t *testing.T) {
+					queryString := fmt.Sprintf("histogram_fraction(%s, %s, %s)", sc.lower, sc.upper, seriesName)
+					qry, err := engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(ts))
+					require.NoError(t, err)
+
+					res := qry.Exec(test.Context())
+					require.NoError(t, res.Err)
+
+					vector, err := res.Vector()
+					require.NoError(t, err)
+
+					require.Len(t, vector, 1)
+					require.Nil(t, vector[0].H)
+					if math.IsNaN(sc.value) {
+						require.True(t, math.IsNaN(vector[0].V))
+						return
+					}
+					require.Equal(t, sc.value, vector[0].V)
+				})
+			}
+		})
+	}
+}
+
+func TestSparseHistogram_Sum_Count_AddOperator(t *testing.T) {
+	// TODO(codesome): Integrate histograms into the PromQL testing framework
+	// and write more tests there.
+	cases := []struct {
+		histograms []histogram.Histogram
+		expected   histogram.FloatHistogram
+	}{
+		{
+			histograms: []histogram.Histogram{
+				{
+					Schema:        0,
+					Count:         21,
+					Sum:           1234.5,
+					ZeroThreshold: 0.001,
+					ZeroCount:     4,
+					PositiveSpans: []histogram.Span{
+						{Offset: 0, Length: 2},
+						{Offset: 1, Length: 2},
+					},
+					PositiveBuckets: []int64{1, 1, -1, 0},
+					NegativeSpans: []histogram.Span{
+						{Offset: 0, Length: 2},
+						{Offset: 2, Length: 2},
+					},
+					NegativeBuckets: []int64{2, 2, -3, 8},
+				},
+				{
+					Schema:        0,
+					Count:         36,
+					Sum:           2345.6,
+					ZeroThreshold: 0.001,
+					ZeroCount:     5,
+					PositiveSpans: []histogram.Span{
+						{Offset: 0, Length: 4},
+						{Offset: 0, Length: 0},
+						{Offset: 0, Length: 3},
+					},
+					PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
+					NegativeSpans: []histogram.Span{
+						{Offset: 1, Length: 4},
+						{Offset: 2, Length: 0},
+						{Offset: 2, Length: 3},
+					},
+					NegativeBuckets: []int64{1, 3, -2, 5, -2, 0, -3},
+				},
+				{
+					Schema:        0,
+					Count:         36,
+					Sum:           1111.1,
+					ZeroThreshold: 0.001,
+					ZeroCount:     5,
+					PositiveSpans: []histogram.Span{
+						{Offset: 0, Length: 4},
+						{Offset: 0, Length: 0},
+						{Offset: 0, Length: 3},
+					},
+					PositiveBuckets: []int64{1, 2, -2, 1, -1, 0, 0},
+					NegativeSpans: []histogram.Span{
+						{Offset: 1, Length: 4},
+						{Offset: 2, Length: 0},
+						{Offset: 2, Length: 3},
+					},
+					NegativeBuckets: []int64{1, 3, -2, 5, -2, 0, -3},
+				},
+			},
+			expected: histogram.FloatHistogram{
+				Schema:        0,
+				ZeroThreshold: 0.001,
+				ZeroCount:     14,
+				Count:         93,
+				Sum:           4691.2,
+				PositiveSpans: []histogram.Span{
+					{Offset: 0, Length: 3},
+					{Offset: 0, Length: 4},
+				},
+				PositiveBuckets: []float64{3, 8, 2, 5, 3, 2, 2},
+				NegativeSpans: []histogram.Span{
+					{Offset: 0, Length: 4},
+					{Offset: 0, Length: 2},
+					{Offset: 3, Length: 3},
+				},
+				NegativeBuckets: []float64{2, 6, 8, 4, 15, 9, 10, 10, 4},
+			},
+		},
+	}
+
+	for i, c := range cases {
+		t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
+			test, err := NewTest(t, "")
+			require.NoError(t, err)
+			t.Cleanup(test.Close)
+
+			seriesName := "sparse_histogram_series"
+
+			engine := test.QueryEngine()
+
+			ts := int64(i+1) * int64(10*time.Minute/time.Millisecond)
+			app := test.Storage().Appender(context.TODO())
+			for idx, h := range c.histograms {
+				lbls := labels.FromStrings("__name__", seriesName, "idx", fmt.Sprintf("%d", idx))
+				// Since we mutate h later, we need to create a copy here.
+				_, err = app.AppendHistogram(0, lbls, ts, h.Copy())
+				require.NoError(t, err)
+			}
+			require.NoError(t, app.Commit())
+
+			queryAndCheck := func(queryString string, exp Vector) {
+				qry, err := engine.NewInstantQuery(test.Queryable(), nil, queryString, timestamp.Time(ts))
+				require.NoError(t, err)
+
+				res := qry.Exec(test.Context())
+				require.NoError(t, res.Err)
+
+				vector, err := res.Vector()
+				require.NoError(t, err)
+
+				require.Equal(t, exp, vector)
+			}
+
+			// sum().
+			queryString := fmt.Sprintf("sum(%s)", seriesName)
+			queryAndCheck(queryString, []Sample{
+				{Point{T: ts, H: &c.expected}, labels.Labels{}},
+			})
+
+			// + operator.
+			queryString = fmt.Sprintf(`%s{idx="0"}`, seriesName)
+			for idx := 1; idx < len(c.histograms); idx++ {
+				queryString += fmt.Sprintf(` + ignoring(idx) %s{idx="%d"}`, seriesName, idx)
+			}
+			queryAndCheck(queryString, []Sample{
+				{Point{T: ts, H: &c.expected}, labels.Labels{}},
+			})
+
+			// count().
+			queryString = fmt.Sprintf("count(%s)", seriesName)
+			queryAndCheck(queryString, []Sample{
+				{Point{T: ts, V: 3}, labels.Labels{}},
+			})
+		})
+	}
+}
+
 func TestQueryLookbackDelta(t *testing.T) {
 	var (
 		load = `load 5m

@@ -24,6 +24,7 @@ import (
 	"github.com/grafana/regexp"
 	"github.com/prometheus/common/model"

+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/promql/parser"
 )
@@ -66,9 +67,11 @@ func extrapolatedRate(vals []parser.Value, args parser.Expressions, enh *EvalNod
 	ms := args[0].(*parser.MatrixSelector)
 	vs := ms.VectorSelector.(*parser.VectorSelector)
 	var (
 		samples    = vals[0].(Matrix)[0]
 		rangeStart = enh.Ts - durationMilliseconds(ms.Range+vs.Offset)
 		rangeEnd   = enh.Ts - durationMilliseconds(vs.Offset)
+
+		resultValue     float64
+		resultHistogram *histogram.FloatHistogram
 	)

 	// No sense in trying to compute a rate without at least two points. Drop
@@ -77,14 +80,32 @@ func extrapolatedRate(vals []parser.Value, args parser.Expressions, enh *EvalNod
 		return enh.Out
 	}

-	resultValue := samples.Points[len(samples.Points)-1].V - samples.Points[0].V
-	if isCounter {
-		var lastValue float64
-		for _, sample := range samples.Points {
-			if sample.V < lastValue {
-				resultValue += lastValue
+	if samples.Points[0].H != nil {
+		resultHistogram = histogramRate(samples.Points, isCounter)
+		if resultHistogram == nil {
+			// Points are a mix of floats and histograms, or the histograms
+			// are not compatible with each other.
+			// TODO(beorn7): find a way of communicating the exact reason
+			return enh.Out
+		}
+	} else {
+		resultValue = samples.Points[len(samples.Points)-1].V - samples.Points[0].V
+		prevValue := samples.Points[0].V
+		// We have to iterate through everything even in the non-counter
+		// case because we have to check that everything is a float.
+		// TODO(beorn7): Find a way to check that earlier, e.g. by
+		// handing in a []FloatPoint and a []HistogramPoint separately.
+		for _, currPoint := range samples.Points[1:] {
+			if currPoint.H != nil {
+				return nil // Range contains a mix of histograms and floats.
 			}
-			lastValue = sample.V
+			if !isCounter {
+				continue
+			}
+			if currPoint.V < prevValue {
+				resultValue += prevValue
+			}
+			prevValue = currPoint.V
 		}
 	}

@@ -95,6 +116,7 @@ func extrapolatedRate(vals []parser.Value, args parser.Expressions, enh *EvalNod
 	sampledInterval := float64(samples.Points[len(samples.Points)-1].T-samples.Points[0].T) / 1000
 	averageDurationBetweenSamples := sampledInterval / float64(len(samples.Points)-1)

+	// TODO(beorn7): Do this for histograms, too.
 	if isCounter && resultValue > 0 && samples.Points[0].V >= 0 {
 		// Counters cannot be negative. If we have any slope at
 		// all (i.e. resultValue went up), we can extrapolate
@@ -126,16 +148,69 @@ func extrapolatedRate(vals []parser.Value, args parser.Expressions, enh *EvalNod
 	} else {
 		extrapolateToInterval += averageDurationBetweenSamples / 2
 	}
-	resultValue = resultValue * (extrapolateToInterval / sampledInterval)
+	factor := extrapolateToInterval / sampledInterval
 	if isRate {
-		resultValue = resultValue / ms.Range.Seconds()
+		factor /= ms.Range.Seconds()
+	}
+	if resultHistogram == nil {
+		resultValue *= factor
+	} else {
+		resultHistogram.Scale(factor)
 	}

 	return append(enh.Out, Sample{
-		Point: Point{V: resultValue},
+		Point: Point{V: resultValue, H: resultHistogram},
 	})
 }
+
+// histogramRate is a helper function for extrapolatedRate. It requires
+// points[0] to be a histogram. It returns nil if any other Point in points is
+// not a histogram.
+func histogramRate(points []Point, isCounter bool) *histogram.FloatHistogram {
+	prev := points[0].H // We already know that this is a histogram.
+	last := points[len(points)-1].H
+	if last == nil {
+		return nil // Range contains a mix of histograms and floats.
+	}
+	minSchema := prev.Schema
+	if last.Schema < minSchema {
+		minSchema = last.Schema
+	}
+
+	// First iteration to find out two things:
+	// - What's the smallest relevant schema?
+	// - Are all data points histograms?
+	// TODO(beorn7): Find a way to check that earlier, e.g. by handing in a
+	// []FloatPoint and a []HistogramPoint separately.
+	for _, currPoint := range points[1 : len(points)-1] {
+		curr := currPoint.H
+		if curr == nil {
+			return nil // Range contains a mix of histograms and floats.
+		}
+		if !isCounter {
+			continue
+		}
+		if curr.Schema < minSchema {
+			minSchema = curr.Schema
+		}
+	}
+
+	h := last.CopyToSchema(minSchema)
+	h.Sub(prev)
+
+	if isCounter {
+		// Second iteration to deal with counter resets.
+		for _, currPoint := range points[1:] {
+			curr := currPoint.H
+			if curr.DetectReset(prev) {
+				h.Add(prev)
+			}
+			prev = curr
+		}
+	}
+	return h.Compact(0)
+}

 // === delta(Matrix parser.ValueTypeMatrix) Vector ===
 func funcDelta(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
 	return extrapolatedRate(vals, args, enh, false, false)
|
@@ -793,6 +868,59 @@ func funcPredictLinear(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
 	})
 }
 
+// === histogram_count(Vector parser.ValueTypeVector) Vector ===
+func funcHistogramCount(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
+	inVec := vals[0].(Vector)
+
+	for _, sample := range inVec {
+		// Skip non-histogram samples.
+		if sample.H == nil {
+			continue
+		}
+		enh.Out = append(enh.Out, Sample{
+			Metric: enh.DropMetricName(sample.Metric),
+			Point:  Point{V: sample.H.Count},
+		})
+	}
+	return enh.Out
+}
+
+// === histogram_sum(Vector parser.ValueTypeVector) Vector ===
+func funcHistogramSum(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
+	inVec := vals[0].(Vector)
+
+	for _, sample := range inVec {
+		// Skip non-histogram samples.
+		if sample.H == nil {
+			continue
+		}
+		enh.Out = append(enh.Out, Sample{
+			Metric: enh.DropMetricName(sample.Metric),
+			Point:  Point{V: sample.H.Sum},
+		})
+	}
+	return enh.Out
+}
+
+// === histogram_fraction(lower, upper parser.ValueTypeScalar, Vector parser.ValueTypeVector) Vector ===
+func funcHistogramFraction(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
+	lower := vals[0].(Vector)[0].V
+	upper := vals[1].(Vector)[0].V
+	inVec := vals[2].(Vector)
+
+	for _, sample := range inVec {
+		// Skip non-histogram samples.
+		if sample.H == nil {
+			continue
+		}
+		enh.Out = append(enh.Out, Sample{
+			Metric: enh.DropMetricName(sample.Metric),
+			Point:  Point{V: histogramFraction(lower, upper, sample.H)},
+		})
+	}
+	return enh.Out
+}
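The three new functions above share one shape: iterate the input vector, skip float samples, drop the metric name, and emit one derived float per native histogram. A minimal standalone sketch of that pattern, using toy stand-in types rather than the real promql ones:

```go
package main

import "fmt"

// toyHistogram stands in for histogram.FloatHistogram; only the fields the
// sketch needs.
type toyHistogram struct{ Count, Sum float64 }

// toySample stands in for promql.Sample: H == nil marks a plain float sample.
type toySample struct {
	V float64
	H *toyHistogram
}

// mapHistograms applies f to every histogram sample and skips float samples,
// mirroring the loop shape of funcHistogramCount/Sum/Fraction.
func mapHistograms(in []toySample, f func(*toyHistogram) float64) []float64 {
	var out []float64
	for _, s := range in {
		if s.H == nil { // skip non-histogram samples
			continue
		}
		out = append(out, f(s.H))
	}
	return out
}

func main() {
	vec := []toySample{
		{V: 1}, // float sample, ignored
		{H: &toyHistogram{Count: 10, Sum: 42}},
	}
	fmt.Println(mapHistograms(vec, func(h *toyHistogram) float64 { return h.Count }))
}
```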
 
 // === histogram_quantile(k parser.ValueTypeScalar, Vector parser.ValueTypeVector) Vector ===
 func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
 	q := vals[0].(Vector)[0].V
@@ -805,26 +933,57 @@ func funcHistogramQuantile(vals []parser.Value, args parser.Expressions, enh *EvalNodeHelper) Vector {
 			v.buckets = v.buckets[:0]
 		}
 	}
-	for _, el := range inVec {
+
+	var histogramSamples []Sample
+
+	for _, sample := range inVec {
+		// We are only looking for conventional buckets here. Remember
+		// the histograms for later treatment.
+		if sample.H != nil {
+			histogramSamples = append(histogramSamples, sample)
+			continue
+		}
+
 		upperBound, err := strconv.ParseFloat(
-			el.Metric.Get(model.BucketLabel), 64,
+			sample.Metric.Get(model.BucketLabel), 64,
 		)
 		if err != nil {
 			// Oops, no bucket label or malformed label value. Skip.
 			// TODO(beorn7): Issue a warning somehow.
 			continue
 		}
-		enh.lblBuf = el.Metric.BytesWithoutLabels(enh.lblBuf, labels.BucketLabel)
+		enh.lblBuf = sample.Metric.BytesWithoutLabels(enh.lblBuf, labels.BucketLabel)
 		mb, ok := enh.signatureToMetricWithBuckets[string(enh.lblBuf)]
 		if !ok {
-			el.Metric = labels.NewBuilder(el.Metric).
+			sample.Metric = labels.NewBuilder(sample.Metric).
 				Del(excludedLabels...).
 				Labels(nil)
 
-			mb = &metricWithBuckets{el.Metric, nil}
+			mb = &metricWithBuckets{sample.Metric, nil}
 			enh.signatureToMetricWithBuckets[string(enh.lblBuf)] = mb
 		}
-		mb.buckets = append(mb.buckets, bucket{upperBound, el.V})
+		mb.buckets = append(mb.buckets, bucket{upperBound, sample.V})
+	}
+
+	// Now deal with the histograms.
+	for _, sample := range histogramSamples {
+		// We have to reconstruct the exact same signature as above for
+		// a conventional histogram, just ignoring any le label.
+		enh.lblBuf = sample.Metric.Bytes(enh.lblBuf)
+		if mb, ok := enh.signatureToMetricWithBuckets[string(enh.lblBuf)]; ok && len(mb.buckets) > 0 {
+			// At this data point, we have conventional histogram
+			// buckets and a native histogram with the same name and
+			// labels. Do not evaluate anything.
+			// TODO(beorn7): Issue a warning somehow.
+			delete(enh.signatureToMetricWithBuckets, string(enh.lblBuf))
+			continue
+		}
+
+		enh.Out = append(enh.Out, Sample{
+			Metric: enh.DropMetricName(sample.Metric),
+			Point:  Point{V: histogramQuantile(q, sample.H)},
+		})
 	}
 
 	for _, mb := range enh.signatureToMetricWithBuckets {
@@ -1103,7 +1262,10 @@ var FunctionCalls = map[string]FunctionCall{
 	"deriv":              funcDeriv,
 	"exp":                funcExp,
 	"floor":              funcFloor,
+	"histogram_count":    funcHistogramCount,
+	"histogram_fraction": funcHistogramFraction,
 	"histogram_quantile": funcHistogramQuantile,
+	"histogram_sum":      funcHistogramSum,
 	"holt_winters":       funcHoltWinters,
 	"hour":               funcHour,
 	"idelta":             funcIdelta,
@@ -163,6 +163,21 @@ var Functions = map[string]*Function{
 		ArgTypes:   []ValueType{ValueTypeVector},
 		ReturnType: ValueTypeVector,
 	},
+	"histogram_count": {
+		Name:       "histogram_count",
+		ArgTypes:   []ValueType{ValueTypeVector},
+		ReturnType: ValueTypeVector,
+	},
+	"histogram_sum": {
+		Name:       "histogram_sum",
+		ArgTypes:   []ValueType{ValueTypeVector},
+		ReturnType: ValueTypeVector,
+	},
+	"histogram_fraction": {
+		Name:       "histogram_fraction",
+		ArgTypes:   []ValueType{ValueTypeScalar, ValueTypeScalar, ValueTypeVector},
+		ReturnType: ValueTypeVector,
+	},
 	"histogram_quantile": {
 		Name:       "histogram_quantile",
 		ArgTypes:   []ValueType{ValueTypeScalar, ValueTypeVector},
@@ -17,6 +17,7 @@ import (
 	"math"
 	"sort"
 
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 )
 
@@ -119,6 +120,176 @@ func bucketQuantile(q float64, buckets buckets) float64 {
 	return bucketStart + (bucketEnd-bucketStart)*(rank/count)
 }
 
+// histogramQuantile calculates the quantile 'q' based on the given histogram.
+//
+// The quantile value is interpolated assuming a linear distribution within a
+// bucket.
+// TODO(beorn7): Find an interpolation method that is a better fit for
+// exponential buckets (and think about configurable interpolation).
+//
+// A natural lower bound of 0 is assumed if the histogram has only positive
+// buckets. Likewise, a natural upper bound of 0 is assumed if the histogram has
+// only negative buckets.
+// TODO(beorn7): Come to terms if we want that.
+//
+// There are a number of special cases (once we have a way to report errors
+// happening during evaluations of AST functions, we should report those
+// explicitly):
+//
+// If the histogram has 0 observations, NaN is returned.
+//
+// If q<0, -Inf is returned.
+//
+// If q>1, +Inf is returned.
+//
+// If q is NaN, NaN is returned.
+func histogramQuantile(q float64, h *histogram.FloatHistogram) float64 {
+	if q < 0 {
+		return math.Inf(-1)
+	}
+	if q > 1 {
+		return math.Inf(+1)
+	}
+
+	if h.Count == 0 || math.IsNaN(q) {
+		return math.NaN()
+	}
+
+	var (
+		bucket histogram.Bucket[float64]
+		count  float64
+		it     = h.AllBucketIterator()
+		rank   = q * h.Count
+	)
+	for it.Next() {
+		bucket = it.At()
+		count += bucket.Count
+		if count >= rank {
+			break
+		}
+	}
+	if bucket.Lower < 0 && bucket.Upper > 0 {
+		if len(h.NegativeBuckets) == 0 && len(h.PositiveBuckets) > 0 {
+			// The result is in the zero bucket and the histogram has only
+			// positive buckets. So we consider 0 to be the lower bound.
+			bucket.Lower = 0
+		} else if len(h.PositiveBuckets) == 0 && len(h.NegativeBuckets) > 0 {
+			// The result is in the zero bucket and the histogram has only
+			// negative buckets. So we consider 0 to be the upper bound.
+			bucket.Upper = 0
+		}
+	}
+	// Due to numerical inaccuracies, we could end up with a higher count
+	// than h.Count. Thus, make sure count is never higher than h.Count.
+	if count > h.Count {
+		count = h.Count
+	}
+	// We could have hit the highest bucket without even reaching the rank
+	// (this should only happen if the histogram contains observations of
+	// the value NaN), in which case we simply return the upper limit of the
+	// highest explicit bucket.
+	if count < rank {
+		return bucket.Upper
+	}
+
+	rank -= count - bucket.Count
+	// TODO(codesome): Use a better estimation than linear.
+	return bucket.Lower + (bucket.Upper-bucket.Lower)*(rank/bucket.Count)
+}
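The rank walk and linear interpolation at the heart of histogramQuantile can be exercised on their own. A minimal sketch over hypothetical explicit buckets (toy types, none of the schema, zero-bucket, or NaN handling of the real function):

```go
package main

import "fmt"

// toyBucket is a hypothetical explicit bucket: count observations fall
// between lower and upper.
type toyBucket struct{ lower, upper, count float64 }

// quantileSketch mirrors histogramQuantile's core: walk buckets until the
// cumulative count reaches rank = q * total, then interpolate linearly
// inside the bucket that was hit.
func quantileSketch(q float64, buckets []toyBucket) float64 {
	var total float64
	for _, b := range buckets {
		total += b.count
	}
	rank := q * total
	var (
		count float64
		b     toyBucket
	)
	for _, b = range buckets {
		count += b.count
		if count >= rank {
			break
		}
	}
	// Position of the rank inside the chosen bucket, interpolated linearly.
	rank -= count - b.count
	return b.lower + (b.upper-b.lower)*(rank/b.count)
}

func main() {
	buckets := []toyBucket{{0, 1, 10}, {1, 2, 10}}
	// q=0.75 lands halfway into the second bucket.
	fmt.Println(quantileSketch(0.75, buckets))
}
```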
 
+// histogramFraction calculates the fraction of observations between the
+// provided lower and upper bounds, based on the provided histogram.
+//
+// histogramFraction is in a certain way the inverse of histogramQuantile. If
+// histogramQuantile(0.9, h) returns 123.4, then histogramFraction(-Inf, 123.4, h)
+// returns 0.9.
+//
+// The same notes (and TODOs) with regard to interpolation and assumptions about
+// the zero bucket boundaries apply as for histogramQuantile.
+//
+// Whether either boundary is inclusive or exclusive doesn’t actually matter as
+// long as interpolation has to be performed anyway. In the case of a boundary
+// coinciding with a bucket boundary, the inclusive or exclusive nature of the
+// boundary determines the exact behavior of the threshold. With the current
+// implementation, that means that lower is exclusive for positive values and
+// inclusive for negative values, while upper is inclusive for positive values
+// and exclusive for negative values.
+//
+// Special cases:
+//
+// If the histogram has 0 observations, NaN is returned.
+//
+// Use a lower bound of -Inf to get the fraction of all observations below the
+// upper bound.
+//
+// Use an upper bound of +Inf to get the fraction of all observations above the
+// lower bound.
+//
+// If lower or upper is NaN, NaN is returned.
+//
+// If lower >= upper and the histogram has at least 1 observation, zero is returned.
+func histogramFraction(lower, upper float64, h *histogram.FloatHistogram) float64 {
+	if h.Count == 0 || math.IsNaN(lower) || math.IsNaN(upper) {
+		return math.NaN()
+	}
+	if lower >= upper {
+		return 0
+	}
+
+	var (
+		rank, lowerRank, upperRank float64
+		lowerSet, upperSet         bool
+		it                         = h.AllBucketIterator()
+	)
+	for it.Next() {
+		b := it.At()
+		if b.Lower < 0 && b.Upper > 0 {
+			if len(h.NegativeBuckets) == 0 && len(h.PositiveBuckets) > 0 {
+				// This is the zero bucket and the histogram has only
+				// positive buckets. So we consider 0 to be the lower
+				// bound.
+				b.Lower = 0
+			} else if len(h.PositiveBuckets) == 0 && len(h.NegativeBuckets) > 0 {
+				// This is in the zero bucket and the histogram has only
+				// negative buckets. So we consider 0 to be the upper
+				// bound.
+				b.Upper = 0
+			}
+		}
+		if !lowerSet && b.Lower >= lower {
+			lowerRank = rank
+			lowerSet = true
+		}
+		if !upperSet && b.Lower >= upper {
+			upperRank = rank
+			upperSet = true
+		}
+		if lowerSet && upperSet {
+			break
+		}
+		if !lowerSet && b.Lower < lower && b.Upper > lower {
+			lowerRank = rank + b.Count*(lower-b.Lower)/(b.Upper-b.Lower)
+			lowerSet = true
+		}
+		if !upperSet && b.Lower < upper && b.Upper > upper {
+			upperRank = rank + b.Count*(upper-b.Lower)/(b.Upper-b.Lower)
+			upperSet = true
+		}
+		if lowerSet && upperSet {
+			break
+		}
+		rank += b.Count
+	}
+	if !lowerSet || lowerRank > h.Count {
+		lowerRank = h.Count
+	}
+	if !upperSet || upperRank > h.Count {
+		upperRank = h.Count
+	}
+
+	return (upperRank - lowerRank) / h.Count
+}
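The rank accumulation in histogramFraction reduces, for a single threshold, to "fraction of observations at or below x". A minimal sketch over hypothetical explicit buckets (toy types; the real function handles two thresholds, the zero bucket, and NaN):

```go
package main

import "fmt"

// toyBucket is a hypothetical explicit bucket of a histogram.
type toyBucket struct{ lower, upper, count float64 }

// fractionBelow mirrors histogramFraction's rank logic for one threshold:
// buckets entirely below x contribute fully, the bucket containing x
// contributes a linearly interpolated share, everything above contributes
// nothing.
func fractionBelow(x float64, buckets []toyBucket) float64 {
	var rank, total float64
	for _, b := range buckets {
		total += b.count
	}
	for _, b := range buckets {
		if b.lower >= x {
			break // threshold is at or below this bucket's start
		}
		if b.upper > x {
			// Threshold falls inside this bucket; interpolate.
			rank += b.count * (x - b.lower) / (b.upper - b.lower)
			break
		}
		rank += b.count // bucket lies entirely below the threshold
	}
	return rank / total
}

func main() {
	buckets := []toyBucket{{0, 1, 10}, {1, 2, 10}}
	// All of the first bucket plus half of the second: 15 of 20 observations.
	fmt.Println(fractionBelow(1.5, buckets))
}
```

This also illustrates the "inverse" relationship documented above: quantile 0.75 of these buckets is 1.5, and the fraction below 1.5 is 0.75.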
 
 // coalesceBuckets merges buckets with the same upper bound.
 //
 // The input buckets must be sorted.

@@ -21,6 +21,7 @@ import (
 	"github.com/stretchr/testify/require"
 
 	"github.com/prometheus/prometheus/model/labels"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )
 
 func TestLazyLoader_WithSamplesTill(t *testing.T) {
@@ -47,7 +48,7 @@ func TestLazyLoader_WithSamplesTill(t *testing.T) {
 			{
 				Metric: labels.FromStrings("__name__", "metric1"),
 				Points: []Point{
-					{0, 1}, {10000, 2}, {20000, 3}, {30000, 4}, {40000, 5},
+					{0, 1, nil}, {10000, 2, nil}, {20000, 3, nil}, {30000, 4, nil}, {40000, 5, nil},
 				},
 			},
 		},
@@ -58,7 +59,7 @@ func TestLazyLoader_WithSamplesTill(t *testing.T) {
 			{
 				Metric: labels.FromStrings("__name__", "metric1"),
 				Points: []Point{
-					{0, 1}, {10000, 2}, {20000, 3}, {30000, 4}, {40000, 5},
+					{0, 1, nil}, {10000, 2, nil}, {20000, 3, nil}, {30000, 4, nil}, {40000, 5, nil},
 				},
 			},
 		},
@@ -69,7 +70,7 @@ func TestLazyLoader_WithSamplesTill(t *testing.T) {
 			{
 				Metric: labels.FromStrings("__name__", "metric1"),
 				Points: []Point{
-					{0, 1}, {10000, 2}, {20000, 3}, {30000, 4}, {40000, 5}, {50000, 6}, {60000, 7},
+					{0, 1, nil}, {10000, 2, nil}, {20000, 3, nil}, {30000, 4, nil}, {40000, 5, nil}, {50000, 6, nil}, {60000, 7, nil},
 				},
 			},
 		},
@@ -89,13 +90,13 @@ func TestLazyLoader_WithSamplesTill(t *testing.T) {
 			{
 				Metric: labels.FromStrings("__name__", "metric1"),
 				Points: []Point{
-					{0, 1}, {10000, 1}, {20000, 1}, {30000, 1}, {40000, 1}, {50000, 1},
+					{0, 1, nil}, {10000, 1, nil}, {20000, 1, nil}, {30000, 1, nil}, {40000, 1, nil}, {50000, 1, nil},
 				},
 			},
 			{
 				Metric: labels.FromStrings("__name__", "metric2"),
 				Points: []Point{
-					{0, 1}, {10000, 2}, {20000, 3}, {30000, 4}, {40000, 5}, {50000, 6}, {60000, 7}, {70000, 8},
+					{0, 1, nil}, {10000, 2, nil}, {20000, 3, nil}, {30000, 4, nil}, {40000, 5, nil}, {50000, 6, nil}, {60000, 7, nil}, {70000, 8, nil},
 				},
 			},
 		},
@@ -143,7 +144,7 @@ func TestLazyLoader_WithSamplesTill(t *testing.T) {
 				Metric: storageSeries.Labels(),
 			}
 			it := storageSeries.Iterator()
-			for it.Next() {
+			for it.Next() == chunkenc.ValFloat {
 				t, v := it.At()
 				got.Points = append(got.Points, Point{T: t, V: v})
 			}
140
promql/value.go

@@ -20,6 +20,7 @@ import (
 	"strconv"
 	"strings"
 
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/promql/parser"
 	"github.com/prometheus/prometheus/storage"
@@ -63,8 +64,8 @@ func (s Scalar) MarshalJSON() ([]byte, error) {
 
 // Series is a stream of data points belonging to a metric.
 type Series struct {
-	Metric labels.Labels `json:"metric"`
-	Points []Point       `json:"values"`
+	Metric labels.Labels
+	Points []Point
 }
 
 func (s Series) String() string {
@@ -75,15 +76,48 @@ func (s Series) String() string {
 	return fmt.Sprintf("%s =>\n%s", s.Metric, strings.Join(vals, "\n"))
 }
 
+// MarshalJSON is mirrored in web/api/v1/api.go for efficiency reasons.
+// This implementation is still provided for debug purposes and usage
+// without jsoniter.
+func (s Series) MarshalJSON() ([]byte, error) {
+	// Note that this is rather inefficient because it re-creates the whole
+	// series, just separated by Histogram Points and Value Points. For API
+	// purposes, there is a more efficient jsoniter implementation in
+	// web/api/v1/api.go.
+	series := struct {
+		M labels.Labels `json:"metric"`
+		V []Point       `json:"values,omitempty"`
+		H []Point       `json:"histograms,omitempty"`
+	}{
+		M: s.Metric,
+	}
+	for _, p := range s.Points {
+		if p.H == nil {
+			series.V = append(series.V, p)
+			continue
+		}
+		series.H = append(series.H, p)
+	}
+	return json.Marshal(series)
+}
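The split performed by Series.MarshalJSON above — float points under `values`, histogram points under `histograms`, with `omitempty` dropping whichever list stays empty — can be shown in a self-contained form. A sketch with toy types standing in for promql.Point and the anonymous series struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toyPoint stands in for promql.Point: Hist == nil marks a float point.
type toyPoint struct {
	T    int64    `json:"t"`
	V    float64  `json:"v"`
	Hist *float64 `json:"h,omitempty"`
}

// toySeries mirrors the anonymous struct in Series.MarshalJSON above.
type toySeries struct {
	V []toyPoint `json:"values,omitempty"`
	H []toyPoint `json:"histograms,omitempty"`
}

// splitPoints separates float points from histogram points, just like the
// loop in Series.MarshalJSON.
func splitPoints(points []toyPoint) toySeries {
	var s toySeries
	for _, p := range points {
		if p.Hist == nil {
			s.V = append(s.V, p)
			continue
		}
		s.H = append(s.H, p)
	}
	return s
}

func main() {
	sum := 3.5
	b, _ := json.Marshal(splitPoints([]toyPoint{
		{T: 1, V: 2},
		{T: 2, Hist: &sum},
	}))
	fmt.Println(string(b))
}
```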
 
 // Point represents a single data point for a given timestamp.
+// If H is not nil, then this is a histogram point and only (T, H) is valid.
+// If H is nil, then only (T, V) is valid.
 type Point struct {
 	T int64
 	V float64
+	H *histogram.FloatHistogram
 }
 
 func (p Point) String() string {
-	v := strconv.FormatFloat(p.V, 'f', -1, 64)
-	return fmt.Sprintf("%v @[%v]", v, p.T)
+	var s string
+	if p.H != nil {
+		s = p.H.String()
+	} else {
+		s = strconv.FormatFloat(p.V, 'f', -1, 64)
+	}
+	return fmt.Sprintf("%s @[%v]", s, p.T)
 }
 
 // MarshalJSON implements json.Marshaler.
@@ -96,8 +130,45 @@ func (p Point) String() string {
 // slightly different results in terms of formatting and rounding of the
 // timestamp.
 func (p Point) MarshalJSON() ([]byte, error) {
-	v := strconv.FormatFloat(p.V, 'f', -1, 64)
-	return json.Marshal([...]interface{}{float64(p.T) / 1000, v})
+	if p.H == nil {
+		v := strconv.FormatFloat(p.V, 'f', -1, 64)
+		return json.Marshal([...]interface{}{float64(p.T) / 1000, v})
+	}
+	h := struct {
+		Count   string          `json:"count"`
+		Sum     string          `json:"sum"`
+		Buckets [][]interface{} `json:"buckets,omitempty"`
+	}{
+		Count: strconv.FormatFloat(p.H.Count, 'f', -1, 64),
+		Sum:   strconv.FormatFloat(p.H.Sum, 'f', -1, 64),
+	}
+	it := p.H.AllBucketIterator()
+	for it.Next() {
+		bucket := it.At()
+		if bucket.Count == 0 {
+			continue // No need to expose empty buckets in JSON.
+		}
+		boundaries := 2 // Exclusive on both sides AKA open interval.
+		if bucket.LowerInclusive {
+			if bucket.UpperInclusive {
+				boundaries = 3 // Inclusive on both sides AKA closed interval.
+			} else {
+				boundaries = 1 // Inclusive only on lower end AKA right open.
+			}
+		} else {
+			if bucket.UpperInclusive {
+				boundaries = 0 // Inclusive only on upper end AKA left open.
+			}
+		}
+		bucketToMarshal := []interface{}{
+			boundaries,
+			strconv.FormatFloat(bucket.Lower, 'f', -1, 64),
+			strconv.FormatFloat(bucket.Upper, 'f', -1, 64),
+			strconv.FormatFloat(bucket.Count, 'f', -1, 64),
+		}
+		h.Buckets = append(h.Buckets, bucketToMarshal)
+	}
+	return json.Marshal([...]interface{}{float64(p.T) / 1000, h})
 }
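The `boundaries` integer written per bucket in Point.MarshalJSON above encodes the inclusivity of the bucket's two ends. The mapping in isolation:

```go
package main

import "fmt"

// boundariesFlag reproduces the encoding used in Point.MarshalJSON above:
// 0 = left open, 1 = right open, 2 = open on both sides, 3 = closed.
func boundariesFlag(lowerInclusive, upperInclusive bool) int {
	boundaries := 2 // exclusive on both sides, i.e. open interval
	if lowerInclusive {
		if upperInclusive {
			boundaries = 3 // closed interval
		} else {
			boundaries = 1 // right-open interval [lower, upper)
		}
	} else if upperInclusive {
		boundaries = 0 // left-open interval (lower, upper]
	}
	return boundaries
}

func main() {
	// Native histogram buckets are typically left open: (lower, upper].
	fmt.Println(boundariesFlag(false, true))
}
```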
 
 // Sample is a single sample belonging to a metric.
@@ -111,15 +182,27 @@ func (s Sample) String() string {
 	return fmt.Sprintf("%s => %s", s.Metric, s.Point)
 }
 
+// MarshalJSON is mirrored in web/api/v1/api.go with jsoniter because Point
+// wouldn't be marshaled with jsoniter in all cases otherwise.
 func (s Sample) MarshalJSON() ([]byte, error) {
-	v := struct {
+	if s.Point.H == nil {
+		v := struct {
+			M labels.Labels `json:"metric"`
+			V Point         `json:"value"`
+		}{
+			M: s.Metric,
+			V: s.Point,
+		}
+		return json.Marshal(v)
+	}
+	h := struct {
 		M labels.Labels `json:"metric"`
-		V Point         `json:"value"`
+		H Point         `json:"histogram"`
 	}{
 		M: s.Metric,
-		V: s.Point,
+		H: s.Point,
 	}
-	return json.Marshal(v)
+	return json.Marshal(h)
 }
 
 // Vector is basically only an alias for model.Samples, but the
@@ -296,19 +379,23 @@ func newStorageSeriesIterator(series Series) *storageSeriesIterator {
 	}
 }
 
-func (ssi *storageSeriesIterator) Seek(t int64) bool {
+func (ssi *storageSeriesIterator) Seek(t int64) chunkenc.ValueType {
 	i := ssi.curr
 	if i < 0 {
 		i = 0
 	}
 	for ; i < len(ssi.points); i++ {
-		if ssi.points[i].T >= t {
+		p := ssi.points[i]
+		if p.T >= t {
 			ssi.curr = i
-			return true
+			if p.H != nil {
+				return chunkenc.ValFloatHistogram
+			}
+			return chunkenc.ValFloat
 		}
 	}
 	ssi.curr = len(ssi.points) - 1
-	return false
+	return chunkenc.ValNone
 }
 
 func (ssi *storageSeriesIterator) At() (t int64, v float64) {
@@ -316,9 +403,30 @@ func (ssi *storageSeriesIterator) At() (t int64, v float64) {
 	return p.T, p.V
 }
 
-func (ssi *storageSeriesIterator) Next() bool {
+func (ssi *storageSeriesIterator) AtHistogram() (int64, *histogram.Histogram) {
+	panic(errors.New("storageSeriesIterator: AtHistogram not supported"))
+}
+
+func (ssi *storageSeriesIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	p := ssi.points[ssi.curr]
+	return p.T, p.H
+}
+
+func (ssi *storageSeriesIterator) AtT() int64 {
+	p := ssi.points[ssi.curr]
+	return p.T
+}
+
+func (ssi *storageSeriesIterator) Next() chunkenc.ValueType {
 	ssi.curr++
-	return ssi.curr < len(ssi.points)
+	if ssi.curr >= len(ssi.points) {
+		return chunkenc.ValNone
+	}
+	p := ssi.points[ssi.curr]
+	if p.H != nil {
+		return chunkenc.ValFloatHistogram
+	}
+	return chunkenc.ValFloat
 }
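The interface change above is the recurring theme of this sync: Seek/Next no longer answer "is there a sample?" with a bool but report *which kind* of sample is under the cursor, which is why callers elsewhere in the diff now compare against `chunkenc.ValFloat`. A toy sketch of that dispatch, with stand-ins for the chunkenc types:

```go
package main

import "fmt"

// toyValueType stands in for chunkenc.ValueType.
type toyValueType int

const (
	valNone toyValueType = iota
	valFloat
	valFloatHistogram
)

// toyPoint stands in for promql.Point: a non-nil H marks a histogram point.
type toyPoint struct {
	T int64
	H *struct{}
}

// toyIterator mirrors storageSeriesIterator.Next above: advancing reports the
// type of the sample now under the cursor instead of a plain bool.
type toyIterator struct {
	points []toyPoint
	curr   int
}

func (it *toyIterator) Next() toyValueType {
	it.curr++
	if it.curr >= len(it.points) {
		return valNone
	}
	if it.points[it.curr].H != nil {
		return valFloatHistogram
	}
	return valFloat
}

func main() {
	it := &toyIterator{points: []toyPoint{{T: 0}, {T: 1, H: &struct{}{}}}, curr: -1}
	// Callers that only understand floats loop `for it.Next() == valFloat`,
	// exactly as the rules code does with chunkenc.ValFloat.
	for vt := it.Next(); vt != valNone; vt = it.Next() {
		fmt.Println(vt)
	}
}
```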
|
||||||
|
|
||||||
func (ssi *storageSeriesIterator) Err() error {
|
func (ssi *storageSeriesIterator) Err() error {
|
||||||
|
|
|
@ -99,7 +99,7 @@ func TestAlertingRuleLabelsUpdate(t *testing.T) {
|
||||||
|
|
||||||
results := []promql.Vector{
|
results := []promql.Vector{
|
||||||
{
|
{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "HTTPRequestRateLow",
|
"alertname", "HTTPRequestRateLow",
|
||||||
|
@ -112,7 +112,7 @@ func TestAlertingRuleLabelsUpdate(t *testing.T) {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "HTTPRequestRateLow",
|
"alertname", "HTTPRequestRateLow",
|
||||||
|
@ -125,7 +125,7 @@ func TestAlertingRuleLabelsUpdate(t *testing.T) {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "HTTPRequestRateLow",
|
"alertname", "HTTPRequestRateLow",
|
||||||
|
@ -138,7 +138,7 @@ func TestAlertingRuleLabelsUpdate(t *testing.T) {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "HTTPRequestRateLow",
|
"alertname", "HTTPRequestRateLow",
|
||||||
|
@ -209,7 +209,7 @@ func TestAlertingRuleExternalLabelsInTemplate(t *testing.T) {
|
||||||
true, log.NewNopLogger(),
|
true, log.NewNopLogger(),
|
||||||
)
|
)
|
||||||
result := promql.Vector{
|
result := promql.Vector{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "ExternalLabelDoesNotExist",
|
"alertname", "ExternalLabelDoesNotExist",
|
||||||
|
@ -220,7 +220,7 @@ func TestAlertingRuleExternalLabelsInTemplate(t *testing.T) {
|
||||||
),
|
),
|
||||||
Point: promql.Point{V: 1},
|
Point: promql.Point{V: 1},
|
||||||
},
|
},
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "ExternalLabelExists",
|
"alertname", "ExternalLabelExists",
|
||||||
|
@ -303,7 +303,7 @@ func TestAlertingRuleExternalURLInTemplate(t *testing.T) {
|
||||||
true, log.NewNopLogger(),
|
true, log.NewNopLogger(),
|
||||||
)
|
)
|
||||||
result := promql.Vector{
|
result := promql.Vector{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "ExternalURLDoesNotExist",
|
"alertname", "ExternalURLDoesNotExist",
|
||||||
|
@ -314,7 +314,7 @@ func TestAlertingRuleExternalURLInTemplate(t *testing.T) {
|
||||||
),
|
),
|
||||||
Point: promql.Point{V: 1},
|
Point: promql.Point{V: 1},
|
||||||
},
|
},
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "ExternalURLExists",
|
"alertname", "ExternalURLExists",
|
||||||
|
@ -387,7 +387,7 @@ func TestAlertingRuleEmptyLabelFromTemplate(t *testing.T) {
|
||||||
true, log.NewNopLogger(),
|
true, log.NewNopLogger(),
|
||||||
)
|
)
|
||||||
result := promql.Vector{
|
result := promql.Vector{
|
||||||
{
|
promql.Sample{
|
||||||
Metric: labels.FromStrings(
|
Metric: labels.FromStrings(
|
||||||
"__name__", "ALERTS",
|
"__name__", "ALERTS",
|
||||||
"alertname", "EmptyLabel",
|
"alertname", "EmptyLabel",
|
||||||
|
|
|
@@ -39,6 +39,7 @@ import (
 	"github.com/prometheus/prometheus/promql"
 	"github.com/prometheus/prometheus/promql/parser"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/util/strutil"
 )

@@ -201,7 +202,7 @@ func EngineQueryFunc(engine *promql.Engine, q storage.Queryable) QueryFunc {
 			return v, nil
 		case promql.Scalar:
 			return promql.Vector{promql.Sample{
-				Point:  promql.Point(v),
+				Point:  promql.Point{T: v.T, V: v.V},
 				Metric: labels.Labels{},
 			}}, nil
 		default:
@@ -821,7 +822,7 @@ func (g *Group) RestoreForState(ts time.Time) {
 			var t int64
 			var v float64
 			it := s.Iterator()
-			for it.Next() {
+			for it.Next() == chunkenc.ValFloat {
 				t, v = it.At()
 			}
 			if it.Err() != nil {

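The hunks above change series iteration from a boolean `for it.Next()` loop to `for it.Next() == chunkenc.ValFloat`, because iterators now report *what kind* of value comes next instead of a plain yes/no. A minimal sketch of that loop shape against a toy iterator (the `ValueType` constants and `floatIter` type here are simplified stand-ins, not the real `chunkenc` API):

```go
package main

import "fmt"

// ValueType mimics chunkenc.ValueType: Next reports the type of the
// upcoming sample instead of a bare boolean.
type ValueType int

const (
	ValNone      ValueType = iota // iteration exhausted
	ValFloat                      // next sample is a plain float
	ValHistogram                  // next sample is a native histogram
)

// floatIter is a toy iterator over float samples only.
type floatIter struct {
	ts []int64
	vs []float64
	i  int
}

func (it *floatIter) Next() ValueType {
	it.i++
	if it.i >= len(it.ts) {
		return ValNone
	}
	return ValFloat
}

func (it *floatIter) At() (int64, float64) { return it.ts[it.i], it.vs[it.i] }

func main() {
	it := &floatIter{ts: []int64{1, 2, 3}, vs: []float64{10, 20, 30}, i: -1}
	var t int64
	var v float64
	// The new loop shape from the diff: consume only while the next
	// value is a float; histograms (or exhaustion) end the loop.
	for it.Next() == ValFloat {
		t, v = it.At()
	}
	fmt.Println(t, v) // → 3 30
}
```

Comparing against `ValFloat` (rather than `!= ValNone`) is deliberate in `RestoreForState`: it skips histogram samples that the float-only restore path cannot use.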
@@ -39,6 +39,7 @@ import (
 	"github.com/prometheus/prometheus/promql"
 	"github.com/prometheus/prometheus/promql/parser"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/util/teststorage"
 )

@@ -69,7 +70,7 @@ func TestAlertingRule(t *testing.T) {
 		labels.EmptyLabels(), labels.EmptyLabels(), "", true, nil,
 	)
 	result := promql.Vector{
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS",
 				"alertname", "HTTPRequestRateLow",
@@ -81,7 +82,7 @@ func TestAlertingRule(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS",
 				"alertname", "HTTPRequestRateLow",
@@ -93,7 +94,7 @@ func TestAlertingRule(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS",
 				"alertname", "HTTPRequestRateLow",
@@ -105,7 +106,7 @@ func TestAlertingRule(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS",
 				"alertname", "HTTPRequestRateLow",
@@ -214,7 +215,7 @@ func TestForStateAddSamples(t *testing.T) {
 		labels.EmptyLabels(), labels.EmptyLabels(), "", true, nil,
 	)
 	result := promql.Vector{
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS_FOR_STATE",
 				"alertname", "HTTPRequestRateLow",
@@ -225,7 +226,7 @@ func TestForStateAddSamples(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS_FOR_STATE",
 				"alertname", "HTTPRequestRateLow",
@@ -236,7 +237,7 @@ func TestForStateAddSamples(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS_FOR_STATE",
 				"alertname", "HTTPRequestRateLow",
@@ -247,7 +248,7 @@ func TestForStateAddSamples(t *testing.T) {
 			),
 			Point: promql.Point{V: 1},
 		},
-		{
+		promql.Sample{
 			Metric: labels.FromStrings(
 				"__name__", "ALERTS_FOR_STATE",
 				"alertname", "HTTPRequestRateLow",
@@ -612,7 +613,7 @@ func readSeriesSet(ss storage.SeriesSet) (map[string][]promql.Point, error) {

 		points := []promql.Point{}
 		it := series.Iterator()
-		for it.Next() {
+		for it.Next() == chunkenc.ValFloat {
 			t, v := it.At()
 			points = append(points, promql.Point{T: t, V: v})
 		}

@@ -20,6 +20,7 @@ import (
 	"strings"

 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/metadata"
 	"github.com/prometheus/prometheus/storage"
@@ -41,6 +42,10 @@ func (a nopAppender) AppendExemplar(storage.SeriesRef, labels.Labels, exemplar.E
 	return 0, nil
 }

+func (a nopAppender) AppendHistogram(storage.SeriesRef, labels.Labels, int64, *histogram.Histogram) (storage.SeriesRef, error) {
+	return 0, nil
+}
+
 func (a nopAppender) UpdateMetadata(storage.SeriesRef, labels.Labels, metadata.Metadata) (storage.SeriesRef, error) {
 	return 0, nil
 }
@@ -54,17 +59,25 @@ type sample struct {
 	v float64
 }

+type histogramSample struct {
+	t int64
+	h *histogram.Histogram
+}
+
 // collectResultAppender records all samples that were added through the appender.
 // It can be used as its zero value or be backed by another appender it writes samples through.
 type collectResultAppender struct {
 	next             storage.Appender
 	result           []sample
 	pendingResult    []sample
 	rolledbackResult []sample
 	pendingExemplars []exemplar.Exemplar
 	resultExemplars  []exemplar.Exemplar
-	pendingMetadata  []metadata.Metadata
-	resultMetadata   []metadata.Metadata
+	resultHistograms     []histogramSample
+	pendingHistograms    []histogramSample
+	rolledbackHistograms []histogramSample
+	pendingMetadata      []metadata.Metadata
+	resultMetadata       []metadata.Metadata
 }

 func (a *collectResultAppender) Append(ref storage.SeriesRef, lset labels.Labels, t int64, v float64) (storage.SeriesRef, error) {
@@ -97,6 +110,15 @@ func (a *collectResultAppender) AppendExemplar(ref storage.SeriesRef, l labels.L
 	return a.next.AppendExemplar(ref, l, e)
 }

+func (a *collectResultAppender) AppendHistogram(ref storage.SeriesRef, l labels.Labels, t int64, h *histogram.Histogram) (storage.SeriesRef, error) {
+	a.pendingHistograms = append(a.pendingHistograms, histogramSample{h: h, t: t})
+	if a.next == nil {
+		return 0, nil
+	}
+
+	return a.next.AppendHistogram(ref, l, t, h)
+}
+
 func (a *collectResultAppender) UpdateMetadata(ref storage.SeriesRef, l labels.Labels, m metadata.Metadata) (storage.SeriesRef, error) {
 	a.pendingMetadata = append(a.pendingMetadata, m)
 	if ref == 0 {
@@ -112,9 +134,11 @@ func (a *collectResultAppender) UpdateMetadata(ref storage.SeriesRef, l labels.L
 func (a *collectResultAppender) Commit() error {
 	a.result = append(a.result, a.pendingResult...)
 	a.resultExemplars = append(a.resultExemplars, a.pendingExemplars...)
+	a.resultHistograms = append(a.resultHistograms, a.pendingHistograms...)
 	a.resultMetadata = append(a.resultMetadata, a.pendingMetadata...)
 	a.pendingResult = nil
 	a.pendingExemplars = nil
+	a.pendingHistograms = nil
 	a.pendingMetadata = nil
 	if a.next == nil {
 		return nil
@@ -124,7 +148,9 @@ func (a *collectResultAppender) Commit() error {

 func (a *collectResultAppender) Rollback() error {
 	a.rolledbackResult = a.pendingResult
+	a.rolledbackHistograms = a.pendingHistograms
 	a.pendingResult = nil
+	a.pendingHistograms = nil
 	if a.next == nil {
 		return nil
 	}

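The test-appender hunks above extend the pending/result/rolledback bookkeeping with a parallel set of histogram slices, so committed and rolled-back histogram samples can be asserted on separately. The commit/rollback semantics being mirrored can be sketched like this (a simplified stand-in for the test helper, not the real `collectResultAppender`):

```go
package main

import "fmt"

type sample struct {
	t int64
	v float64
}

// collector buffers appended samples until Commit moves them into
// result, or Rollback sets them aside, mirroring storage.Appender
// transaction semantics.
type collector struct {
	pending    []sample
	result     []sample
	rolledback []sample
}

func (c *collector) Append(t int64, v float64) {
	c.pending = append(c.pending, sample{t: t, v: v})
}

// Commit promotes all pending samples to the durable result set.
func (c *collector) Commit() {
	c.result = append(c.result, c.pending...)
	c.pending = nil
}

// Rollback discards pending samples but keeps them for inspection.
func (c *collector) Rollback() {
	c.rolledback = c.pending
	c.pending = nil
}

func main() {
	c := &collector{}
	c.Append(1, 1.5)
	c.Commit()
	c.Append(2, 2.5)
	c.Rollback()
	fmt.Println(len(c.result), len(c.rolledback), len(c.pending)) // → 1 1 0
}
```

In the diff, the same three-slice pattern simply repeats once more for `histogramSample`, which is why `Commit` and `Rollback` each gain one extra line per histogram slice.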
@@ -132,6 +132,9 @@ type Options struct {
 	// Option to enable the experimental in-memory metadata storage and append
 	// metadata to the WAL.
 	EnableMetadataStorage bool
+	// Option to enable protobuf negotiation with the client. Note that the client can already
+	// send protobuf without needing to enable this.
+	EnableProtobufNegotiation bool
 	// Option to increase the interval used by scrape manager to throttle target groups updates.
 	DiscoveryReloadInterval model.Duration

@@ -40,6 +40,7 @@ import (
 	"github.com/prometheus/prometheus/config"
 	"github.com/prometheus/prometheus/discovery/targetgroup"
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/metadata"
 	"github.com/prometheus/prometheus/model/relabel"
@@ -242,6 +243,8 @@ type scrapePool struct {
 	newLoop func(scrapeLoopOptions) loop

 	noDefaultPort bool
+
+	enableProtobufNegotiation bool
 }

 type labelLimits struct {
@@ -283,15 +286,16 @@ func newScrapePool(cfg *config.ScrapeConfig, app storage.Appendable, jitterSeed

 	ctx, cancel := context.WithCancel(context.Background())
 	sp := &scrapePool{
 		cancel:        cancel,
 		appendable:    app,
 		config:        cfg,
 		client:        client,
 		activeTargets: map[uint64]*Target{},
 		loops:         map[uint64]loop{},
 		logger:        logger,
 		httpOpts:      options.HTTPClientOptions,
 		noDefaultPort: options.NoDefaultPort,
+		enableProtobufNegotiation: options.EnableProtobufNegotiation,
 	}
 	sp.newLoop = func(opts scrapeLoopOptions) loop {
 		// Update the targets retrieval function for metadata to a new scrape cache.
@@ -432,8 +436,12 @@ func (sp *scrapePool) reload(cfg *config.ScrapeConfig) error {

 		t := sp.activeTargets[fp]
 		interval, timeout, err := t.intervalAndTimeout(interval, timeout)
+		acceptHeader := scrapeAcceptHeader
+		if sp.enableProtobufNegotiation {
+			acceptHeader = scrapeAcceptHeaderWithProtobuf
+		}
 		var (
-			s       = &targetScraper{Target: t, client: sp.client, timeout: timeout, bodySizeLimit: bodySizeLimit}
+			s       = &targetScraper{Target: t, client: sp.client, timeout: timeout, bodySizeLimit: bodySizeLimit, acceptHeader: acceptHeader}
 			newLoop = sp.newLoop(scrapeLoopOptions{
 				target:  t,
 				scraper: s,
@@ -536,8 +544,11 @@ func (sp *scrapePool) sync(targets []*Target) {
 			// for every target.
 			var err error
 			interval, timeout, err = t.intervalAndTimeout(interval, timeout)
-			s := &targetScraper{Target: t, client: sp.client, timeout: timeout, bodySizeLimit: bodySizeLimit}
+			acceptHeader := scrapeAcceptHeader
+			if sp.enableProtobufNegotiation {
+				acceptHeader = scrapeAcceptHeaderWithProtobuf
+			}
+			s := &targetScraper{Target: t, client: sp.client, timeout: timeout, bodySizeLimit: bodySizeLimit, acceptHeader: acceptHeader}
 			l := sp.newLoop(scrapeLoopOptions{
 				target:  t,
 				scraper: s,
@@ -756,11 +767,15 @@ type targetScraper struct {
 	buf *bufio.Reader

 	bodySizeLimit int64
+	acceptHeader  string
 }

 var errBodySizeLimit = errors.New("body size limit exceeded")

-const acceptHeader = `application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1;q=0.75,text/plain;version=0.0.4;q=0.5,*/*;q=0.1`
+const (
+	scrapeAcceptHeader             = `application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1;q=0.75,text/plain;version=0.0.4;q=0.5,*/*;q=0.1`
+	scrapeAcceptHeaderWithProtobuf = `application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited,application/openmetrics-text;version=1.0.0;q=0.8,application/openmetrics-text;version=0.0.1;q=0.75,text/plain;version=0.0.4;q=0.5,*/*;q=0.1`
+)

 var UserAgent = fmt.Sprintf("Prometheus/%s", version.Version)

@@ -770,7 +785,7 @@ func (s *targetScraper) scrape(ctx context.Context, w io.Writer) (string, error)
 	if err != nil {
 		return "", err
 	}
-	req.Header.Add("Accept", acceptHeader)
+	req.Header.Add("Accept", s.acceptHeader)
 	req.Header.Add("Accept-Encoding", "gzip")
 	req.Header.Set("User-Agent", UserAgent)
 	req.Header.Set("X-Prometheus-Scrape-Timeout-Seconds", strconv.FormatFloat(s.timeout.Seconds(), 'f', -1, 64))
@@ -1510,8 +1525,12 @@ func (sl *scrapeLoop) append(app storage.Appender, b []byte, contentType string,
loop:
	for {
 		var (
 			et          textparse.Entry
-			sampleAdded bool
+			sampleAdded, isHistogram bool
+			met             []byte
+			parsedTimestamp *int64
+			val             float64
+			h               *histogram.Histogram
 		)
 		if et, err = p.Next(); err != nil {
 			if err == io.EOF {
@@ -1531,17 +1550,24 @@ loop:
 			continue
 		case textparse.EntryComment:
 			continue
+		case textparse.EntryHistogram:
+			isHistogram = true
 		default:
 		}
 		total++

 		t := defTime
-		met, tp, v := p.Series()
-		if !sl.honorTimestamps {
-			tp = nil
+		if isHistogram {
+			met, parsedTimestamp, h, _ = p.Histogram()
+			// TODO: ingest float histograms in tsdb.
+		} else {
+			met, parsedTimestamp, val = p.Series()
 		}
-		if tp != nil {
-			t = *tp
+		if !sl.honorTimestamps {
+			parsedTimestamp = nil
+		}
+		if parsedTimestamp != nil {
+			t = *parsedTimestamp
 		}

 		// Zero metadata out for current iteration until it's resolved.
@@ -1594,8 +1620,14 @@ loop:
 			updateMetadata(lset, true)
 		}

-		ref, err = app.Append(ref, lset, t, v)
-		sampleAdded, err = sl.checkAddError(ce, met, tp, err, &sampleLimitErr, &appErrs)
+		if isHistogram {
+			if h != nil {
+				ref, err = app.AppendHistogram(ref, lset, t, h)
+			}
+		} else {
+			ref, err = app.Append(ref, lset, t, val)
+		}
+		sampleAdded, err = sl.checkAddError(ce, met, parsedTimestamp, err, &sampleLimitErr, &appErrs)
 		if err != nil {
 			if err != storage.ErrNotFound {
 				level.Debug(sl.l).Log("msg", "Unexpected error", "series", string(met), "err", err)
@@ -1604,7 +1636,7 @@ loop:
 		}

 		if !ok {
-			if tp == nil {
+			if parsedTimestamp == nil {
 				// Bypass staleness logic if there is an explicit timestamp.
 				sl.cache.trackStaleness(hash, lset)
 			}

@@ -44,6 +44,7 @@ import (
 	"github.com/prometheus/prometheus/model/timestamp"
 	"github.com/prometheus/prometheus/model/value"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/util/teststorage"
 	"github.com/prometheus/prometheus/util/testutil"
 )
@@ -2146,11 +2147,15 @@ func TestTargetScraperScrapeOK(t *testing.T) {
 		expectedTimeout = "1.5"
 	)

+	var protobufParsing bool
+
 	server := httptest.NewServer(
 		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			accept := r.Header.Get("Accept")
-			if !strings.HasPrefix(accept, "application/openmetrics-text;") {
-				t.Errorf("Expected Accept header to prefer application/openmetrics-text, got %q", accept)
+			if protobufParsing {
+				accept := r.Header.Get("Accept")
+				if !strings.HasPrefix(accept, "application/vnd.google.protobuf;") {
+					t.Errorf("Expected Accept header to prefer application/vnd.google.protobuf, got %q", accept)
+				}
 			}

 			timeout := r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds")
@@ -2169,22 +2174,29 @@ func TestTargetScraperScrapeOK(t *testing.T) {
 		panic(err)
 	}

-	ts := &targetScraper{
-		Target: &Target{
-			labels: labels.FromStrings(
-				model.SchemeLabel, serverURL.Scheme,
-				model.AddressLabel, serverURL.Host,
-			),
-		},
-		client:  http.DefaultClient,
-		timeout: configTimeout,
-	}
-	var buf bytes.Buffer
+	runTest := func(acceptHeader string) {
+		ts := &targetScraper{
+			Target: &Target{
+				labels: labels.FromStrings(
+					model.SchemeLabel, serverURL.Scheme,
+					model.AddressLabel, serverURL.Host,
+				),
+			},
+			client:       http.DefaultClient,
+			timeout:      configTimeout,
+			acceptHeader: acceptHeader,
+		}
+		var buf bytes.Buffer

-	contentType, err := ts.scrape(context.Background(), &buf)
-	require.NoError(t, err)
-	require.Equal(t, "text/plain; version=0.0.4", contentType)
-	require.Equal(t, "metric_a 1\nmetric_b 2\n", buf.String())
+		contentType, err := ts.scrape(context.Background(), &buf)
+		require.NoError(t, err)
+		require.Equal(t, "text/plain; version=0.0.4", contentType)
+		require.Equal(t, "metric_a 1\nmetric_b 2\n", buf.String())
+	}
+
+	runTest(scrapeAcceptHeader)
+	protobufParsing = true
+	runTest(scrapeAcceptHeaderWithProtobuf)
 }

 func TestTargetScrapeScrapeCancel(t *testing.T) {
@@ -2209,7 +2221,8 @@ func TestTargetScrapeScrapeCancel(t *testing.T) {
 			model.AddressLabel, serverURL.Host,
 		),
 		},
 		client: http.DefaultClient,
+		acceptHeader: scrapeAcceptHeader,
 	}
 	ctx, cancel := context.WithCancel(context.Background())

@@ -2262,7 +2275,8 @@ func TestTargetScrapeScrapeNotFound(t *testing.T) {
 			model.AddressLabel, serverURL.Host,
 		),
 		},
 		client: http.DefaultClient,
+		acceptHeader: scrapeAcceptHeader,
 	}

 	_, err = ts.scrape(context.Background(), io.Discard)
@@ -2304,6 +2318,7 @@ func TestTargetScraperBodySizeLimit(t *testing.T) {
 		},
 		client:        http.DefaultClient,
 		bodySizeLimit: bodySizeLimit,
+		acceptHeader:  scrapeAcceptHeader,
 	}
 	var buf bytes.Buffer

@@ -2900,7 +2915,7 @@ func TestScrapeReportSingleAppender(t *testing.T) {
 	c := 0
 	for series.Next() {
 		i := series.At().Iterator()
-		for i.Next() {
+		for i.Next() != chunkenc.ValNone {
 			c++
 		}
 	}
@@ -2973,7 +2988,7 @@ func TestScrapeReportLimit(t *testing.T) {
 	var found bool
 	for series.Next() {
 		i := series.At().Iterator()
-		for i.Next() {
+		for i.Next() == chunkenc.ValFloat {
 			_, v := i.At()
 			require.Equal(t, 1.0, v)
 			found = true

@@ -40,14 +40,16 @@ for dir in ${DIRS}; do
 		-I="${PROM_PATH}" \
 		-I="${GRPC_GATEWAY_ROOT}/third_party/googleapis" \
 		./*.proto
+	protoc --gogofast_out=Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,paths=source_relative:. -I=. \
+		-I="${GOGOPROTO_PATH}" \
+		./io/prometheus/client/*.proto
 	sed -i.bak -E 's/import _ \"github.com\/gogo\/protobuf\/gogoproto\"//g' -- *.pb.go
 	sed -i.bak -E 's/import _ \"google\/protobuf\"//g' -- *.pb.go
 	sed -i.bak -E 's/\t_ \"google\/protobuf\"//g' -- *.pb.go
 	sed -i.bak -E 's/golang\/protobuf\/descriptor/gogo\/protobuf\/protoc-gen-gogo\/descriptor/g' -- *.go
 	sed -i.bak -E 's/golang\/protobuf/gogo\/protobuf/g' -- *.go
 	rm -f -- *.bak
-	goimports -w ./*.go
+	goimports -w ./*.go ./io/prometheus/client/*.go
 	popd
 done

@@ -14,8 +14,10 @@
 package storage

 import (
+	"fmt"
 	"math"

+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )

@@ -25,8 +27,8 @@ type BufferedSeriesIterator struct {
 	buf   *sampleRing
 	delta int64

 	lastTime  int64
-	ok bool
+	valueType chunkenc.ValueType
 }

 // NewBuffer returns a new iterator that buffers the values within the time range
@@ -39,6 +41,7 @@ func NewBuffer(delta int64) *BufferedSeriesIterator {
 // NewBufferIterator returns a new iterator that buffers the values within the
 // time range of the current element and the duration of delta before.
 func NewBufferIterator(it chunkenc.Iterator, delta int64) *BufferedSeriesIterator {
+	// TODO(codesome): based on encoding, allocate different buffer.
 	bit := &BufferedSeriesIterator{
 		buf:   newSampleRing(delta, 16),
 		delta: delta,
@@ -53,10 +56,9 @@ func NewBufferIterator(it chunkenc.Iterator, delta int64) *BufferedSeriesIterator
 func (b *BufferedSeriesIterator) Reset(it chunkenc.Iterator) {
 	b.it = it
 	b.lastTime = math.MinInt64
-	b.ok = true
 	b.buf.reset()
 	b.buf.delta = b.delta
-	it.Next()
+	b.valueType = it.Next()
 }

 // ReduceDelta lowers the buffered time delta, for the current SeriesIterator only.
@@ -66,8 +68,9 @@ func (b *BufferedSeriesIterator) ReduceDelta(delta int64) bool {

 // PeekBack returns the nth previous element of the iterator. If there is none buffered,
 // ok is false.
-func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool) {
-	return b.buf.nthLast(n)
+func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, h *histogram.Histogram, ok bool) {
+	s, ok := b.buf.nthLast(n)
+	return s.t, s.v, s.h, ok
 }

 // Buffer returns an iterator over the buffered data. Invalidates previously
@@ -77,63 +80,96 @@ func (b *BufferedSeriesIterator) Buffer() chunkenc.Iterator {
 }

 // Seek advances the iterator to the element at time t or greater.
-func (b *BufferedSeriesIterator) Seek(t int64) bool {
+func (b *BufferedSeriesIterator) Seek(t int64) chunkenc.ValueType {
 	t0 := t - b.buf.delta

 	// If the delta would cause us to seek backwards, preserve the buffer
 	// and just continue regular advancement while filling the buffer on the way.
-	if b.ok && t0 > b.lastTime {
+	if b.valueType != chunkenc.ValNone && t0 > b.lastTime {
 		b.buf.reset()

-		b.ok = b.it.Seek(t0)
-		if !b.ok {
-			return false
+		b.valueType = b.it.Seek(t0)
+		switch b.valueType {
+		case chunkenc.ValNone:
+			return chunkenc.ValNone
+		case chunkenc.ValFloat:
+			b.lastTime, _ = b.At()
+		case chunkenc.ValHistogram:
+			b.lastTime, _ = b.AtHistogram()
+		case chunkenc.ValFloatHistogram:
+			b.lastTime, _ = b.AtFloatHistogram()
+		default:
+			panic(fmt.Errorf("BufferedSeriesIterator: unknown value type %v", b.valueType))
 		}
-		b.lastTime, _ = b.At()
 	}

 	if b.lastTime >= t {
-		return true
+		return b.valueType
 	}
-	for b.Next() {
-		if b.lastTime >= t {
-			return true
+	for {
+		if b.valueType = b.Next(); b.valueType == chunkenc.ValNone || b.lastTime >= t {
+			return b.valueType
 		}
 	}
-
-	return false
 }

 // Next advances the iterator to the next element.
-func (b *BufferedSeriesIterator) Next() bool {
-	if !b.ok {
-		return false
-	}
-
+func (b *BufferedSeriesIterator) Next() chunkenc.ValueType {
 	// Add current element to buffer before advancing.
-	b.buf.add(b.it.At())
-
-	b.ok = b.it.Next()
-	if b.ok {
-		b.lastTime, _ = b.At()
+	switch b.valueType {
+	case chunkenc.ValNone:
+		return chunkenc.ValNone
+	case chunkenc.ValFloat:
+		t, v := b.it.At()
+		b.buf.add(sample{t: t, v: v})
+	case chunkenc.ValHistogram:
+		t, h := b.it.AtHistogram()
+		b.buf.add(sample{t: t, h: h})
+	case chunkenc.ValFloatHistogram:
+		t, fh := b.it.AtFloatHistogram()
+		b.buf.add(sample{t: t, fh: fh})
+	default:
+		panic(fmt.Errorf("BufferedSeriesIterator: unknown value type %v", b.valueType))
 	}

-	return b.ok
+	b.valueType = b.it.Next()
+	if b.valueType != chunkenc.ValNone {
+		b.lastTime = b.AtT()
+	}
+	return b.valueType
 }

-// At returns the current element of the iterator.
+// At returns the current float element of the iterator.
 func (b *BufferedSeriesIterator) At() (int64, float64) {
 	return b.it.At()
 }

+// AtHistogram returns the current histogram element of the iterator.
+func (b *BufferedSeriesIterator) AtHistogram() (int64, *histogram.Histogram) {
+	return b.it.AtHistogram()
+}
+
+// AtFloatHistogram returns the current float-histogram element of the iterator.
+func (b *BufferedSeriesIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	return b.it.AtFloatHistogram()
+}
+
+// AtT returns the current timestamp of the iterator.
+func (b *BufferedSeriesIterator) AtT() int64 {
+	return b.it.AtT()
+}
+
 // Err returns the last encountered error.
|
// Err returns the last encountered error.
|
||||||
func (b *BufferedSeriesIterator) Err() error {
|
func (b *BufferedSeriesIterator) Err() error {
|
||||||
return b.it.Err()
|
return b.it.Err()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// TODO(beorn7): Consider having different sample types for different value types.
|
||||||
type sample struct {
|
type sample struct {
|
||||||
t int64
|
t int64
|
||||||
v float64
|
v float64
|
||||||
|
h *histogram.Histogram
|
||||||
|
fh *histogram.FloatHistogram
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s sample) T() int64 {
|
func (s sample) T() int64 {
|
||||||
|
@ -144,6 +180,25 @@ func (s sample) V() float64 {
|
||||||
return s.v
|
return s.v
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (s sample) H() *histogram.Histogram {
|
||||||
|
return s.h
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s sample) FH() *histogram.FloatHistogram {
|
||||||
|
return s.fh
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s sample) Type() chunkenc.ValueType {
|
||||||
|
switch {
|
||||||
|
case s.h != nil:
|
||||||
|
return chunkenc.ValHistogram
|
||||||
|
case s.fh != nil:
|
||||||
|
return chunkenc.ValFloatHistogram
|
||||||
|
default:
|
||||||
|
return chunkenc.ValFloat
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
type sampleRing struct {
|
type sampleRing struct {
|
||||||
delta int64
|
delta int64
|
||||||
|
|
||||||
|
@ -176,17 +231,36 @@ func (r *sampleRing) iterator() chunkenc.Iterator {
|
||||||
}
|
}
|
||||||
|
|
||||||
type sampleRingIterator struct {
|
type sampleRingIterator struct {
|
||||||
r *sampleRing
|
r *sampleRing
|
||||||
i int
|
i int
|
||||||
|
t int64
|
||||||
|
v float64
|
||||||
|
h *histogram.Histogram
|
||||||
|
fh *histogram.FloatHistogram
|
||||||
}
|
}
|
||||||
|
|
||||||
func (it *sampleRingIterator) Next() bool {
|
func (it *sampleRingIterator) Next() chunkenc.ValueType {
|
||||||
it.i++
|
it.i++
|
||||||
return it.i < it.r.l
|
if it.i >= it.r.l {
|
||||||
|
return chunkenc.ValNone
|
||||||
|
}
|
||||||
|
s := it.r.at(it.i)
|
||||||
|
it.t = s.t
|
||||||
|
switch {
|
||||||
|
case s.h != nil:
|
||||||
|
it.h = s.h
|
||||||
|
return chunkenc.ValHistogram
|
||||||
|
case s.fh != nil:
|
||||||
|
it.fh = s.fh
|
||||||
|
return chunkenc.ValFloatHistogram
|
||||||
|
default:
|
||||||
|
it.v = s.v
|
||||||
|
return chunkenc.ValFloat
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (it *sampleRingIterator) Seek(int64) bool {
|
func (it *sampleRingIterator) Seek(int64) chunkenc.ValueType {
|
||||||
return false
|
return chunkenc.ValNone
|
||||||
}
|
}
|
||||||
|
|
||||||
func (it *sampleRingIterator) Err() error {
|
func (it *sampleRingIterator) Err() error {
|
||||||
|
@ -194,18 +268,32 @@ func (it *sampleRingIterator) Err() error {
|
||||||
}
|
}
|
||||||
|
|
||||||
func (it *sampleRingIterator) At() (int64, float64) {
|
func (it *sampleRingIterator) At() (int64, float64) {
|
||||||
return it.r.at(it.i)
|
return it.t, it.v
|
||||||
}
|
}
|
||||||
|
|
||||||
func (r *sampleRing) at(i int) (int64, float64) {
|
func (it *sampleRingIterator) AtHistogram() (int64, *histogram.Histogram) {
|
||||||
|
return it.t, it.h
|
||||||
|
}
|
||||||
|
|
||||||
|
func (it *sampleRingIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
|
||||||
|
if it.fh == nil {
|
||||||
|
return it.t, it.h.ToFloat()
|
||||||
|
}
|
||||||
|
return it.t, it.fh
|
||||||
|
}
|
||||||
|
|
||||||
|
func (it *sampleRingIterator) AtT() int64 {
|
||||||
|
return it.t
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *sampleRing) at(i int) sample {
|
||||||
j := (r.f + i) % len(r.buf)
|
j := (r.f + i) % len(r.buf)
|
||||||
s := r.buf[j]
|
return r.buf[j]
|
||||||
return s.t, s.v
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// add adds a sample to the ring buffer and frees all samples that fall
|
// add adds a sample to the ring buffer and frees all samples that fall
|
||||||
// out of the delta range.
|
// out of the delta range.
|
||||||
func (r *sampleRing) add(t int64, v float64) {
|
func (r *sampleRing) add(s sample) {
|
||||||
l := len(r.buf)
|
l := len(r.buf)
|
||||||
// Grow the ring buffer if it fits no more elements.
|
// Grow the ring buffer if it fits no more elements.
|
||||||
if l == r.l {
|
if l == r.l {
|
||||||
|
@ -224,11 +312,11 @@ func (r *sampleRing) add(t int64, v float64) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
r.buf[r.i] = sample{t: t, v: v}
|
r.buf[r.i] = s
|
||||||
r.l++
|
r.l++
|
||||||
|
|
||||||
// Free head of the buffer of samples that just fell out of the range.
|
// Free head of the buffer of samples that just fell out of the range.
|
||||||
tmin := t - r.delta
|
tmin := s.t - r.delta
|
||||||
for r.buf[r.f].t < tmin {
|
for r.buf[r.f].t < tmin {
|
||||||
r.f++
|
r.f++
|
||||||
if r.f >= l {
|
if r.f >= l {
|
||||||
|
@ -264,12 +352,11 @@ func (r *sampleRing) reduceDelta(delta int64) bool {
|
||||||
}
|
}
|
||||||
|
|
||||||
// nthLast returns the nth most recent element added to the ring.
|
// nthLast returns the nth most recent element added to the ring.
|
||||||
func (r *sampleRing) nthLast(n int) (int64, float64, bool) {
|
func (r *sampleRing) nthLast(n int) (sample, bool) {
|
||||||
if n > r.l {
|
if n > r.l {
|
||||||
return 0, 0, false
|
return sample{}, false
|
||||||
}
|
}
|
||||||
t, v := r.at(r.l - n)
|
return r.at(r.l - n), true
|
||||||
return t, v, true
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (r *sampleRing) samples() []sample {
|
func (r *sampleRing) samples() []sample {
|
||||||
|
|
|
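The core change in the hunks above is that `Next` and `Seek` now report *which kind* of sample the iterator points at instead of a bare boolean, and callers dispatch on that type. A minimal self-contained sketch of that pattern (illustrative names only; this is not the real `chunkenc` package, and `point`/`sliceIterator` are invented for the example):

```go
package main

import "fmt"

// ValueType mimics chunkenc.ValueType: Next/Seek return the kind of the
// current sample, with ValNone meaning the iterator is exhausted.
type ValueType int

const (
	ValNone ValueType = iota
	ValFloat
	ValHistogram
)

type point struct {
	t    int64
	v    float64
	hist bool // stands in for a real histogram payload
}

type sliceIterator struct {
	points []point
	i      int
}

// Next advances and returns the type of the new current sample.
func (it *sliceIterator) Next() ValueType {
	it.i++
	if it.i >= len(it.points) {
		return ValNone
	}
	if it.points[it.i].hist {
		return ValHistogram
	}
	return ValFloat
}

func (it *sliceIterator) At() (int64, float64) {
	return it.points[it.i].t, it.points[it.i].v
}

func main() {
	it := &sliceIterator{i: -1, points: []point{{t: 1, v: 2}, {t: 2, hist: true}, {t: 3, v: 4}}}
	// Callers switch on the returned type, mirroring the switch statements
	// added to BufferedSeriesIterator above.
	for vt := it.Next(); vt != ValNone; vt = it.Next() {
		switch vt {
		case ValFloat:
			t, v := it.At()
			fmt.Printf("float t=%d v=%g\n", t, v)
		case ValHistogram:
			fmt.Println("histogram sample")
		}
	}
}
```

The benefit over the old boolean protocol is that mixed float/histogram series can be iterated with a single loop and no out-of-band type query.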
@@ -18,6 +18,9 @@ import (
 	"testing"
 
 	"github.com/stretchr/testify/require"
+
+	"github.com/prometheus/prometheus/model/histogram"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )
 
 func TestSampleRing(t *testing.T) {
@@ -64,7 +67,7 @@ func TestSampleRing(t *testing.T) {
 	}
 
 	for i, s := range input {
-		r.add(s.t, s.v)
+		r.add(s)
 		buffered := r.samples()
 
 		for _, sold := range input[:i] {
@@ -92,7 +95,7 @@ func TestBufferedSeriesIterator(t *testing.T) {
 	bufferEq := func(exp []sample) {
 		var b []sample
 		bit := it.Buffer()
-		for bit.Next() {
+		for bit.Next() == chunkenc.ValFloat {
 			t, v := bit.At()
 			b = append(b, sample{t: t, v: v})
 		}
@@ -104,7 +107,7 @@ func TestBufferedSeriesIterator(t *testing.T) {
 		require.Equal(t, ev, v, "value mismatch")
 	}
 	prevSampleEq := func(ets int64, ev float64, eok bool) {
-		ts, v, ok := it.PeekBack(1)
+		ts, v, _, ok := it.PeekBack(1)
 		require.Equal(t, eok, ok, "exist mismatch")
 		require.Equal(t, ets, ts, "timestamp mismatch")
 		require.Equal(t, ev, v, "value mismatch")
@@ -121,35 +124,35 @@ func TestBufferedSeriesIterator(t *testing.T) {
 		sample{t: 101, v: 10},
 	}), 2)
 
-	require.True(t, it.Seek(-123), "seek failed")
+	require.Equal(t, chunkenc.ValFloat, it.Seek(-123), "seek failed")
 	sampleEq(1, 2)
 	prevSampleEq(0, 0, false)
 	bufferEq(nil)
 
-	require.True(t, it.Next(), "next failed")
+	require.Equal(t, chunkenc.ValFloat, it.Next(), "next failed")
 	sampleEq(2, 3)
 	prevSampleEq(1, 2, true)
 	bufferEq([]sample{{t: 1, v: 2}})
 
-	require.True(t, it.Next(), "next failed")
-	require.True(t, it.Next(), "next failed")
-	require.True(t, it.Next(), "next failed")
+	require.Equal(t, chunkenc.ValFloat, it.Next(), "next failed")
+	require.Equal(t, chunkenc.ValFloat, it.Next(), "next failed")
+	require.Equal(t, chunkenc.ValFloat, it.Next(), "next failed")
 	sampleEq(5, 6)
 	prevSampleEq(4, 5, true)
 	bufferEq([]sample{{t: 2, v: 3}, {t: 3, v: 4}, {t: 4, v: 5}})
 
-	require.True(t, it.Seek(5), "seek failed")
+	require.Equal(t, chunkenc.ValFloat, it.Seek(5), "seek failed")
 	sampleEq(5, 6)
 	prevSampleEq(4, 5, true)
 	bufferEq([]sample{{t: 2, v: 3}, {t: 3, v: 4}, {t: 4, v: 5}})
 
-	require.True(t, it.Seek(101), "seek failed")
+	require.Equal(t, chunkenc.ValFloat, it.Seek(101), "seek failed")
 	sampleEq(101, 10)
 	prevSampleEq(100, 9, true)
 	bufferEq([]sample{{t: 99, v: 8}, {t: 100, v: 9}})
 
-	require.False(t, it.Next(), "next succeeded unexpectedly")
-	require.False(t, it.Seek(1024), "seek succeeded unexpectedly")
+	require.Equal(t, chunkenc.ValNone, it.Next(), "next succeeded unexpectedly")
+	require.Equal(t, chunkenc.ValNone, it.Seek(1024), "seek succeeded unexpectedly")
 }
 
 // At() should not be called once Next() returns false.
@@ -157,14 +160,19 @@ func TestBufferedSeriesIteratorNoBadAt(t *testing.T) {
 	done := false
 
 	m := &mockSeriesIterator{
-		seek: func(int64) bool { return false },
+		seek: func(int64) chunkenc.ValueType { return chunkenc.ValNone },
 		at: func() (int64, float64) {
 			require.False(t, done, "unexpectedly done")
 			done = true
 			return 0, 0
 		},
-		next: func() bool { return !done },
-		err:  func() error { return nil },
+		next: func() chunkenc.ValueType {
+			if done {
+				return chunkenc.ValNone
+			}
+			return chunkenc.ValFloat
+		},
+		err: func() error { return nil },
 	}
 
 	it := NewBufferIterator(m, 60)
@@ -180,23 +188,35 @@ func BenchmarkBufferedSeriesIterator(b *testing.B) {
 	b.ReportAllocs()
 	b.ResetTimer()
 
-	for it.Next() {
+	for it.Next() != chunkenc.ValNone {
 		// scan everything
 	}
 	require.NoError(b, it.Err())
 }
 
 type mockSeriesIterator struct {
-	seek func(int64) bool
+	seek func(int64) chunkenc.ValueType
 	at   func() (int64, float64)
-	next func() bool
+	next func() chunkenc.ValueType
 	err  func() error
}
 
-func (m *mockSeriesIterator) Seek(t int64) bool     { return m.seek(t) }
+func (m *mockSeriesIterator) Seek(t int64) chunkenc.ValueType { return m.seek(t) }
 func (m *mockSeriesIterator) At() (int64, float64)            { return m.at() }
-func (m *mockSeriesIterator) Next() bool            { return m.next() }
+func (m *mockSeriesIterator) Next() chunkenc.ValueType        { return m.next() }
 func (m *mockSeriesIterator) Err() error                      { return m.err() }
 
+func (m *mockSeriesIterator) AtHistogram() (int64, *histogram.Histogram) {
+	return 0, nil // Not really mocked.
+}
+
+func (m *mockSeriesIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	return 0, nil // Not really mocked.
+}
+
+func (m *mockSeriesIterator) AtT() int64 {
+	return 0 // Not really mocked.
+}
+
 type fakeSeriesIterator struct {
 	nsamples int64
@@ -209,17 +229,35 @@ func newFakeSeriesIterator(nsamples, step int64) *fakeSeriesIterator {
 }
 
 func (it *fakeSeriesIterator) At() (int64, float64) {
-	return it.idx * it.step, 123 // value doesn't matter
+	return it.idx * it.step, 123 // Value doesn't matter.
 }
 
-func (it *fakeSeriesIterator) Next() bool {
+func (it *fakeSeriesIterator) AtHistogram() (int64, *histogram.Histogram) {
+	return it.idx * it.step, &histogram.Histogram{} // Value doesn't matter.
+}
+
+func (it *fakeSeriesIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	return it.idx * it.step, &histogram.FloatHistogram{} // Value doesn't matter.
+}
+
+func (it *fakeSeriesIterator) AtT() int64 {
+	return it.idx * it.step
+}
+
+func (it *fakeSeriesIterator) Next() chunkenc.ValueType {
 	it.idx++
-	return it.idx < it.nsamples
+	if it.idx >= it.nsamples {
+		return chunkenc.ValNone
+	}
+	return chunkenc.ValFloat
 }
 
-func (it *fakeSeriesIterator) Seek(t int64) bool {
+func (it *fakeSeriesIterator) Seek(t int64) chunkenc.ValueType {
 	it.idx = t / it.step
-	return it.idx < it.nsamples
+	if it.idx >= it.nsamples {
+		return chunkenc.ValNone
+	}
+	return chunkenc.ValFloat
 }
 
 func (it *fakeSeriesIterator) Err() error { return nil }
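`TestSampleRing` above exercises the delta-windowed ring buffer that backs the buffered iterator: every `add` appends the newest sample and evicts samples older than `newest timestamp - delta`. A simplified standalone sketch of that eviction rule (timestamps only, slice instead of a real ring; `ring` is an invented name, not the actual `sampleRing` implementation):

```go
package main

import "fmt"

// ring keeps only samples within `delta` time units of the newest sample,
// mirroring the tmin := s.t - r.delta eviction in sampleRing.add.
type ring struct {
	delta   int64
	samples []int64 // timestamps only, for brevity
}

func (r *ring) add(t int64) {
	r.samples = append(r.samples, t)
	tmin := t - r.delta
	// Drop samples that fell out of the window (strictly older than tmin).
	for len(r.samples) > 0 && r.samples[0] < tmin {
		r.samples = r.samples[1:]
	}
}

func main() {
	r := &ring{delta: 2}
	for _, t := range []int64{1, 2, 3, 5} {
		r.add(t)
	}
	fmt.Println(r.samples) // after adding t=5, only timestamps >= 3 remain
}
```

The real implementation reuses a fixed backing array with head/length indices to avoid allocations; the windowing logic is the same.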
@@ -21,6 +21,7 @@ import (
 	"github.com/prometheus/common/model"
 
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/metadata"
 	tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
@@ -173,6 +174,20 @@ func (f *fanoutAppender) AppendExemplar(ref SeriesRef, l labels.Labels, e exempl
 	return ref, nil
 }
 
+func (f *fanoutAppender) AppendHistogram(ref SeriesRef, l labels.Labels, t int64, h *histogram.Histogram) (SeriesRef, error) {
+	ref, err := f.primary.AppendHistogram(ref, l, t, h)
+	if err != nil {
+		return ref, err
+	}
+
+	for _, appender := range f.secondaries {
+		if _, err := appender.AppendHistogram(ref, l, t, h); err != nil {
+			return 0, err
+		}
+	}
+	return ref, nil
+}
+
 func (f *fanoutAppender) UpdateMetadata(ref SeriesRef, l labels.Labels, m metadata.Metadata) (SeriesRef, error) {
 	ref, err := f.primary.UpdateMetadata(ref, l, m)
 	if err != nil {
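The new `AppendHistogram` follows the same fanout shape as the existing append methods: write to the primary first, and forward to the secondaries only when that succeeded. A self-contained sketch of that control flow (the `store`/`fanoutAppend` names are illustrative, not the real storage API):

```go
package main

import (
	"errors"
	"fmt"
)

type store struct {
	name string
	fail bool
	data []int64
}

func (s *store) append(t int64) error {
	if s.fail {
		return errors.New(s.name + ": append failed")
	}
	s.data = append(s.data, t)
	return nil
}

// fanoutAppend mirrors fanoutAppender.AppendHistogram: a primary failure
// aborts before any secondary is touched; a secondary failure is surfaced
// after the primary write has already happened.
func fanoutAppend(primary *store, secondaries []*store, t int64) error {
	if err := primary.append(t); err != nil {
		return err
	}
	for _, sec := range secondaries {
		if err := sec.append(t); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	p := &store{name: "primary"}
	s1 := &store{name: "sec1"}
	if err := fanoutAppend(p, []*store{s1}, 42); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(p.data, s1.data)
}
```

Note the asymmetry this implies: secondaries can lag behind or diverge from the primary on partial failure, which is why fanout storage treats the primary as the source of truth.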
@@ -23,6 +23,7 @@ import (
 
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/util/teststorage"
 )
 
@@ -90,7 +91,7 @@ func TestFanout_SelectSorted(t *testing.T) {
 			seriesLabels := series.Labels()
 			labelsResult = seriesLabels
 			iterator := series.Iterator()
-			for iterator.Next() {
+			for iterator.Next() == chunkenc.ValFloat {
 				timestamp, value := iterator.At()
 				result[timestamp] = value
 			}
@@ -116,7 +117,7 @@ func TestFanout_SelectSorted(t *testing.T) {
 			seriesLabels := series.Labels()
 			labelsResult = seriesLabels
 			iterator := series.Iterator()
-			for iterator.Next() {
+			for iterator.Next() == chunkenc.ValFloat {
 				timestamp, value := iterator.At()
 				result[timestamp] = value
 			}
@@ -19,6 +19,7 @@ import (
 	"fmt"
 
 	"github.com/prometheus/prometheus/model/exemplar"
+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/metadata"
 	"github.com/prometheus/prometheus/tsdb/chunkenc"
@@ -35,11 +36,16 @@ var (
 	// ErrTooOldSample is when out of order support is enabled but the sample is outside the time window allowed.
 	ErrTooOldSample = errors.New("too old sample")
 	// ErrDuplicateSampleForTimestamp is when the sample has same timestamp but different value.
 	ErrDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")
 	ErrOutOfOrderExemplar          = errors.New("out of order exemplar")
 	ErrDuplicateExemplar           = errors.New("duplicate exemplar")
 	ErrExemplarLabelLength         = fmt.Errorf("label length for exemplar exceeds maximum of %d UTF-8 characters", exemplar.ExemplarMaxLabelSetLength)
 	ErrExemplarsDisabled           = fmt.Errorf("exemplar storage is disabled or max exemplars is less than or equal to 0")
+	ErrNativeHistogramsDisabled    = fmt.Errorf("native histograms are disabled")
+	ErrHistogramCountNotBigEnough    = errors.New("histogram's observation count should be at least the number of observations found in the buckets")
+	ErrHistogramNegativeBucketCount  = errors.New("histogram has a bucket whose observation count is negative")
+	ErrHistogramSpanNegativeOffset   = errors.New("histogram has a span whose offset is negative")
+	ErrHistogramSpansBucketsMismatch = errors.New("histogram spans specify different number of buckets than provided")
 )
 
 // SeriesRef is a generic series reference. In prometheus it is either a
@@ -210,6 +216,9 @@ func (f QueryableFunc) Querier(ctx context.Context, mint, maxt int64) (Querier,
 // It must be completed with a call to Commit or Rollback and must not be reused afterwards.
 //
 // Operations on the Appender interface are not goroutine-safe.
+//
+// The type of samples (float64, histogram, etc) appended for a given series must remain same within an Appender.
+// The behaviour is undefined if samples of different types are appended to the same series in a single Commit().
 type Appender interface {
 	// Append adds a sample pair for the given series.
 	// An optional series reference can be provided to accelerate calls.
@@ -230,7 +239,9 @@ type Appender interface {
 	// Rollback rolls back all modifications made in the appender so far.
 	// Appender has to be discarded after rollback.
 	Rollback() error
 
 	ExemplarAppender
+	HistogramAppender
 	MetadataUpdater
 }
 
@@ -240,7 +251,8 @@ type GetRef interface {
 	// Returns reference number that can be used to pass to Appender.Append(),
 	// and a set of labels that will not cause another copy when passed to Appender.Append().
 	// 0 means the appender does not have a reference to this series.
-	GetRef(lset labels.Labels) (SeriesRef, labels.Labels)
+	// hash should be a hash of lset.
+	GetRef(lset labels.Labels, hash uint64) (SeriesRef, labels.Labels)
 }
 
 // ExemplarAppender provides an interface for adding samples to exemplar storage, which
@@ -260,6 +272,22 @@ type ExemplarAppender interface {
 	AppendExemplar(ref SeriesRef, l labels.Labels, e exemplar.Exemplar) (SeriesRef, error)
 }
 
+// HistogramAppender provides an interface for appending histograms to the storage.
+type HistogramAppender interface {
+	// AppendHistogram adds a histogram for the given series labels. An
+	// optional reference number can be provided to accelerate calls. A
+	// reference number is returned which can be used to add further
+	// histograms in the same or later transactions. Returned reference
+	// numbers are ephemeral and may be rejected in calls to Append() at any
+	// point. Adding the sample via Append() returns a new reference number.
+	// If the reference is 0, it must not be used for caching.
+	//
+	// For efficiency reasons, the histogram is passed as a
+	// pointer. AppendHistogram won't mutate the histogram, but in turn
+	// depends on the caller to not mutate it either.
+	AppendHistogram(ref SeriesRef, l labels.Labels, t int64, h *histogram.Histogram) (SeriesRef, error)
+}
+
 // MetadataUpdater provides an interface for associating metadata to stored series.
 type MetadataUpdater interface {
 	// UpdateMetadata updates a metadata entry for the given series and labels.
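The new error variables above name the invariants an incoming native histogram must satisfy, e.g. the total observation count must be at least the sum of the bucket counts, and no bucket count may be negative. An illustrative check for just those two invariants (deliberately simplified to a flat bucket slice; the real Prometheus validation also covers spans, offsets, and schemas and lives elsewhere in the codebase):

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errCountTooSmall  = errors.New("histogram's observation count should be at least the number of observations found in the buckets")
	errNegativeBucket = errors.New("histogram has a bucket whose observation count is negative")
)

// validate checks the two bucket-level invariants named above.
func validate(count int64, buckets []int64) error {
	var sum int64
	for _, b := range buckets {
		if b < 0 {
			return errNegativeBucket
		}
		sum += b
	}
	// count may exceed the bucket sum (e.g. observations in no bucket),
	// but must never be smaller than it.
	if count < sum {
		return errCountTooSmall
	}
	return nil
}

func main() {
	fmt.Println(validate(10, []int64{3, 4, 2})) // ok: 10 >= 9
	fmt.Println(validate(5, []int64{3, 4}))     // count smaller than bucket sum
}
```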
@ -16,6 +16,7 @@ package storage
|
||||||
import (
|
import (
|
||||||
"math"
|
"math"
|
||||||
|
|
||||||
|
"github.com/prometheus/prometheus/model/histogram"
|
||||||
"github.com/prometheus/prometheus/tsdb/chunkenc"
|
"github.com/prometheus/prometheus/tsdb/chunkenc"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
@ -24,12 +25,18 @@ type MemoizedSeriesIterator struct {
|
||||||
it chunkenc.Iterator
|
it chunkenc.Iterator
|
||||||
delta int64
|
delta int64
|
||||||
|
|
||||||
lastTime int64
|
lastTime int64
|
||||||
ok bool
|
valueType chunkenc.ValueType
|
||||||
|
|
||||||
// Keep track of the previously returned value.
|
// Keep track of the previously returned value.
|
||||||
prevTime int64
|
prevTime int64
|
||||||
prevValue float64
|
prevValue float64
|
||||||
|
prevHistogram *histogram.Histogram
|
||||||
|
prevFloatHistogram *histogram.FloatHistogram
|
||||||
|
// TODO(beorn7): MemoizedSeriesIterator is currently only used by the
|
||||||
|
// PromQL engine, which only works with FloatHistograms. For better
|
||||||
|
// performance, we could change MemoizedSeriesIterator to also only
|
||||||
|
// handle FloatHistograms.
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewMemoizedEmptyIterator is like NewMemoizedIterator but it's initialised with an empty iterator.
|
// NewMemoizedEmptyIterator is like NewMemoizedIterator but it's initialised with an empty iterator.
|
||||||
|
@ -53,70 +60,93 @@ func NewMemoizedIterator(it chunkenc.Iterator, delta int64) *MemoizedSeriesItera
|
||||||
func (b *MemoizedSeriesIterator) Reset(it chunkenc.Iterator) {
|
func (b *MemoizedSeriesIterator) Reset(it chunkenc.Iterator) {
|
||||||
b.it = it
|
b.it = it
|
||||||
b.lastTime = math.MinInt64
|
b.lastTime = math.MinInt64
|
||||||
b.ok = true
|
|
||||||
b.prevTime = math.MinInt64
|
b.prevTime = math.MinInt64
|
||||||
it.Next()
|
b.valueType = it.Next()
|
||||||
}
|
}
|
||||||
|
|
||||||
// PeekPrev returns the previous element of the iterator. If there is none buffered,
|
// PeekPrev returns the previous element of the iterator. If there is none buffered,
|
||||||
// ok is false.
|
// ok is false.
|
||||||
func (b *MemoizedSeriesIterator) PeekPrev() (t int64, v float64, ok bool) {
|
func (b *MemoizedSeriesIterator) PeekPrev() (t int64, v float64, h *histogram.Histogram, fh *histogram.FloatHistogram, ok bool) {
|
||||||
if b.prevTime == math.MinInt64 {
|
if b.prevTime == math.MinInt64 {
|
||||||
return 0, 0, false
|
return 0, 0, nil, nil, false
|
||||||
}
|
}
|
||||||
return b.prevTime, b.prevValue, true
|
return b.prevTime, b.prevValue, b.prevHistogram, b.prevFloatHistogram, true
|
||||||
}
|
}
|
||||||
|
|
||||||
// Seek advances the iterator to the element at time t or greater.
|
// Seek advances the iterator to the element at time t or greater.
|
||||||
func (b *MemoizedSeriesIterator) Seek(t int64) bool {
|
func (b *MemoizedSeriesIterator) Seek(t int64) chunkenc.ValueType {
|
||||||
t0 := t - b.delta
|
t0 := t - b.delta
|
||||||
|
|
||||||
if b.ok && t0 > b.lastTime {
|
if b.valueType != chunkenc.ValNone && t0 > b.lastTime {
|
||||||
// Reset the previously stored element because the seek advanced
|
// Reset the previously stored element because the seek advanced
|
||||||
// more than the delta.
|
 // more than the delta.
 		b.prevTime = math.MinInt64

-		b.ok = b.it.Seek(t0)
-		if !b.ok {
-			return false
+		b.valueType = b.it.Seek(t0)
+		if b.valueType == chunkenc.ValNone {
+			return chunkenc.ValNone
 		}
-		b.lastTime, _ = b.it.At()
+		b.lastTime = b.it.AtT()
 	}

 	if b.lastTime >= t {
-		return true
+		return b.valueType
 	}
-	for b.Next() {
+	for b.Next() != chunkenc.ValNone {
 		if b.lastTime >= t {
-			return true
+			return b.valueType
 		}
 	}

-	return false
+	return chunkenc.ValNone
 }

 // Next advances the iterator to the next element.
-func (b *MemoizedSeriesIterator) Next() bool {
-	if !b.ok {
-		return false
-	}
-
+func (b *MemoizedSeriesIterator) Next() chunkenc.ValueType {
 	// Keep track of the previous element.
-	b.prevTime, b.prevValue = b.it.At()
-
-	b.ok = b.it.Next()
-	if b.ok {
-		b.lastTime, _ = b.it.At()
-	}
-
-	return b.ok
+	switch b.valueType {
+	case chunkenc.ValNone:
+		return chunkenc.ValNone
+	case chunkenc.ValFloat:
+		b.prevTime, b.prevValue = b.it.At()
+		b.prevHistogram = nil
+		b.prevFloatHistogram = nil
+	case chunkenc.ValHistogram:
+		b.prevValue = 0
+		b.prevTime, b.prevHistogram = b.it.AtHistogram()
+		_, b.prevFloatHistogram = b.it.AtFloatHistogram()
+	case chunkenc.ValFloatHistogram:
+		b.prevValue = 0
+		b.prevHistogram = nil
+		b.prevTime, b.prevFloatHistogram = b.it.AtFloatHistogram()
+	}
+
+	b.valueType = b.it.Next()
+	if b.valueType != chunkenc.ValNone {
+		b.lastTime = b.it.AtT()
+	}
+	return b.valueType
 }

-// At returns the current element of the iterator.
+// At returns the current float element of the iterator.
 func (b *MemoizedSeriesIterator) At() (int64, float64) {
 	return b.it.At()
 }

+// AtHistogram returns the current histogram element of the iterator.
+func (b *MemoizedSeriesIterator) AtHistogram() (int64, *histogram.Histogram) {
+	return b.it.AtHistogram()
+}
+
+// AtFloatHistogram returns the current float-histogram element of the iterator.
+func (b *MemoizedSeriesIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	return b.it.AtFloatHistogram()
+}
+
+// AtT returns the current timestamp of the iterator.
+func (b *MemoizedSeriesIterator) AtT() int64 {
+	return b.it.AtT()
+}
+
 // Err returns the last encountered error.
 func (b *MemoizedSeriesIterator) Err() error {
 	return b.it.Err()
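The hunk above replaces the iterator's boolean `ok` contract with a `chunkenc.ValueType` result, so callers switch on what kind of sample the iterator currently points at. A minimal, self-contained sketch of that iteration idiom (toy types, not the real `chunkenc` package):

```go
package main

import "fmt"

// ValueType mirrors the chunkenc.ValueType idea introduced by this change:
// Next/Seek report WHAT the iterator points at instead of a plain bool.
type ValueType int

const (
	ValNone ValueType = iota // exhausted, replaces the old `return false`
	ValFloat
	ValHistogram
)

// toyIterator walks a fixed sequence of sample kinds.
type toyIterator struct {
	i     int
	kinds []ValueType
}

func (it *toyIterator) Next() ValueType {
	if it.i >= len(it.kinds) {
		return ValNone
	}
	vt := it.kinds[it.i]
	it.i++
	return vt
}

// count shows the new iteration idiom: loop until ValNone, switch on type.
func count(it *toyIterator) (floats, hists int) {
	for vt := it.Next(); vt != ValNone; vt = it.Next() {
		switch vt {
		case ValFloat:
			floats++
		case ValHistogram:
			hists++
		}
	}
	return floats, hists
}

func main() {
	f, h := count(&toyIterator{kinds: []ValueType{ValFloat, ValHistogram, ValFloat}})
	fmt.Println(f, h) // 2 floats, 1 histogram
}
```

The three-way switch is what lets `MemoizedSeriesIterator.Next` clear the memoized fields that do not apply to the current sample kind.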
@@ -17,9 +17,12 @@ import (
 	"testing"

 	"github.com/stretchr/testify/require"
+
+	"github.com/prometheus/prometheus/tsdb/chunkenc"
 )

 func TestMemoizedSeriesIterator(t *testing.T) {
+	// TODO(beorn7): Include histograms in testing.
 	var it *MemoizedSeriesIterator

 	sampleEq := func(ets int64, ev float64) {
@@ -28,7 +31,7 @@ func TestMemoizedSeriesIterator(t *testing.T) {
 		require.Equal(t, ev, v, "value mismatch")
 	}
 	prevSampleEq := func(ets int64, ev float64, eok bool) {
-		ts, v, ok := it.PeekPrev()
+		ts, v, _, _, ok := it.PeekPrev()
 		require.Equal(t, eok, ok, "exist mismatch")
 		require.Equal(t, ets, ts, "timestamp mismatch")
 		require.Equal(t, ev, v, "value mismatch")
@@ -45,30 +48,30 @@ func TestMemoizedSeriesIterator(t *testing.T) {
 		sample{t: 101, v: 10},
 	}), 2)

-	require.True(t, it.Seek(-123), "seek failed")
+	require.Equal(t, it.Seek(-123), chunkenc.ValFloat, "seek failed")
 	sampleEq(1, 2)
 	prevSampleEq(0, 0, false)

-	require.True(t, it.Next(), "next failed")
+	require.Equal(t, it.Next(), chunkenc.ValFloat, "next failed")
 	sampleEq(2, 3)
 	prevSampleEq(1, 2, true)

-	require.True(t, it.Next(), "next failed")
-	require.True(t, it.Next(), "next failed")
-	require.True(t, it.Next(), "next failed")
+	require.Equal(t, it.Next(), chunkenc.ValFloat, "next failed")
+	require.Equal(t, it.Next(), chunkenc.ValFloat, "next failed")
+	require.Equal(t, it.Next(), chunkenc.ValFloat, "next failed")
 	sampleEq(5, 6)
 	prevSampleEq(4, 5, true)

-	require.True(t, it.Seek(5), "seek failed")
+	require.Equal(t, it.Seek(5), chunkenc.ValFloat, "seek failed")
 	sampleEq(5, 6)
 	prevSampleEq(4, 5, true)

-	require.True(t, it.Seek(101), "seek failed")
+	require.Equal(t, it.Seek(101), chunkenc.ValFloat, "seek failed")
 	sampleEq(101, 10)
 	prevSampleEq(100, 9, true)

-	require.False(t, it.Next(), "next succeeded unexpectedly")
-	require.False(t, it.Seek(1024), "seek succeeded unexpectedly")
+	require.Equal(t, it.Next(), chunkenc.ValNone, "next succeeded unexpectedly")
+	require.Equal(t, it.Seek(1024), chunkenc.ValNone, "seek succeeded unexpectedly")
 }

 func BenchmarkMemoizedSeriesIterator(b *testing.B) {
@@ -79,7 +82,7 @@ func BenchmarkMemoizedSeriesIterator(b *testing.B) {
 	b.ReportAllocs()
 	b.ResetTimer()

-	for it.Next() {
+	for it.Next() != chunkenc.ValNone {
 		// scan everything
 	}
 	require.NoError(b, it.Err())
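The `PeekPrev` assertions above exercise the memoization behavior: the wrapper remembers the sample it just stepped past so lookback needs no rewind. A self-contained sketch of that pattern (illustrative names and a boolean contract for brevity, not the Prometheus API):

```go
package main

import "fmt"

// memoIter iterates (timestamp, value) pairs and memoizes the previously
// returned sample so PeekPrev-style lookback is O(1).
type memoIter struct {
	samples [][2]float64 // (timestamp, value) pairs
	pos     int
	prevOK  bool
	prevT   float64
	prevV   float64
}

func (m *memoIter) Next() bool {
	if m.pos >= len(m.samples) {
		return false
	}
	if m.pos > 0 {
		// Memoize the sample we are about to step past.
		m.prevT, m.prevV = m.samples[m.pos-1][0], m.samples[m.pos-1][1]
		m.prevOK = true
	}
	m.pos++
	return true
}

// PeekPrev returns the memoized previous sample; ok is false before any
// previous sample exists, matching the first prevSampleEq(0, 0, false) above.
func (m *memoIter) PeekPrev() (float64, float64, bool) {
	return m.prevT, m.prevV, m.prevOK
}

func main() {
	it := &memoIter{samples: [][2]float64{{1, 2}, {2, 3}, {3, 4}}}
	it.Next() // at (1,2); no previous yet
	it.Next() // at (2,3); previous is (1,2)
	t, v, ok := it.PeekPrev()
	fmt.Println(t, v, ok)
}
```

In the real iterator the memoized state is per value type, which is why the updated `PeekPrev` returns five values instead of three.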
@@ -22,6 +22,7 @@ import (

 	"golang.org/x/exp/slices"

+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/tsdb/chunks"
@@ -442,7 +443,7 @@ type chainSampleIterator struct {
 	h samplesIteratorHeap

 	curr  chunkenc.Iterator
-	lastt int64
+	lastT int64
 }

 // NewChainSampleIterator returns a single iterator that iterates over the samples from the given iterators in a sorted
@@ -452,60 +453,82 @@ func NewChainSampleIterator(iterators []chunkenc.Iterator) chunkenc.Iterator {
 	return &chainSampleIterator{
 		iterators: iterators,
 		h:         nil,
-		lastt:     math.MinInt64,
+		lastT:     math.MinInt64,
 	}
 }

-func (c *chainSampleIterator) Seek(t int64) bool {
-	// No-op check
-	if c.curr != nil && c.lastt >= t {
-		return true
+func (c *chainSampleIterator) Seek(t int64) chunkenc.ValueType {
+	// No-op check.
+	if c.curr != nil && c.lastT >= t {
+		return c.curr.Seek(c.lastT)
 	}

 	c.h = samplesIteratorHeap{}
 	for _, iter := range c.iterators {
-		if iter.Seek(t) {
+		if iter.Seek(t) != chunkenc.ValNone {
 			heap.Push(&c.h, iter)
 		}
 	}
 	if len(c.h) > 0 {
 		c.curr = heap.Pop(&c.h).(chunkenc.Iterator)
-		c.lastt, _ = c.curr.At()
-		return true
+		c.lastT = c.curr.AtT()
+		return c.curr.Seek(c.lastT)
 	}
 	c.curr = nil
-	return false
+	return chunkenc.ValNone
 }

 func (c *chainSampleIterator) At() (t int64, v float64) {
 	if c.curr == nil {
-		panic("chainSampleIterator.At() called before first .Next() or after .Next() returned false.")
+		panic("chainSampleIterator.At called before first .Next or after .Next returned false.")
 	}
 	return c.curr.At()
 }

-func (c *chainSampleIterator) Next() bool {
+func (c *chainSampleIterator) AtHistogram() (int64, *histogram.Histogram) {
+	if c.curr == nil {
+		panic("chainSampleIterator.AtHistogram called before first .Next or after .Next returned false.")
+	}
+	return c.curr.AtHistogram()
+}
+
+func (c *chainSampleIterator) AtFloatHistogram() (int64, *histogram.FloatHistogram) {
+	if c.curr == nil {
+		panic("chainSampleIterator.AtFloatHistogram called before first .Next or after .Next returned false.")
+	}
+	return c.curr.AtFloatHistogram()
+}
+
+func (c *chainSampleIterator) AtT() int64 {
+	if c.curr == nil {
+		panic("chainSampleIterator.AtT called before first .Next or after .Next returned false.")
+	}
+	return c.curr.AtT()
+}
+
+func (c *chainSampleIterator) Next() chunkenc.ValueType {
 	if c.h == nil {
 		c.h = samplesIteratorHeap{}
 		// We call c.curr.Next() as the first thing below.
 		// So, we don't call Next() on it here.
 		c.curr = c.iterators[0]
 		for _, iter := range c.iterators[1:] {
-			if iter.Next() {
+			if iter.Next() != chunkenc.ValNone {
 				heap.Push(&c.h, iter)
 			}
 		}
 	}

 	if c.curr == nil {
-		return false
+		return chunkenc.ValNone
 	}

-	var currt int64
+	var currT int64
+	var currValueType chunkenc.ValueType
 	for {
-		if c.curr.Next() {
-			currt, _ = c.curr.At()
-			if currt == c.lastt {
+		currValueType = c.curr.Next()
+		if currValueType != chunkenc.ValNone {
+			currT = c.curr.AtT()
+			if currT == c.lastT {
 				// Ignoring sample for the same timestamp.
 				continue
 			}
@@ -516,7 +539,8 @@ func (c *chainSampleIterator) Next() bool {
 			}

 			// Check current iterator with the top of the heap.
-			if nextt, _ := c.h[0].At(); currt < nextt {
+			nextT := c.h[0].AtT()
+			if currT < nextT {
 				// Current iterator has smaller timestamp than the heap.
 				break
 			}
@@ -525,18 +549,19 @@ func (c *chainSampleIterator) Next() bool {
 		} else if len(c.h) == 0 {
 			// No iterator left to iterate.
 			c.curr = nil
-			return false
+			return chunkenc.ValNone
 		}

 		c.curr = heap.Pop(&c.h).(chunkenc.Iterator)
-		currt, _ = c.curr.At()
-		if currt != c.lastt {
+		currT = c.curr.AtT()
+		currValueType = c.curr.Seek(currT)
+		if currT != c.lastT {
 			break
 		}
 	}

-	c.lastt = currt
-	return true
+	c.lastT = currT
+	return currValueType
 }

 func (c *chainSampleIterator) Err() error {
@@ -553,9 +578,7 @@ func (h samplesIteratorHeap) Len() int { return len(h) }
 func (h samplesIteratorHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }

 func (h samplesIteratorHeap) Less(i, j int) bool {
-	at, _ := h[i].At()
-	bt, _ := h[j].At()
-	return at < bt
+	return h[i].AtT() < h[j].AtT()
 }

 func (h *samplesIteratorHeap) Push(x interface{}) {
@@ -23,6 +23,7 @@ import (

 	"github.com/stretchr/testify/require"

+	"github.com/prometheus/prometheus/model/histogram"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/tsdb/chunkenc"
 	"github.com/prometheus/prometheus/tsdb/tsdbutil"
@@ -62,116 +63,116 @@ func TestMergeQuerierWithChainMerger(t *testing.T) {
 		{
 			name: "one querier, two series",
 			querierSeries: [][]Series{{
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}},
 			expected: NewMockSeriesSet(
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			),
 		},
 		{
 			name: "two queriers, one different series each",
 			querierSeries: [][]Series{{
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}},
 			expected: NewMockSeriesSet(
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			),
 		},
 		{
 			name: "two time unsorted queriers, two series each",
 			querierSeries: [][]Series{{
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}, sample{6, 6}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}, sample{4, 4}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{4, 4, nil, nil}}),
 			}},
 			expected: NewMockSeriesSet(
 				NewListSeries(
 					labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}, sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}},
 				),
 				NewListSeries(
 					labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "five queriers, only two queriers have two time unsorted series each",
 			querierSeries: [][]Series{{}, {}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}, sample{6, 6}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}, sample{4, 4}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{4, 4, nil, nil}}),
 			}, {}},
 			expected: NewMockSeriesSet(
 				NewListSeries(
 					labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}, sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}},
 				),
 				NewListSeries(
 					labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "two queriers, only two queriers have two time unsorted series each, with 3 noop and one nil querier together",
 			querierSeries: [][]Series{{}, {}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}, sample{6, 6}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}, sample{4, 4}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{4, 4, nil, nil}}),
 			}, {}},
 			extraQueriers: []Querier{NoopQuerier(), NoopQuerier(), nil, NoopQuerier()},
 			expected: NewMockSeriesSet(
 				NewListSeries(
 					labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}, sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}},
 				),
 				NewListSeries(
 					labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "two queriers, with two series, one is overlapping",
 			querierSeries: [][]Series{{}, {}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 21}, sample{3, 31}, sample{5, 5}, sample{6, 6}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 21, nil, nil}, sample{3, 31, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 22}, sample{3, 32}}),
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}, sample{4, 4}}),
+				NewListSeries(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 22, nil, nil}, sample{3, 32, nil, nil}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{4, 4, nil, nil}}),
 			}, {}},
 			expected: NewMockSeriesSet(
 				NewListSeries(
 					labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 21}, sample{3, 31}, sample{5, 5}, sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 21, nil, nil}, sample{3, 31, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}},
 				),
 				NewListSeries(
 					labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "two queries, one with NaN samples series",
 			querierSeries: [][]Series{{
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN()}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN(), nil, nil}}),
 			}, {
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{1, 1}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{1, 1, nil, nil}}),
 			}},
 			expected: NewMockSeriesSet(
-				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN()}, sample{1, 1}}),
+				NewListSeries(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN(), nil, nil}, sample{1, 1, nil, nil}}),
 			),
 		},
 	} {
@@ -245,108 +246,108 @@ func TestMergeChunkQuerierWithNoVerticalChunkSeriesMerger(t *testing.T) {
 		{
 			name: "one querier, two series",
 			chkQuerierSeries: [][]ChunkSeries{{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			}},
 			expected: NewMockChunkSeriesSet(
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			),
 		},
 		{
 			name: "two secondaries, one different series each",
 			chkQuerierSeries: [][]ChunkSeries{{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
 			}, {
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			}},
 			expected: NewMockChunkSeriesSet(
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			),
 		},
 		{
 			name: "two secondaries, two not in time order series each",
 			chkQuerierSeries: [][]ChunkSeries{{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}}, []tsdbutil.Sample{sample{6, 6}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{6, 6, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			}, {
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}}, []tsdbutil.Sample{sample{4, 4}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}}, []tsdbutil.Sample{sample{4, 4, nil, nil}}),
 			}},
 			expected: NewMockChunkSeriesSet(
 				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}},
-					[]tsdbutil.Sample{sample{3, 3}},
-					[]tsdbutil.Sample{sample{5, 5}},
-					[]tsdbutil.Sample{sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}},
+					[]tsdbutil.Sample{sample{3, 3, nil, nil}},
+					[]tsdbutil.Sample{sample{5, 5, nil, nil}},
+					[]tsdbutil.Sample{sample{6, 6, nil, nil}},
 				),
 				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}},
-					[]tsdbutil.Sample{sample{2, 2}},
-					[]tsdbutil.Sample{sample{3, 3}},
-					[]tsdbutil.Sample{sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}},
+					[]tsdbutil.Sample{sample{2, 2, nil, nil}},
+					[]tsdbutil.Sample{sample{3, 3, nil, nil}},
+					[]tsdbutil.Sample{sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "five secondaries, only two have two not in time order series each",
 			chkQuerierSeries: [][]ChunkSeries{{}, {}, {
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}}, []tsdbutil.Sample{sample{6, 6}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{6, 6, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
 			}, {
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}}, []tsdbutil.Sample{sample{4, 4}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}}, []tsdbutil.Sample{sample{4, 4, nil, nil}}),
 			}, {}},
 			expected: NewMockChunkSeriesSet(
 				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-					[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}},
-					[]tsdbutil.Sample{sample{3, 3}},
-					[]tsdbutil.Sample{sample{5, 5}},
-					[]tsdbutil.Sample{sample{6, 6}},
+					[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}},
+					[]tsdbutil.Sample{sample{3, 3, nil, nil}},
+					[]tsdbutil.Sample{sample{5, 5, nil, nil}},
+					[]tsdbutil.Sample{sample{6, 6, nil, nil}},
 				),
 				NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"),
-					[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}},
-					[]tsdbutil.Sample{sample{2, 2}},
-					[]tsdbutil.Sample{sample{3, 3}},
-					[]tsdbutil.Sample{sample{4, 4}},
+					[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}},
+					[]tsdbutil.Sample{sample{2, 2, nil, nil}},
+					[]tsdbutil.Sample{sample{3, 3, nil, nil}},
+					[]tsdbutil.Sample{sample{4, 4, nil, nil}},
 				),
 			),
 		},
 		{
 			name: "two secondaries, with two not in time order series each, with 3 noop queries and one nil together",
name: "two secondaries, with two not in time order series each, with 3 noop queries and one nil together",
|
||||||
chkQuerierSeries: [][]ChunkSeries{{
|
chkQuerierSeries: [][]ChunkSeries{{
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5}}, []tsdbutil.Sample{sample{6, 6}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{6, 6, nil, nil}}),
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}}, []tsdbutil.Sample{sample{2, 2}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}, []tsdbutil.Sample{sample{2, 2, nil, nil}}),
|
||||||
}, {
|
}, {
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3}}, []tsdbutil.Sample{sample{4, 4}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{3, 3, nil, nil}}, []tsdbutil.Sample{sample{4, 4, nil, nil}}),
|
||||||
}},
|
}},
|
||||||
extraQueriers: []ChunkQuerier{NoopChunkedQuerier(), NoopChunkedQuerier(), nil, NoopChunkedQuerier()},
|
extraQueriers: []ChunkQuerier{NoopChunkedQuerier(), NoopChunkedQuerier(), nil, NoopChunkedQuerier()},
|
||||||
expected: NewMockChunkSeriesSet(
|
expected: NewMockChunkSeriesSet(
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
|
NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
|
||||||
[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}},
|
[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{3, 3}},
|
[]tsdbutil.Sample{sample{3, 3, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{5, 5}},
|
[]tsdbutil.Sample{sample{5, 5, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{6, 6}},
|
[]tsdbutil.Sample{sample{6, 6, nil, nil}},
|
||||||
),
|
),
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"),
|
||||||
[]tsdbutil.Sample{sample{0, 0}, sample{1, 1}},
|
[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{2, 2}},
|
[]tsdbutil.Sample{sample{2, 2, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{3, 3}},
|
[]tsdbutil.Sample{sample{3, 3, nil, nil}},
|
||||||
[]tsdbutil.Sample{sample{4, 4}},
|
[]tsdbutil.Sample{sample{4, 4, nil, nil}},
|
||||||
),
|
),
|
||||||
),
|
),
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
name: "two queries, one with NaN samples series",
|
name: "two queries, one with NaN samples series",
|
||||||
chkQuerierSeries: [][]ChunkSeries{{
|
chkQuerierSeries: [][]ChunkSeries{{
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN()}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN(), nil, nil}}),
|
||||||
}, {
|
}, {
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{1, 1}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{1, 1, nil, nil}}),
|
||||||
}},
|
}},
|
||||||
expected: NewMockChunkSeriesSet(
|
expected: NewMockChunkSeriesSet(
|
||||||
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN()}}, []tsdbutil.Sample{sample{1, 1}}),
|
NewListChunkSeriesFromSamples(labels.FromStrings("foo", "bar"), []tsdbutil.Sample{sample{0, math.NaN(), nil, nil}}, []tsdbutil.Sample{sample{1, 1, nil, nil}}),
|
||||||
),
|
),
|
||||||
},
|
},
|
||||||
} {
|
} {
|
||||||
@@ -384,6 +385,22 @@ func TestMergeChunkQuerierWithNoVerticalChunkSeriesMerger(t *testing.T) {
 func TestCompactingChunkSeriesMerger(t *testing.T) {
 	m := NewCompactingChunkSeriesMerger(ChainedSeriesMerge)
 
+	// histogramSample returns a histogram that is unique to the ts.
+	histogramSample := func(ts int64) sample {
+		idx := ts + 1
+		return sample{t: ts, h: &histogram.Histogram{
+			Schema:          2,
+			ZeroThreshold:   0.001,
+			ZeroCount:       2 * uint64(idx),
+			Count:           5 * uint64(idx),
+			Sum:             12.34 * float64(idx),
+			PositiveSpans:   []histogram.Span{{Offset: 1, Length: 2}, {Offset: 2, Length: 1}},
+			NegativeSpans:   []histogram.Span{{Offset: 2, Length: 1}, {Offset: 1, Length: 2}},
+			PositiveBuckets: []int64{1 * idx, -1 * idx, 3 * idx},
+			NegativeBuckets: []int64{1 * idx, 2 * idx, 3 * idx},
+		}}
+	}
+
 	for _, tc := range []struct {
 		name     string
 		input    []ChunkSeries
@@ -399,9 +416,9 @@ func TestCompactingChunkSeriesMerger(t *testing.T) {
 		{
 			name: "single series",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
 		},
 		{
 			name: "two empty series",
@@ -414,55 +431,55 @@ func TestCompactingChunkSeriesMerger(t *testing.T) {
 		{
 			name: "two non overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{5, 5}}, []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 		},
 		{
 			name: "two overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{8, 8}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{8, 8, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{7, 7}, sample{8, 8}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{7, 7, nil, nil}, sample{8, 8, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 		},
 		{
 			name: "two duplicated",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{5, 5}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
 		},
 		{
 			name: "three overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{6, 6}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0}, sample{4, 4}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{4, 4, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}, sample{5, 5}, sample{6, 6}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}, sample{5, 5, nil, nil}, sample{6, 6, nil, nil}}),
 		},
 		{
 			name: "three in chained overlap",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{4, 4}, sample{6, 66}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{6, 6}, sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{4, 4, nil, nil}, sample{6, 66, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{6, 6, nil, nil}, sample{10, 10, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}, sample{5, 5}, sample{6, 66}, sample{10, 10}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}, sample{5, 5, nil, nil}, sample{6, 66, nil, nil}, sample{10, 10, nil, nil}}),
 		},
 		{
 			name: "three in chained overlap complex",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0}, sample{5, 5}}, []tsdbutil.Sample{sample{10, 10}, sample{15, 15}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{20, 20}}, []tsdbutil.Sample{sample{25, 25}, sample{30, 30}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{18, 18}, sample{26, 26}}, []tsdbutil.Sample{sample{31, 31}, sample{35, 35}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}, sample{15, 15, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{20, 20, nil, nil}}, []tsdbutil.Sample{sample{25, 25, nil, nil}, sample{30, 30, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{18, 18, nil, nil}, sample{26, 26, nil, nil}}, []tsdbutil.Sample{sample{31, 31, nil, nil}, sample{35, 35, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{0, 0}, sample{2, 2}, sample{5, 5}, sample{10, 10}, sample{15, 15}, sample{18, 18}, sample{20, 20}, sample{25, 25}, sample{26, 26}, sample{30, 30}},
-				[]tsdbutil.Sample{sample{31, 31}, sample{35, 35}},
+				[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{2, 2, nil, nil}, sample{5, 5, nil, nil}, sample{10, 10, nil, nil}, sample{15, 15, nil, nil}, sample{18, 18, nil, nil}, sample{20, 20, nil, nil}, sample{25, 25, nil, nil}, sample{26, 26, nil, nil}, sample{30, 30, nil, nil}},
+				[]tsdbutil.Sample{sample{31, 31, nil, nil}, sample{35, 35, nil, nil}},
 			),
 		},
 		{
@@ -486,6 +503,32 @@ func TestCompactingChunkSeriesMerger(t *testing.T) {
 				tsdbutil.GenerateSamples(120, 30),
 			),
 		},
+		{
+			name: "histogram chunks overlapping",
+			input: []ChunkSeries{
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{histogramSample(0), histogramSample(5)}, []tsdbutil.Sample{histogramSample(10), histogramSample(15)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{histogramSample(2), histogramSample(20)}, []tsdbutil.Sample{histogramSample(25), histogramSample(30)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{histogramSample(18), histogramSample(26)}, []tsdbutil.Sample{histogramSample(31), histogramSample(35)}),
+			},
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
+				[]tsdbutil.Sample{histogramSample(0), histogramSample(2), histogramSample(5), histogramSample(10), histogramSample(15), histogramSample(18), histogramSample(20), histogramSample(25), histogramSample(26), histogramSample(30)},
+				[]tsdbutil.Sample{histogramSample(31), histogramSample(35)},
+			),
+		},
+		{
+			name: "histogram chunks overlapping with float chunks",
+			input: []ChunkSeries{
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{histogramSample(0), histogramSample(5)}, []tsdbutil.Sample{histogramSample(10), histogramSample(15)}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{12, 12, nil, nil}}, []tsdbutil.Sample{sample{14, 14, nil, nil}}),
+			},
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
+				[]tsdbutil.Sample{histogramSample(0)},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}},
+				[]tsdbutil.Sample{histogramSample(5), histogramSample(10)},
+				[]tsdbutil.Sample{sample{12, 12, nil, nil}, sample{14, 14, nil, nil}},
+				[]tsdbutil.Sample{histogramSample(15)},
+			),
+		},
 	} {
 		t.Run(tc.name, func(t *testing.T) {
 			merged := m(tc.input...)
@@ -517,9 +560,9 @@ func TestConcatenatingChunkSeriesMerger(t *testing.T) {
 		{
 			name: "single series",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}}),
 		},
 		{
 			name: "two empty series",
@@ -532,70 +575,70 @@ func TestConcatenatingChunkSeriesMerger(t *testing.T) {
 		{
 			name: "two non overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 			},
-			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{5, 5}}, []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 		},
 		{
 			name: "two overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{8, 8}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{8, 8, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}}, []tsdbutil.Sample{sample{3, 3}, sample{8, 8}},
-				[]tsdbutil.Sample{sample{7, 7}, sample{9, 9}}, []tsdbutil.Sample{sample{10, 10}},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}, []tsdbutil.Sample{sample{3, 3, nil, nil}, sample{8, 8, nil, nil}},
+				[]tsdbutil.Sample{sample{7, 7, nil, nil}, sample{9, 9, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}},
 			),
 		},
 		{
 			name: "two duplicated",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{5, 5}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}},
-				[]tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{5, 5}},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}},
+				[]tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}},
 			),
 		},
 		{
 			name: "three overlapping",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{6, 6}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0}, sample{4, 4}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{6, 6, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{4, 4, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}},
-				[]tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{6, 6}},
-				[]tsdbutil.Sample{sample{0, 0}, sample{4, 4}},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}},
+				[]tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{6, 6, nil, nil}},
+				[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{4, 4, nil, nil}},
 			),
 		},
 		{
 			name: "three in chained overlap",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{4, 4}, sample{6, 66}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{6, 6}, sample{10, 10}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{4, 4, nil, nil}, sample{6, 66, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{6, 6, nil, nil}, sample{10, 10, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{5, 5}},
-				[]tsdbutil.Sample{sample{4, 4}, sample{6, 66}},
-				[]tsdbutil.Sample{sample{6, 6}, sample{10, 10}},
+				[]tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{5, 5, nil, nil}},
+				[]tsdbutil.Sample{sample{4, 4, nil, nil}, sample{6, 66, nil, nil}},
+				[]tsdbutil.Sample{sample{6, 6, nil, nil}, sample{10, 10, nil, nil}},
 			),
 		},
 		{
 			name: "three in chained overlap complex",
 			input: []ChunkSeries{
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0}, sample{5, 5}}, []tsdbutil.Sample{sample{10, 10}, sample{15, 15}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2}, sample{20, 20}}, []tsdbutil.Sample{sample{25, 25}, sample{30, 30}}),
-				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{18, 18}, sample{26, 26}}, []tsdbutil.Sample{sample{31, 31}, sample{35, 35}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}, sample{15, 15, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{20, 20, nil, nil}}, []tsdbutil.Sample{sample{25, 25, nil, nil}, sample{30, 30, nil, nil}}),
+				NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"), []tsdbutil.Sample{sample{18, 18, nil, nil}, sample{26, 26, nil, nil}}, []tsdbutil.Sample{sample{31, 31, nil, nil}, sample{35, 35, nil, nil}}),
 			},
 			expected: NewListChunkSeriesFromSamples(labels.FromStrings("bar", "baz"),
-				[]tsdbutil.Sample{sample{0, 0}, sample{5, 5}}, []tsdbutil.Sample{sample{10, 10}, sample{15, 15}},
-				[]tsdbutil.Sample{sample{2, 2}, sample{20, 20}}, []tsdbutil.Sample{sample{25, 25}, sample{30, 30}},
-				[]tsdbutil.Sample{sample{18, 18}, sample{26, 26}}, []tsdbutil.Sample{sample{31, 31}, sample{35, 35}},
+				[]tsdbutil.Sample{sample{0, 0, nil, nil}, sample{5, 5, nil, nil}}, []tsdbutil.Sample{sample{10, 10, nil, nil}, sample{15, 15, nil, nil}},
+				[]tsdbutil.Sample{sample{2, 2, nil, nil}, sample{20, 20, nil, nil}}, []tsdbutil.Sample{sample{25, 25, nil, nil}, sample{30, 30, nil, nil}},
+				[]tsdbutil.Sample{sample{18, 18, nil, nil}, sample{26, 26, nil, nil}}, []tsdbutil.Sample{sample{31, 31, nil, nil}, sample{35, 35, nil, nil}},
 			),
 		},
 		{
|
@@ -732,38 +775,38 @@ func TestChainSampleIterator(t *testing.T) {
 	}{
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}),
 			},
-			expected: []tsdbutil.Sample{sample{0, 0}, sample{1, 1}},
+			expected: []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}},
 		},
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}),
-				NewListSeriesIterator(samples{sample{2, 2}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
 			},
-			expected: []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}},
+			expected: []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}},
 		},
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{3, 3, nil, nil}}),
-				NewListSeriesIterator(samples{sample{1, 1}, sample{4, 4}}),
+				NewListSeriesIterator(samples{sample{1, 1, nil, nil}, sample{4, 4, nil, nil}}),
-				NewListSeriesIterator(samples{sample{2, 2}, sample{5, 5}}),
+				NewListSeriesIterator(samples{sample{2, 2, nil, nil}, sample{5, 5, nil, nil}}),
 			},
 			expected: []tsdbutil.Sample{
-				sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}, sample{4, 4}, sample{5, 5},
+				sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}, sample{5, 5, nil, nil},
 			},
 		},
 		// Overlap.
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}),
-				NewListSeriesIterator(samples{sample{0, 0}, sample{2, 2}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{2, 2, nil, nil}}),
-				NewListSeriesIterator(samples{sample{2, 2}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
 				NewListSeriesIterator(samples{}),
 				NewListSeriesIterator(samples{}),
 				NewListSeriesIterator(samples{}),
 			},
-			expected: []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}},
+			expected: []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}},
 		},
 	} {
 		merged := NewChainSampleIterator(tc.input)
@@ -781,42 +824,42 @@ func TestChainSampleIteratorSeek(t *testing.T) {
 	}{
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			},
 			seek: 1,
-			expected: []tsdbutil.Sample{sample{1, 1}, sample{2, 2}},
+			expected: []tsdbutil.Sample{sample{1, 1, nil, nil}, sample{2, 2, nil, nil}},
 		},
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}}),
-				NewListSeriesIterator(samples{sample{2, 2}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
 			},
 			seek: 2,
-			expected: []tsdbutil.Sample{sample{2, 2}, sample{3, 3}},
+			expected: []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}},
 		},
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{3, 3, nil, nil}}),
-				NewListSeriesIterator(samples{sample{1, 1}, sample{4, 4}}),
+				NewListSeriesIterator(samples{sample{1, 1, nil, nil}, sample{4, 4, nil, nil}}),
-				NewListSeriesIterator(samples{sample{2, 2}, sample{5, 5}}),
+				NewListSeriesIterator(samples{sample{2, 2, nil, nil}, sample{5, 5, nil, nil}}),
 			},
 			seek: 2,
-			expected: []tsdbutil.Sample{sample{2, 2}, sample{3, 3}, sample{4, 4}, sample{5, 5}},
+			expected: []tsdbutil.Sample{sample{2, 2, nil, nil}, sample{3, 3, nil, nil}, sample{4, 4, nil, nil}, sample{5, 5, nil, nil}},
 		},
 		{
 			input: []chunkenc.Iterator{
-				NewListSeriesIterator(samples{sample{0, 0}, sample{2, 2}, sample{3, 3}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}}),
-				NewListSeriesIterator(samples{sample{0, 0}, sample{1, 1}, sample{2, 2}}),
+				NewListSeriesIterator(samples{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}}),
 			},
 			seek: 0,
-			expected: []tsdbutil.Sample{sample{0, 0}, sample{1, 1}, sample{2, 2}, sample{3, 3}},
+			expected: []tsdbutil.Sample{sample{0, 0, nil, nil}, sample{1, 1, nil, nil}, sample{2, 2, nil, nil}, sample{3, 3, nil, nil}},
 		},
 	} {
 		merged := NewChainSampleIterator(tc.input)
 		actual := []tsdbutil.Sample{}
-		if merged.Seek(tc.seek) {
+		if merged.Seek(tc.seek) == chunkenc.ValFloat {
 			t, v := merged.At()
-			actual = append(actual, sample{t, v})
+			actual = append(actual, sample{t, v, nil, nil})
 		}
 		s, err := ExpandSamples(merged, nil)
 		require.NoError(t, err)
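Beyond the mechanical `sample{t, v}` → `sample{t, v, nil, nil}` updates (two new fields for native-histogram samples), the substantive API change in this diff is that iterator `Seek` no longer returns a bool: it returns a value type, so callers compare against `chunkenc.ValFloat` instead of testing truthiness. The toy sketch below illustrates why: with histograms, "there is a sample here" is no longer a yes/no answer — the caller must learn *which kind* of sample the cursor points at. All names here (`ValueType`, `listIterator`, the constants) are illustrative stand-ins, not the real `chunkenc` package.

```go
package main

import "fmt"

// ValueType mimics the role of chunkenc.ValueType: Seek/Next report WHICH
// sample type sits at the cursor, not merely whether one exists.
type ValueType int

const (
	ValNone      ValueType = iota // no sample at or after the seek target
	ValFloat                      // a plain float sample
	ValHistogram                  // a native-histogram sample (unused in this toy)
)

type sample struct {
	t int64
	v float64
}

// listIterator is a toy stand-in for a list-backed series iterator.
type listIterator struct {
	samples []sample
	idx     int
}

// Seek advances to the first sample with timestamp >= t and reports its type.
func (it *listIterator) Seek(t int64) ValueType {
	for ; it.idx < len(it.samples); it.idx++ {
		if it.samples[it.idx].t >= t {
			return ValFloat // this toy iterator only holds float samples
		}
	}
	return ValNone
}

// At returns the timestamp and float value under the cursor.
func (it *listIterator) At() (int64, float64) {
	s := it.samples[it.idx]
	return s.t, s.v
}

func main() {
	it := &listIterator{samples: []sample{{0, 0}, {1, 1}, {2, 2}}}
	// Pre-change style was `if it.Seek(1) { ... }`; with typed results the
	// caller compares against the expected value type, as the updated test
	// does with `merged.Seek(tc.seek) == chunkenc.ValFloat`.
	if it.Seek(1) == ValFloat {
		t, v := it.At()
		fmt.Printf("t=%d v=%g\n", t, v) // prints "t=1 v=1"
	}
}
```

A bool cannot distinguish "float here" from "histogram here", so switching the return type lets existing float-only call sites stay a single comparison while histogram-aware callers branch on the same result.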