Although we had a different slice, the underlying memory was the same,
so any changes made through one slice meant we could skip some values.
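For context, a minimal, self-contained illustration of the Go behaviour
involved (re-slicing shares the backing array); this is not code from the
change itself:

package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4}
	b := a[:2]        // b re-slices a, so both share the same backing array
	b[0] = 99         // a write through b...
	fmt.Println(a[0]) // ...is visible through a: prints 99
}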
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
During remote write, we call url.String() twice:
- to add the Endpoint() to the span
- to actually know where we should send the request
This value does not change over time, and it's not really that
lightweight to calculate. I wrote this simple benchmark:
// File placement and package name are illustrative; this is a _test.go benchmark.
package remote_test

import (
	"net/url"
	"testing"

	"github.com/stretchr/testify/require"
)

func BenchmarkURLString(b *testing.B) {
	u, err := url.Parse("https://remote.write.com/api/v1")
	require.NoError(b, err)
	b.Run("string", func(b *testing.B) {
		count := 0
		for i := 0; i < b.N; i++ {
			count += len(u.String())
		}
	})
}
And the results are ~200ns/op, 80B/op, 3 allocs/op.
Yes, we're going to hit the network right after this, which costs far more
than these allocations, but still: on agents that send 500 requests per
second, that is 1500 wasteful allocations per second.
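A minimal sketch of the idea behind the change: compute the string form of
the URL once and reuse it. The Client type and field names below are
illustrative, not the actual storage/remote code:

package remote

import "net/url"

// Client keeps the parsed endpoint URL together with its pre-rendered string form.
type Client struct {
	url       *url.URL
	urlString string // computed once; reused for the span attribute and the request
}

func NewClient(u *url.URL) *Client {
	return &Client{url: u, urlString: u.String()}
}

// Endpoint returns the cached string, avoiding a url.String() call per request.
func (c *Client) Endpoint() string {
	return c.urlString
}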
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Without this fix, if snapshots were enabled, and wbl goes missing
between restarts, then TSDB does not recognize that there are ooo
mmap chunks on disk and we cannot query them until those chunks
are compacted into blocks.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
If the snapshot was enabled with some ooo mmap chunks on disk,
and wbl was removed between restarts, then we should still be able
to query the ooo mmap chunks after a restart. This test shows that
we are not able to query those ooo mmap chunks after a restart
in this situation.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
M-map chunks replayed on startup are discarded if there
was no WAL and no snapshot loaded, because there are no
series created in the Head that they can map to. So we now
only load m-map chunks from disk if either a snapshot was
loaded or there is a WAL on disk.
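Roughly, the startup logic now looks like this; the names here are
illustrative rather than the exact TSDB code:

	// During Head replay: m-mapped chunks can only be attached to series that
	// already exist, and series are only (re)created by snapshot loading or WAL replay.
	if snapshotLoaded || walExists {
		mmappedChunks, oooMmappedChunks = loadMmappedChunks(dir)
	}
	// Otherwise, skip loading them: there would be no series to map them to.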
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Scraping targets are synced by creating the full set, then adding/removing any which have changed.
This PR speeds up the process of creating the full set.
I added a benchmark for `TargetsFromGroup`; it uses configuration from a typical Kubernetes SD.
The crux of the change is to do relabeling inside labels.Builder instead of converting to labels.Labels and back again for every rule. The change is broken into several commits for easier review.
This is a breaking change to `scrape.PopulateLabels()`, but `relabel.Process` is left as-is, with a new `relabel.ProcessBuilder` option.
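A sketch of the shape of the change, assuming the new `relabel.ProcessBuilder`
entry point takes a `*labels.Builder` plus the rules and reports whether to
keep the target; exact signatures may differ slightly:

	// Before: convert to labels.Labels and back again for every relabel pass:
	//   lset, keep := relabel.Process(lb.Labels(), cfgs...)
	//   if keep {
	//       lb = labels.NewBuilder(lset)
	//   }
	// After: run the rules directly against the labels.Builder:
	keep := relabel.ProcessBuilder(lb, cfgs...)
	if !keep {
		// drop this target
	}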
In storage/remote, try converting to RecoverableError using errors.As,
instead of via a direct type assertion.
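A small, self-contained illustration of the pattern; the RecoverableError
stand-in below mimics the storage/remote type rather than reproducing it
exactly:

package main

import (
	"errors"
	"fmt"
)

// RecoverableError stands in for the storage/remote type of the same name.
type RecoverableError struct{ error }

func handle(err error) {
	// A direct type assertion, err.(RecoverableError), fails once the error is
	// wrapped (e.g. with fmt.Errorf("...: %w", err)); errors.As unwraps the
	// chain before matching.
	var rerr RecoverableError
	if errors.As(err, &rerr) {
		fmt.Println("recoverable, will retry:", rerr)
	}
}

func main() {
	wrapped := fmt.Errorf("sending batch: %w", RecoverableError{errors.New("429 too many requests")})
	handle(wrapped) // prints "recoverable, will retry: ..."
}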
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
TestConcurrentRangeQueries runs many queries, up to 4 at the same time,
to try to expose any race conditions.
This change stops four of them from running with a thousand or more steps:
`holt_winters(a_X[1d], 0.3, 0.3)`
`changes(a_X[1d])`
`rate(a_X[1d])`
`absent_over_time(a_X[1d])`
Particularly when the test runs with `-race` in CI, this reduces the
time and resources required.
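Roughly the shape of the skip condition, as an illustration only; the field
names and exact predicate in the test differ from this sketch:

	// Inside the loop over query cases in TestConcurrentRangeQueries:
	if strings.Contains(c.expr, "[1d]") && c.steps >= 1000 {
		continue // skip the heaviest 1d-range expressions at high step counts
	}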
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Direct imports:
go.opentelemetry.io/otel v1.14.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.14.0
go.opentelemetry.io/otel/sdk v1.14.0
go.opentelemetry.io/otel/trace v1.14.0
These seem to correspond:
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.39.0
Other direct dependencies that needed updating because of Otel:
github.com/stretchr/testify v1.8.2
google.golang.org/grpc v1.53.0
Indirect dependencies that needed updating because of Otel:
cloud.google.com/go/compute v1.15.1
github.com/cncf/xds/go v0.0.0-20230105202645-06c439db220b
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0
go.opentelemetry.io/otel/metric v0.36.0
Also the import of go.opentelemetry.io/otel/semconv had to be updated
to v1.17.0 to match 60f7d42d1e/sdk/resource/process.go (L25)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>