During remote write, we call url.String() twice:
- to add the Endpoint() to the span
- to actually know where we should send the request
This value does not change over time, and it's not really that
lightweight to calculate. I wrote this simple benchmark:
package remote

import (
	"net/url"
	"testing"

	"github.com/stretchr/testify/require"
)

func BenchmarkURLString(b *testing.B) {
	u, err := url.Parse("https://remote.write.com/api/v1")
	require.NoError(b, err)
	b.Run("string", func(b *testing.B) {
		count := 0
		for i := 0; i < b.N; i++ {
			count += len(u.String())
		}
	})
}
And the results are ~200ns/op, 80B/op, 3 allocs/op.
Yes, we are about to hit the network here, which costs far more than
this, but on agents that send 500 requests per second that is still
1500 wasteful allocations per second.
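A minimal sketch of the fix, caching the rendered endpoint string once
(type and constructor names here are illustrative, not necessarily the
ones used in storage/remote):

package sketch

import "net/url"

// client caches the rendered endpoint string so url.String() is computed
// once instead of on every remote write request.
type client struct {
	url       *url.URL
	urlString string
}

func newClient(u *url.URL) *client {
	return &client{
		url:       u,
		urlString: u.String(), // cached here, reused for spans and requests
	}
}

// Endpoint returns the cached endpoint string, e.g. for span attributes.
func (c *client) Endpoint() string { return c.urlString }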
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
In storage/remote, try converting to RecoverableError using errors.As,
instead of through direct casting.
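A small, self-contained illustration of the difference (RecoverableError
is simplified here; the real one also carries a retry-after duration):

package main

import (
	"errors"
	"fmt"
)

// RecoverableError is a simplified stand-in for the storage/remote type.
type RecoverableError struct{ error }

func main() {
	err := fmt.Errorf("sending batch: %w", RecoverableError{errors.New("server returned 5xx")})

	// Direct cast: does not see the RecoverableError once it is wrapped.
	if _, ok := err.(RecoverableError); !ok {
		fmt.Println("type assertion misses the wrapped RecoverableError")
	}

	// errors.As walks the wrap chain and finds it.
	var rerr RecoverableError
	if errors.As(err, &rerr) {
		fmt.Println("errors.As found:", rerr)
	}
}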
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
As far as I understand it, we'd never expect to receive a nil span, and
remote.spansProtoToSpans would panic if we did. Marking the fields as
non-nullable also means the generated Go code doesn't use pointers for
these fields, reducing allocations.
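A rough sketch of the effect on the generated code (illustrative struct
shapes, not the actual prompb output):

package sketch

type BucketSpan struct {
	Offset int32
	Length uint32
}

// With the default nullable behaviour, every span sits behind a pointer,
// i.e. one extra heap allocation per span when decoding:
type HistogramNullable struct {
	PositiveSpans []*BucketSpan
	NegativeSpans []*BucketSpan
}

// With (gogoproto.nullable) = false, spans are laid out by value in the
// slice's backing array, so decoding allocates once per slice instead of
// once per span:
type HistogramNonNullable struct {
	PositiveSpans []BucketSpan
	NegativeSpans []BucketSpan
}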
Signed-off-by: Charles Korn <charles.korn@grafana.com>
This is a bit more conservative than we could be. As long as a chunk
isn't the first in a block, we can be pretty sure that the previous
chunk won't disappear. However, the incremental gain of returning
NotCounterReset in these cases is probably very small and might not be
worth the code complications.
With this, we now also pay attention to an explicitly set counter reset
during ingestion. While the case doesn't show up in practice yet, there
could be scenarios where the metric source knows there was a counter
reset even if it might not be visible from the values in the histogram.
It is also useful for testing.
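A rough sketch of the resulting decision, using the FloatHistogram API
(illustrative; the real logic is spread across the head appender and the
chunk encoders):

package sketch

import "github.com/prometheus/prometheus/model/histogram"

// hintForAppend decides the counter reset hint for an incoming histogram:
// an explicit hint from the metric source wins, a new chunk is treated
// conservatively, and otherwise the reset is detected from the values.
func hintForAppend(prev, cur *histogram.FloatHistogram, newChunk bool) histogram.CounterResetHint {
	switch cur.CounterResetHint {
	case histogram.CounterReset, histogram.NotCounterReset:
		return cur.CounterResetHint // explicitly set by the metric source
	}
	if prev == nil || newChunk {
		// Conservative: the previous chunk might not be around anymore.
		return histogram.UnknownCounterReset
	}
	if cur.DetectReset(prev) {
		return histogram.CounterReset
	}
	return histogram.NotCounterReset
}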
Signed-off-by: beorn7 <beorn@grafana.com>
This is an optimization on the existing append in OOOChunk.
So far we have searched for the position inside the out-of-order slice
where the new sample should go, placed it there, and shifted any later
samples to the right if necessary. This is OK but requires a binary
search every time the slice is non-empty.
The optimization is opinionated: although out-of-order samples can be
out of order amongst themselves, they will most likely arrive in order,
so we optimistically append at the end and only fall back to the binary
search when that fails.
OOOChunks are capped to 30 samples by default, so this is a small
optimization, but everything adds up, especially if you handle many
active timeseries with out-of-order samples.
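A simplified sketch of the append path after the change (the real
OOOChunk also enforces its size cap):

package sketch

import "sort"

type sample struct {
	t int64
	v float64
}

// OOOChunk (simplified) buffers out-of-order samples in timestamp order.
type OOOChunk struct {
	samples []sample
}

// Insert appends optimistically and only falls back to a binary search
// when the new sample is not newer than the last one already buffered.
func (o *OOOChunk) Insert(t int64, v float64) bool {
	if len(o.samples) == 0 || t > o.samples[len(o.samples)-1].t {
		// Fast path: in practice most out-of-order samples still arrive
		// in increasing timestamp order relative to each other.
		o.samples = append(o.samples, sample{t: t, v: v})
		return true
	}
	// Slow path: find the insertion point.
	i := sort.Search(len(o.samples), func(i int) bool { return o.samples[i].t >= t })
	if o.samples[i].t == t {
		return false // duplicate timestamp, reject
	}
	o.samples = append(o.samples, sample{})
	copy(o.samples[i+1:], o.samples[i:])
	o.samples[i] = sample{t: t, v: v}
	return true
}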
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
* adapt code.go and write_handler.go to support float histograms
* adapt watcher.go to support float histograms
* wip adapt queue_manager.go to support float histograms
* address comments for metrics in queue_manager.go
* set test cases for queue manager
* use same counts for histograms and float histograms
* refactor createHistograms tests
* fix float histograms ref in watcher_test.go
* address PR comments
Signed-off-by: Marc Tuduri <marctc@protonmail.com>
In most cases, there is no sample at `maxt`, so `PeekBack` has to be
used. So far, `PeekBack` did not return a float histogram, and we even
disregarded any returned normal histogram. This fixes both, and
also tweaks the unit test to discover the problem (by using an earlier
timestamp than "now" for the samples in the TSDB).
Signed-off-by: beorn7 <beorn@grafana.com>
Extends the Appender.AppendHistogram function to accept a FloatHistogram. The TSDB supports appending, querying, and WAL replay for this new type of histogram.
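A sketch of a caller using the extended interface (exactly one of the
integer and float histogram arguments is non-nil per call):

package sketch

import (
	"context"

	"github.com/prometheus/prometheus/model/histogram"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

// appendFloatHistogram appends a float histogram sample via the extended
// AppendHistogram method: nil for the integer histogram, fh for the float one.
func appendFloatHistogram(ctx context.Context, db storage.Appendable, ts int64, fh *histogram.FloatHistogram) error {
	app := db.Appender(ctx)
	_, err := app.AppendHistogram(0, labels.FromStrings("__name__", "request_duration_seconds"), ts, nil, fh)
	if err != nil {
		_ = app.Rollback()
		return err
	}
	return app.Commit()
}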
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
`QueueManager.externalLabels` becomes a slice rather than a `Labels` so
we can index into it when doing the merge operation.
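A sketch of the merge this enables (assuming both inputs are sorted by
name; series labels win on conflict):

package sketch

import "github.com/prometheus/prometheus/model/labels"

// mergeExternalLabels merges sorted series labels with sorted external
// labels, preferring the series' own value when a name exists in both.
func mergeExternalLabels(series, external []labels.Label) []labels.Label {
	result := make([]labels.Label, 0, len(series)+len(external))
	i, j := 0, 0
	for i < len(series) && j < len(external) {
		switch {
		case series[i].Name < external[j].Name:
			result = append(result, series[i])
			i++
		case series[i].Name > external[j].Name:
			result = append(result, external[j])
			j++
		default:
			result = append(result, series[i]) // series label takes precedence
			i++
			j++
		}
	}
	result = append(result, series[i:]...)
	return append(result, external[j:]...)
}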
Note we avoid calling `Labels.Len()` in `labelProtosToLabels()`.
It isn't necessary - `append()` will enlarge the buffer and we're
expecting to re-use it many times.
Also, we now validate protobuf input before converting to Labels.
This way we can detect errors first, and we don't place unnecessary
requirements on the Labels structure.
Re-do seriesFilter using labels.Builder (albeit N^2).
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This is required to preserve the interface property of SeriesSet that says
"At returns full series. Returned series should be iterable even after Next is called."
Signed-off-by: sniper91 <kevinzhao91@outlook.com>
Re-use previous memory if it is already of the correct type.
In `NewListSeries` we hoist the conversion to an interface value out
so it only allocates once.
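A self-contained sketch of the re-use pattern (stand-in types; the real
code uses chunkenc.Iterator and the series types in storage/series.go):

package sketch

// Iterator stands in for chunkenc.Iterator in this sketch.
type Iterator interface {
	Next() bool
}

type listSeriesIterator struct {
	samples []float64
	idx     int
}

func (it *listSeriesIterator) Next() bool {
	it.idx++
	return it.idx < len(it.samples)
}

// Reset points an existing iterator at new data without allocating.
func (it *listSeriesIterator) Reset(samples []float64) {
	it.samples = samples
	it.idx = -1
}

type listSeries struct {
	samples []float64
}

// Iterator re-uses the iterator passed in when it is already of our
// concrete type, so a tight query loop performs no per-series allocation.
func (s *listSeries) Iterator(it Iterator) Iterator {
	if lsi, ok := it.(*listSeriesIterator); ok {
		lsi.Reset(s.samples)
		return lsi
	}
	return &listSeriesIterator{samples: s.samples, idx: -1}
}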
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Patterned after `Chunk.Iterator()`: pass the old iterator in so it
can be re-used to avoid allocating a new object.
(This commit does not do any re-use; it is just changing all the method
signatures so re-use is possible in later commits.)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The SeriesSets to be merged must be created each time round the loop,
otherwise the benchmark is not doing any real work.
Don't call ExpandSeries, because it spends most of its time allocating
a memory buffer to hold the result, which we don't look at.
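A sketch of the corrected benchmark shape (makeSeriesSets is a
hypothetical helper standing in for the test's set construction):

package storage_test

import (
	"testing"

	"github.com/prometheus/prometheus/storage"
)

func BenchmarkMergeSeriesSet(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		// Rebuild the inputs every iteration: merging consumes the sets,
		// so re-using exhausted ones would leave the loop with no real work.
		sets := makeSeriesSets() // hypothetical helper returning []storage.SeriesSet
		merged := storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
		// Consume directly instead of ExpandSeries, which would mostly just
		// allocate a result buffer we never look at.
		for merged.Next() {
			merged.At()
		}
		if err := merged.Err(); err != nil {
			b.Fatal(err)
		}
	}
}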
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Fix up merge test again
errors.Unwrap() actually, and dangerously, returns nil if the error does
not have an Unwrap() method. That is the case in at least one of these
places, where I noticed that no error was being logged at all when it
should have been.
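A self-contained illustration of the pitfall and the safer pattern:

package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("mkdir /data: permission denied")

	// errors.Unwrap returns nil when err has no Unwrap method, so logging
	// its result can silently drop the error entirely.
	fmt.Println("unwrapped:", errors.Unwrap(err)) // unwrapped: <nil>

	// Safer: only use the unwrapped error when there actually is one.
	cause := errors.Unwrap(err)
	if cause == nil {
		cause = err
	}
	fmt.Println("logged:", cause) // logged: mkdir /data: permission denied
}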
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* remote/read_handler: pool input to Marshal()
Use a sync.Pool to reuse byte slices between calls to Marshal() in the
remote read handler.
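A minimal sketch of the pooling pattern, assuming the gogo-generated
Size/MarshalTo methods on prompb.ReadResponse (the actual handler's
plumbing differs):

package sketch

import (
	"sync"

	"github.com/prometheus/prometheus/prompb"
)

// marshalPool reuses byte slices across calls so each remote read response
// does not allocate a fresh buffer.
var marshalPool = sync.Pool{
	New: func() interface{} { return &[]byte{} },
}

func marshalReadResponse(resp *prompb.ReadResponse) (data []byte, release func(), err error) {
	bufp := marshalPool.Get().(*[]byte)
	size := resp.Size()
	if cap(*bufp) < size {
		*bufp = make([]byte, size)
	}
	data = (*bufp)[:size]
	if _, err = resp.MarshalTo(data); err != nil {
		marshalPool.Put(bufp)
		return nil, nil, err
	}
	// release returns the buffer to the pool once the response has been written.
	release = func() { marshalPool.Put(bufp) }
	return data, release, nil
}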
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
* remote: add microbenchmark for remote read handler
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
The wlog.WL type can now be used to create a Write Ahead Log or a Write
Behind Log.
Previously, the prefix for WBL metrics was
'prometheus_tsdb_out_of_order_wal_'; it has been replaced with
'prometheus_tsdb_out_of_order_wbl_'.
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
Signed-off-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>