We still need a guide that we can link users to in https://github.com/prometheus/docs/tree/main/content/docs/guides
This guide should show sending metrics from an application directly via
the OTel SDKs and also sending them through the Collector.
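For the direct-from-SDK half, the guide could include something like this
minimal Go sketch; the OTLP ingestion path, the port, and the fact that the
endpoint must be enabled in Prometheus are assumptions to double-check
against the final docs:

```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// Point the OTLP/HTTP exporter directly at Prometheus instead of a Collector.
	exp, err := otlpmetrichttp.New(ctx,
		otlpmetrichttp.WithEndpoint("localhost:9090"),
		otlpmetrichttp.WithURLPath("/api/v1/otlp/v1/metrics"), // assumed ingestion path
		otlpmetrichttp.WithInsecure(),
	)
	if err != nil {
		panic(err)
	}

	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp,
			sdkmetric.WithInterval(15*time.Second))),
	)
	defer provider.Shutdown(ctx)

	// Record something so there is data to export.
	counter, _ := provider.Meter("example").Int64Counter("demo_requests_total")
	counter.Add(ctx, 1)

	time.Sleep(20 * time.Second) // give the periodic reader a chance to push once
}
```

Sending through the Collector would use the same SDK code pointed at a
Collector instance that forwards to Prometheus (for example via its remote
write or OTLP exporter).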
Signed-off-by: Goutham <gouthamve@gmail.com>
Currently memSeries holds a single head chunk in memory and a slice of mmapped chunks.
When append() is called on memSeries, it might decide that a new headChunk is needed for the given append() call.
If that happens, it will first mmap the existing head chunk, and only after that will it create a new empty headChunk and continue appending
the sample to it.
Since appending samples takes a write lock on memSeries, no other read or write can happen until the append completes.
When an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself takes a lock, since it needs to be serialised, which means that the more chunks there are to mmap, the longer each chunk
might wait to be mmapped.
If there are enough chunks that require mmapping, some memSeries will be locked long enough to start affecting
queries and scrapes.
Queries might time out, since by default they have a two-minute timeout.
Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
and calls using rate() and the like, since the time range requested in the query might have too few samples to calculate anything.
To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change turns memSeries.headChunk into a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, not-yet-mmapped chunks are linked behind it.
Mmapping is done on a schedule by iterating over all memSeries one by one. Thanks to this we control when mmapping happens, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.
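A minimal sketch of the resulting layout, using simplified, illustrative
names rather than the exact Prometheus types:

```go
package tsdb

import "github.com/prometheus/prometheus/tsdb/chunkenc"

// memChunk is one element of the head chunk list. The newest chunk is the
// list head; prev points at an older chunk that is still waiting to be mmapped.
type memChunk struct {
	chunk            chunkenc.Chunk
	minTime, maxTime int64
	prev             *memChunk
}

// memSeries (heavily simplified) keeps a linked list of head chunks instead
// of a single head chunk.
type memSeries struct {
	headChunks *memChunk // 'open' chunk receiving appends; prev links hold not-yet-mmapped chunks
}

// cutNewHeadChunk no longer mmaps the old head chunk on the append path; it
// just links it behind the new head so a scheduled pass can mmap it later.
func (s *memSeries) cutNewHeadChunk(mint int64) *memChunk {
	newHead := &memChunk{
		chunk:   chunkenc.NewXORChunk(),
		minTime: mint,
		maxTime: mint,
		prev:    s.headChunks, // keep the old head around until it is mmapped
	}
	s.headChunks = newHead
	return newHead
}
```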
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Add OTLP Ingestion endpoint
We copy files from the otel-collector-contrib repository. See the README in
`storage/remote/otlptranslator/README.md`.
This supersedes: https://github.com/prometheus/prometheus/pull/11965
Signed-off-by: gouthamve <gouthamve@gmail.com>
* Return a 200 OK
It is what the OTel Go SDK expects :( (see the handler sketch below).
https://github.com/open-telemetry/opentelemetry-go/issues/4363
Signed-off-by: Goutham <gouthamve@gmail.com>
---------
Signed-off-by: gouthamve <gouthamve@gmail.com>
Signed-off-by: Goutham <gouthamve@gmail.com>
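A minimal sketch of the status code requirement above (a stand-in handler,
not the actual Prometheus endpoint; the URL path is an assumption):

```go
package main

import (
	"io"
	"net/http"
)

// otlpMetricsHandler accepts an OTLP/HTTP metrics export request. The real
// handler decodes the protobuf payload and converts it into Prometheus
// samples; here we only care about the response code.
func otlpMetricsHandler(w http.ResponseWriter, r *http.Request) {
	if _, err := io.ReadAll(r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Answer with a plain 200 OK and a protobuf content type; it is what the
	// OTel Go SDK expects (see the issue linked above).
	w.Header().Set("Content-Type", "application/x-protobuf")
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/v1/otlp/v1/metrics", otlpMetricsHandler) // assumed path
	http.ListenAndServe(":9090", nil)
}
```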
If a new series is introduced in a storage.Appender instance, that
series should be written to the WAL once the storage.Appender is closed,
even on Rollback.
Previously, new series would only be written to the WAL when calling
Commit. However, because the series is stored in memory regardless,
subsequent calls to Commit may write samples to the WAL that reference
a series ID that was never written.
Related to #11589. This fix likely resolves that issue as well, but we need
more testing from users to see whether the problem persists after this fix;
there may be more cases where samples get written to the WAL in Prometheus
Agent mode without the corresponding series record.
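A hedged sketch of the intended behaviour, with illustrative types rather
than the actual agent appender:

```go
package agent

// Illustrative stand-ins; the real appender tracks record.RefSeries and
// record.RefSample entries and writes them through the WAL writer.
type seriesRecord struct{ /* series ref + labels */ }
type sampleRecord struct{ /* series ref + timestamp + value */ }

type walWriter interface {
	LogSeries([]seriesRecord) error
}

type appender struct {
	wal            walWriter
	pendingSeries  []seriesRecord
	pendingSamples []sampleRecord
}

// Rollback drops the pending samples but still writes the pending series
// records: those series already live in memory, so a later Commit may emit
// sample records that reference their IDs.
func (a *appender) Rollback() error {
	if err := a.wal.LogSeries(a.pendingSeries); err != nil {
		return err
	}
	a.pendingSeries = a.pendingSeries[:0]
	a.pendingSamples = a.pendingSamples[:0]
	return nil
}
```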
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
Native histograms without observations and with a zero threshold of
zero look the same as classic histograms in the protobuf exposition
format. According to
https://github.com/prometheus/client_golang/issues/1127, the idea is
to add a no-op span to those histograms to mark them as native
histograms. This commit enables Prometheus to detect that no-op span
and adds a doc comment to the proto spec describing the behavior.
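As an illustration (the field names come from the io.prometheus.client
protobuf via client_model, but the construction itself is only a sketch), a
never-observed native histogram carries a single empty span that a parser
can key on:

```go
package main

import (
	"fmt"

	dto "github.com/prometheus/client_model/go"
	"google.golang.org/protobuf/proto"
)

func main() {
	// A native histogram with no observations and a zero threshold of zero:
	// without the marker span it would be indistinguishable from a classic
	// histogram that has no buckets.
	h := &dto.Histogram{
		SampleCount:   proto.Uint64(0),
		SampleSum:     proto.Float64(0),
		Schema:        proto.Int32(3),
		ZeroThreshold: proto.Float64(0),
		ZeroCount:     proto.Uint64(0),
		// The no-op span: offset 0, length 0. It contributes no buckets but
		// marks the histogram as native.
		PositiveSpan: []*dto.BucketSpan{
			{Offset: proto.Int32(0), Length: proto.Uint32(0)},
		},
	}

	// A parser can treat the mere presence of a span as "this is native".
	fmt.Println(len(h.GetPositiveSpan()) > 0 || len(h.GetNegativeSpan()) > 0) // true
}
```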
Signed-off-by: beorn7 <beorn@grafana.com>
The operator changes the meaning of the metric, so the metric name should
be dropped. Technically this would be a breaking change, but it is also very
obviously a bug, and it is unlikely that anyone depends on the old behaviour.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
The bounds weren't really used so far, so there was no actual bug in the
code. But it is obviously confusing if the bounds returned by a
floatBucketIterator with a target schema different from the original
schema are wrong.
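For illustration, a standalone sketch of the bound arithmetic involved (not
the iterator code itself): when a bucket is merged down to a coarser target
schema, its index, and therefore its upper bound, must be recomputed for the
target schema rather than reported from the original one:

```go
package main

import (
	"fmt"
	"math"
)

// upperBound returns the upper bound of bucket i at the given standard
// schema: 2^(i * 2^-schema).
func upperBound(i, schema int32) float64 {
	return math.Exp2(float64(i) * math.Exp2(-float64(schema)))
}

// targetIdx maps a bucket index from some schema down to schema-delta
// (delta >= 0), i.e. ceil(i / 2^delta).
func targetIdx(i int32, delta uint) int32 {
	return ((i - 1) >> delta) + 1
}

func main() {
	// Bucket 3 at schema 1 has upper bound 2^(3/2) ≈ 2.83. Merged down to
	// schema 0 it becomes bucket 2, whose upper bound is 2^2 = 4; reporting
	// 2.83 for the merged bucket would be wrong.
	fmt.Println(upperBound(3, 1))               // ≈ 2.828
	fmt.Println(upperBound(targetIdx(3, 1), 0)) // 4
}
```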
Signed-off-by: beorn7 <beorn@grafana.com>
If a float histogram has a zero threshold of zero _and_ an empty zero
bucket, it wasn't identified as a native histogram, because the
`isNativeHistogram` helper function only looked at integer buckets.
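A hedged sketch of the broadened check (illustrative, not the exact helper
in the parser), where float buckets now count as evidence of a native
histogram too:

```go
package textparse

import dto "github.com/prometheus/client_model/go"

// isNativeHistogram (sketch): treat a histogram as native if any
// native-histogram-specific field is populated. The earlier version only
// looked at integer buckets, so a float histogram whose buckets were all
// float buckets, with a zero threshold of zero and an empty zero bucket,
// slipped through.
func isNativeHistogram(h *dto.Histogram) bool {
	return len(h.GetPositiveDelta()) > 0 || len(h.GetNegativeDelta()) > 0 || // integer buckets
		len(h.GetPositiveCount()) > 0 || len(h.GetNegativeCount()) > 0 || // float buckets (previously ignored)
		len(h.GetPositiveSpan()) > 0 || len(h.GetNegativeSpan()) > 0 || // spans, incl. the no-op marker
		h.GetZeroThreshold() > 0 || h.GetZeroCount() > 0 || h.GetZeroCountFloat() > 0
}
```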
Signed-off-by: beorn7 <beorn@grafana.com>
Native histograms with a zero threshold of zero aren't federated properly.
This adds a test to prove the specific failure mode, which is that
histograms with a zero threshold of zero are federated as classic
histograms.
The underlying reason is that the protobuf parser identifies a native
histogram by detecting a zero bucket or by detecting integer buckets.
Therefore, a float histogram with a zero threshold of zero and an
unpopulated zero bucket falls through the cracks (no integer buckets,
no zero bucket).
This commit also adds a test case for the latter.
Signed-off-by: beorn7 <beorn@grafana.com>
Version 2 introduced a breaking change: the `id` field of all resources was
changed from `int` to `int64` to make sure that all future numerical IDs are
supported on all architectures.
You can learn more about this
[here](https://docs.hetzner.cloud/#deprecation-notices-%E2%9A%A0%EF%B8%8F).
Signed-off-by: Julian Tölle <julian.toelle@hetzner-cloud.de>
InstanceSpec struct members are plain `int`s, which are only 32 bits wide on
32-bit architectures, so they can overflow when bit-shifted left.
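A small sketch of the failure mode and the fix (the struct here is
illustrative, not the SDK's actual InstanceSpec):

```go
package main

import "fmt"

// instanceSpec stands in for an SDK struct whose numeric fields are plain
// ints (megabytes, in this example).
type instanceSpec struct {
	Memory int
}

func main() {
	spec := instanceSpec{Memory: 16384} // 16 GiB worth of megabytes

	// On a 32-bit architecture int is 32 bits, so this shift wraps around.
	overflowing := spec.Memory << 20

	// Widening to int64 before the shift keeps the full value everywhere.
	safe := int64(spec.Memory) << 20

	fmt.Println(overflowing, safe)
}
```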
Signed-off-by: Daniel Swarbrick <daniel.swarbrick@gmail.com>