Split out the note about regex-matching into a separate paragraph to
make it more obvious. Move it up, closer to the definition.
Signed-off-by: SuperQ <superq@gmail.com>
This follows the line of argument that the invariant of not looking
ahead of the query time was merely emergent behavior and not a
documented stable feature. Any query that looks ahead of the query
time was simply invalid before the introduction of the negative offset
and the @ modifier.
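For illustration (hypothetical metric name), both of the following
only became expressible with those features, because they read
samples from after the query's evaluation time:

```
my_metric @ 1700000000          # evaluate at a fixed timestamp, possibly ahead
rate(my_metric[5m] offset -1w)  # negative offset looks one week ahead
```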
Signed-off-by: beorn7 <beorn@grafana.com>
Following the argument that breaking the invariant that PromQL does
not look ahead of the evaluation time constitutes a breaking change,
we still need to keep the feature flag around, but at least we can
communicate that the feature is considered stable and that the
feature flag will be ignored from v3 on.
Signed-off-by: beorn7 <beorn@grafana.com>
* rework the target page
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* put back the URL of the endpoint
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* replace old code with the new one and change function style
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* align filter and search bar on the same row
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* remove unnecessary return
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* upgrade kvsearch to v0.3.0
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix unit test
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* add missing style on column
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* add placeholder and autofocus
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* put back the previous table design
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix issue related to the position of the tooltip
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix health filter
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix test on label tooltip
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* simplify filter condition
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* rework service discovery page
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* introduce generic custom infinite scroll component
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* adjust the placeholder in discovery page
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* ignore missing return type
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* apply fix required by the review
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* index discoveredLabels
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* Write chunks via queue, predicting the refs
Our load tests have shown that there is a latency spike in the
remote write handler whenever the head chunks need to be written,
because chunkDiskMapper.WriteChunk() blocks until the chunks are written
to disk.
This adds a queue to the chunk disk mapper, which makes the
WriteChunk() method non-blocking unless the queue is full. Reads can
still be served from the queue.
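A minimal sketch of the pattern, with illustrative names (not the
actual chunkDiskMapper code):

```go
package chunks

import "sync"

// writeJob is an illustrative pending chunk write, keyed by the
// predicted chunk reference.
type writeJob struct {
	ref  uint64
	data []byte
}

// chunkWriteQueue sketches the pattern: WriteChunk turns into a
// queue insert that only blocks when the queue is full, and reads
// of not-yet-flushed chunks are served from the queue itself.
type chunkWriteQueue struct {
	mtx   sync.Mutex
	byRef map[uint64][]byte // queued chunks, readable before flush
	jobs  chan writeJob
}

func newChunkWriteQueue(size int, flush func(writeJob)) *chunkWriteQueue {
	q := &chunkWriteQueue{
		byRef: make(map[uint64][]byte, size),
		jobs:  make(chan writeJob, size),
	}
	go func() {
		for j := range q.jobs { // single background writer
			flush(j) // the blocking disk write happens here
			q.mtx.Lock()
			delete(q.byRef, j.ref)
			q.mtx.Unlock()
		}
	}()
	return q
}

// addJob is the non-blocking write path; it only blocks on the
// channel send once the queue is full.
func (q *chunkWriteQueue) addJob(j writeJob) {
	q.mtx.Lock()
	q.byRef[j.ref] = j.data
	q.mtx.Unlock()
	q.jobs <- j
}

// get serves reads for chunks that are still queued.
func (q *chunkWriteQueue) get(ref uint64) ([]byte, bool) {
	q.mtx.Lock()
	defer q.mtx.Unlock()
	b, ok := q.byRef[ref]
	return b, ok
}
```

The buffered channel provides the backpressure, and the ref-keyed map
is what lets reads be served before the flush completes.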
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* address PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* initialize metrics without .Add(0)
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* change isRunningMtx to normal lock
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* do not re-initialize chunkrefmap
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* update metric outside of lock scope
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* add benchmark for adding job to chunk write queue
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* remove unnecessary "success" var
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* gofumpt -extra
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* avoid WithLabelValues call in addJob
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* format comments
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* addressing PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* rename cutExpectRef to cutAndExpectRef
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* use head.Init() instead of .initTime()
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* address PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* PR feedback
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* update test according to PR feedback
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* replace callbackWg -> awaitCb
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* better test of truncation with empty files
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
* replace callbackWg -> awaitCb
Signed-off-by: Mauro Stettler <mauro.stettler@gmail.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Previously we would reject an increase from 2 to 2.5 as being
within 30%; by rounding up first we see this as an increase from 2 to 3.
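In sketch form (shouldResize and the 30% band encoding are
illustrative, not the actual queue-manager code):

```go
package remote

import "math"

// shouldResize rounds the desired shard count up *before* applying
// the +/-30% dead band. With current = 2 and desired = 2.5, ceil
// gives 3, a 50% increase, so the resize now goes ahead; comparing
// 2.5 directly would have kept it inside the band.
func shouldResize(current int, desired float64) bool {
	rounded := math.Ceil(desired)
	lower, upper := 0.7*float64(current), 1.3*float64(current)
	return rounded < lower || rounded > upper
}
```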
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Change the coefficient from 1% to 5%, so instead of aiming to clear
the backlog in 100s we aim to clear it in 20s.
Update unit test to reflect the new behaviour.
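The arithmetic behind those numbers (a sketch, not the actual shard
calculation code):

```go
package remote

// timeToClearSeconds: if we add sending capacity proportional to the
// backlog (extra throughput = coefficient * backlog samples/s), the
// backlog clears in backlog / (coefficient * backlog) = 1/coefficient
// seconds, independent of its size.
func timeToClearSeconds(coefficient float64) float64 {
	return 1 / coefficient
}

// timeToClearSeconds(0.01) == 100 // old target
// timeToClearSeconds(0.05) == 20  // new target
```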
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Note that go.mod has been manually modified to use the following
versions:
- k8s.io/api v0.22.4
- k8s.io/apimachinery v0.22.4
- k8s.io/client-go v0.22.4
The current versions of those modules require Go1.17, which we only
intend to require once Go1.18 is out.
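i.e. an excerpt like this in go.mod (other requirements omitted):

```
require (
	k8s.io/api v0.22.4
	k8s.io/apimachinery v0.22.4
	k8s.io/client-go v0.22.4
)
```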
Signed-off-by: beorn7 <beorn@grafana.com>
Unexported postingsWithIndexHeap's methods that don't need to be
exported, and added detailed comments.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* tsdb: fix exemplar benchmarks
Go benchmarks are expected to do an amount of work that varies with
the `b.N` parameter. Previously these benchmarks would report a result
like 0.01 ns/op, which is nonsense.
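For reference, the expected shape (a generic sketch, not the actual
exemplar benchmark):

```go
package example

import "testing"

var sink int

// doWork stands in for the operation under measurement.
func doWork() { sink++ }

// The testing harness grows b.N until the loop runs long enough to
// time reliably. A benchmark whose work ignores b.N divides a fixed
// cost by a huge N and reports nonsense like 0.01 ns/op.
func BenchmarkDoWork(b *testing.B) {
	for i := 0; i < b.N; i++ {
		doWork()
	}
}
```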
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* tsdb: use simpler map key to improve exemplar perf
Prometheus holds an index of exemplars so it can discard the oldest one
for a series when a new one is added.
Since the keys are not for human eyes, we can use a simpler format
and save the effort of quoting label values.
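The idea, sketched with an illustrative helper (not the actual key
format used):

```go
package exemplars

import "strings"

// seriesKey joins raw label name/value strings with a separator byte
// that can never occur in valid UTF-8, instead of rendering the
// quoted, human-readable form like {job="api",instance="..."}. The
// key only needs to be unique, not readable.
func seriesKey(pairs []string) string {
	return strings.Join(pairs, "\xff")
}
```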
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Exemplars: allocate index map with estimated size
This avoids Go having to re-size the map several times as it grows.
16 exemplars per series is a guess; if it is too low then the map will
be sparse, while if it is too high then the map will have to resize once
or twice.
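Sketched (names illustrative):

```go
package exemplars

type indexEntry struct{ oldest, newest int }

// newIndex pre-sizes the series index: with a circular buffer of
// `capacity` exemplars and the assumed 16 exemplars per series, we
// expect roughly capacity/16 series, so Go doesn't have to rehash
// the map several times while it fills.
func newIndex(capacity int) map[string]*indexEntry {
	return make(map[string]*indexEntry, capacity/16)
}
```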
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Since `/api/v1/write` is a mutating endpoint, we should still activate
the remote-write-receiver explicitly. But we should do it in the same
way as the other mutating endpoints, i.e. via a flag
`--web.enable-remote-write-receiver`.
This commit marks the feature flag as deprecated, i.e. it still works
but logs a warning on startup. This enables users to seamlessly
migrate. With the next minor release, we can start ignoring the
feature flag (but still warn a user that is trying to use it).
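For illustration, the migration is just a flag change:

```
# deprecated (still works for now, logs a warning on startup):
prometheus --enable-feature=remote-write-receiver
# preferred:
prometheus --web.enable-remote-write-receiver
```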
Signed-off-by: beorn7 <beorn@grafana.com>
See this comment for detailed explanation:
https://github.com/prometheus/prometheus/pull/9907#issuecomment-1002189932
TL;DR: if we don't call Pop() on the heap implementation, we don't
need to return our param as an `interface{}`, so we save an
allocation. Pop() would otherwise be called for every label value, so
this can mean thousands of saved allocations here (see benchmarks).
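The trick, sketched on a generic int min-heap (not the actual
postingsWithIndexHeap code):

```go
package sketch

import "container/heap"

// intHeap is a plain int min-heap.
type intHeap []int

func (h intHeap) Len() int           { return len(h) }
func (h intHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

// Push and Pop exist only to satisfy heap.Interface; since we never
// call heap.Pop, Pop never has to box an element into interface{}.
func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() interface{}   { panic("use popMin instead") }

// popMin removes the minimum without heap.Pop: move the last element
// to the root, shrink the slice, and restore heap order with
// heap.Fix. Nothing crosses an interface{} boundary, so nothing is
// allocated per call.
func (h *intHeap) popMin() int {
	m := (*h)[0]
	n := len(*h)
	(*h)[0] = (*h)[n-1]
	*h = (*h)[:n-1]
	if n > 1 {
		heap.Fix(h, 0)
	}
	return m
}
```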
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
This adds the following:
- Find a new volunteer for the release shepherd if needed.
- Consider experimental features for promotion/demotion.
The latter may need a more detailed explanation, but I guess we can
add one once we have the planned metrics for feature flags. For now,
I just wanted to have that point in so that it is not completely
forgotten.
Signed-off-by: beorn7 <beorn@grafana.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
In a meeting at Grafana Labs, we had the shocking revelation that the
next release is scheduled to happen soon, but there is no release
shepherd for it yet.
(Which reminds me: Perhaps we should make it a responsibility of the
release shepherd to make sure there is one for the next release, too?)
Given my current part-time commitment, I would actually try to avoid
being the release shepherd, but on the other hand, it's about time
for a Grafanista to do the job again, and the only one in the room
who could do it at all was me. @csmarchbanks volunteered for the next
release. So here we are.
Of course, if any other team member would like to jump in and do this
release or the next, we'll happily insert you into the list.
Signed-off-by: beorn7 <beorn@grafana.com>
We have an alert that fires when prometheus_remote_storage_highest_timestamp_in_seconds - prometheus_remote_storage_queue_highest_sent_timestamp_seconds
becomes too high. But we have an agent where this alert fires whenever the remote end "rate-limits" the user,
because prometheus_remote_storage_queue_highest_sent_timestamp_seconds doesn't get updated
when the remote sends a 429.
I think we should update the metric, and the change I made makes sense: if the request fails
because of connectivity issues, etc., we never exit the `sendWriteRequestWithBackoff` function. It only
exits when there is a non-recoverable error, like a bad status code, and in that case, I think
the metric needs to be updated.
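In sketch form, the shape of the fix (all names below are
illustrative placeholders, not the actual queue-manager internals):

```go
package remote

import "github.com/prometheus/client_golang/prometheus"

type shard struct {
	highestSentTimestamp prometheus.Gauge
	failedSamples        prometheus.Counter
}

// sendWriteRequestWithBackoff retries recoverable errors
// (connectivity, 5xx) internally and only returns on success or a
// non-recoverable error such as a 429 status.
func (s *shard) sendWriteRequestWithBackoff(batch [][]byte) error {
	return nil // retry loop elided in this sketch
}

// sendBatch sets the sent-timestamp watermark on both paths: once
// sendWriteRequestWithBackoff returns, the batch is finished (sent
// or dropped for good), so the metric should advance either way.
func (s *shard) sendBatch(batch [][]byte, highestTimestampMs int64) {
	err := s.sendWriteRequestWithBackoff(batch)
	s.highestSentTimestamp.Set(float64(highestTimestampMs) / 1000)
	if err != nil {
		s.failedSamples.Add(float64(len(batch)))
	}
}
```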
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>