* Update querier.go to support Head compaction with histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Add test for Head compaction with histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix tests
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Do not panic on histoAppender.Append
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* M-map all chunks on shutdown
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Support negative schema for querying
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Extend promtool to support compaction analysis
This commit extends the promtool tsdb analyze command to help
troubleshoot high Prometheus disk usage. The command now plots a
distribution of how full chunks are relative to the maximum capacity of
120 samples per chunk.
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
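As a rough illustration of the distribution this adds, here is a minimal sketch of bucketing chunk fullness against the 120-sample maximum. The function name, bucket boundaries, and output format are hypothetical and not the actual promtool code.

```go
// Hypothetical sketch of bucketing chunk fullness for an analyze-style report.
package main

import "fmt"

const maxSamplesPerChunk = 120

// fullnessDistribution counts chunks per fullness decile (0-10%, 10-20%, ...).
func fullnessDistribution(sampleCounts []int) [10]int {
	var dist [10]int
	for _, n := range sampleCounts {
		idx := int(float64(n) / maxSamplesPerChunk * 10)
		if idx > 9 { // a completely full chunk lands in the last bucket
			idx = 9
		}
		dist[idx]++
	}
	return dist
}

func main() {
	dist := fullnessDistribution([]int{5, 30, 60, 119, 120})
	for i, c := range dist {
		fmt.Printf("%3d%% - %3d%%: %d chunks\n", i*10, (i+1)*10, c)
	}
}
```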
* Update cmd/promtool/tsdb.go
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix: Use json.Unmarshal() instead of json.Decoder
See https://ahmet.im/blog/golang-json-decoder-pitfalls/
json.Decoder is for JSON streams, not single JSON objects / bodies.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
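A minimal sketch of the pattern this commit switches to: read the whole request body and json.Unmarshal it, rather than using json.Decoder, which is meant for streams of JSON values. The handler and payload type here are hypothetical.

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
)

type payload struct {
	Name string `json:"name"`
}

func handle(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var p payload
	// json.Unmarshal rejects trailing garbage after the object, unlike a
	// single Decode call on a json.Decoder, which stops after the first value.
	if err := json.Unmarshal(body, &p); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/example", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```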
* Revert modifications to targetgroup parsing
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* bucketIterator which returns all valid bucket indices for a []span
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
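To make the span layout concrete, here is a hedged sketch of expanding a []Span into the absolute bucket indices it covers, assuming the usual sparse-histogram convention that the first span's offset is the starting index and each later span's offset is the gap to the end of the previous span. This is an illustration, not the actual bucketIterator implementation.

```go
package main

import "fmt"

// Span describes a run of consecutive buckets.
type Span struct {
	Offset int32  // gap to the previous span (or starting index for the first span)
	Length uint32 // number of consecutive buckets in this span
}

// bucketIndices expands spans into the absolute bucket indices they cover.
func bucketIndices(spans []Span) []int32 {
	var idx int32
	var out []int32
	for _, s := range spans {
		idx += s.Offset
		for i := uint32(0); i < s.Length; i++ {
			out = append(out, idx)
			idx++
		}
	}
	return out
}

func main() {
	// Indices 0-2, then a gap of 3, then indices 6-7.
	fmt.Println(bucketIndices([]Span{{Offset: 0, Length: 3}, {Offset: 3, Length: 2}}))
}
```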
* support for comparing []spans and generating interjections
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add license header
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* assert order fix
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* handle pathological 0-length span case more gracefully
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* stale todo
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* decode-recode histograms when new buckets appear
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* factor out recoding and also add it to the fallback case
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
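A hedged illustration of the recoding idea in the two commits above: when an incoming histogram introduces bucket indices the chunk has not seen before, the bucket counts of previously appended histograms are expanded with zero counts at the interjected positions, so every sample in the chunk keeps a single shared layout. Names and types are hypothetical; the real code operates on spans and delta-encoded counts.

```go
package main

import "fmt"

// expandCounts maps counts laid out over oldIdx onto newIdx (a superset of
// oldIdx), filling the newly interjected buckets with zero.
func expandCounts(counts []int64, oldIdx, newIdx []int32) []int64 {
	out := make([]int64, len(newIdx))
	j := 0
	for i, idx := range newIdx {
		if j < len(oldIdx) && oldIdx[j] == idx {
			out[i] = counts[j]
			j++
		}
		// Otherwise this is a newly appeared bucket; its count stays 0.
	}
	return out
}

func main() {
	oldIdx := []int32{0, 1, 4}
	newIdx := []int32{0, 1, 2, 4, 5}
	fmt.Println(expandCounts([]int64{3, 1, 7}, oldIdx, newIdx)) // [3 1 0 7 0]
}
```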
* make linter happy
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Add sorting and filtering to flags page
Signed-off-by: Dustin Hooten <dustinhooten@gmail.com>
* Make filter understand
Signed-off-by: Dustin Hooten <dustinhooten@gmail.com>
* split big state object into smaller ones
Signed-off-by: Dustin Hooten <dustinhooten@gmail.com>
* use fuzzy match and sanitize html for search results
Signed-off-by: Dustin Hooten <dustinhooten@gmail.com>
* use fuzzy.filter
Signed-off-by: Dustin Hooten <dustinhooten@gmail.com>
* replace fuzzy lib by @nexucis/fuzzy + fix flags issues
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* replace fuzzy by @nexucis/fuzzy in ExpressionInput.tsx
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* remove fuzzy lib from package.json
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix flags test
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* simplify the input in the fuzzy search
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* cleanup html to be easily compatible with the dark theme
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix filtering when there is no result
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* use id to fix the test
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
Co-authored-by: Dustin Hooten <dustinhooten@gmail.com>
This "brings back" protobuf parsing, with the only goal to play with
the new sparse histograms.
The Prom-2.x style parser is highly adapted to the structure of the
Prometheus text format (and later OpenMetrics). Some jumping through
hoops is required to feed protobuf into it.
This is not meant to be a model for the final implementation. It
should just enable sparse histogram ingestion at a reasonable
efficiency.
Known shortcomings and flaws:
- No tests yet.
- Summaries and legacy histograms, i.e. those without sparse buckets, are
ignored.
- Staleness doesn't work (but this could be fixed in the appender, to
be discussed).
- No tricks have been tried that would be similar to the tricks the
text parsers do (like direct pointers into the HTTP response
body). That makes things weird here. Tricky optimizations only make
sense once the final format is specified, which will almost
certainly not be the old protobuf format. (Interestingly, I expect
this implementation to be in fact much more efficient than the
original protobuf ingestion in Prom-1.x.)
- This is using a proto3 version of metrics.proto (mostly to be
consistent with the other protobuf uses). However, proto3 sees no
difference between an unset field and one set to its zero value, so we
cannot distinguish between an unset timestamp and the timestamp 0
(1970-01-01, 00:00:00 UTC). In this experimental code, we just assume
that the timestamp is never specified, and a timestamp of 0 is therefore
always interpreted as "not set" (see the sketch below).
Signed-off-by: beorn7 <beorn@grafana.com>
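A minimal sketch of the timestamp assumption described above: a zero timestamp coming from the proto3 message is treated as "not set" and replaced by a default, here the scrape time. The function name and signature are illustrative only.

```go
package main

import (
	"fmt"
	"time"
)

// effectiveTimestamp treats a zero proto3 timestamp as "not set" and falls
// back to the scrape time, in milliseconds since the epoch.
func effectiveTimestamp(protoTimestampMs int64, scrapeTime time.Time) int64 {
	if protoTimestampMs == 0 {
		return scrapeTime.UnixNano() / int64(time.Millisecond)
	}
	return protoTimestampMs
}

func main() {
	fmt.Println(effectiveTimestamp(0, time.Now()))
}
```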
* Added feature flag support to unit tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added/fixed tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Addressed review comments
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* factor out different varbit schemes and include Beorn's "optimum" for buckets
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* use more compact dod encoding scheme for SHS chunk columns
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* remove FB VB and xor dod encoding because we won't use it
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* HistoChunk metadata encoding
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add SparseHistogram.Copy()
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* histogram test: test appending a few histograms
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* add license headers
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Added selection flot plugin
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added time selection
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Removed irrelevant line in license header
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Append sparse histograms into the Head block
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Add AtHistogram() to Iterator interface. Make HistoChunk conform to Chunk interface.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
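A rough sketch of what the extended iterator interface could look like; the real definitions live in tsdb/chunkenc and may differ in detail, and SparseHistogram here is only a stand-in type.

```go
package chunkenc

// SparseHistogram is a placeholder for the sparse-histogram value type
// (schema, spans, bucket counts, ...).
type SparseHistogram struct{}

// Iterator iterates over the samples of a chunk.
type Iterator interface {
	Next() bool
	Seek(t int64) bool
	// At returns the current float sample.
	At() (int64, float64)
	// AtHistogram returns the current sparse-histogram sample, with the
	// timestamp carried separately from the histogram value.
	AtHistogram() (int64, SparseHistogram)
	Err() error
}
```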
* integer types and timestamp separation
1) Unify types to int64, as suggested by beorn: we want to support
counters going down (resets), even though for now we plan to create new
chunks in that case.
2) The histogram type doesn't know its own timestamp; include it
separately in appending and iteration (see the sketch below).
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* correction: count and zeroCount to remain unsigned
to make the API more resilient, and that's what we use in protobuf anyway
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
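A hedged sketch of the value-type choices described in the two commits above: bucket values as int64, count and zeroCount unsigned, and the timestamp passed alongside the histogram rather than stored in it. Field and method names are illustrative, not the actual Prometheus definitions.

```go
package main

import "fmt"

// SparseHistogram is an illustrative stand-in for the histogram value type.
type SparseHistogram struct {
	Count, ZeroCount uint64  // stay unsigned, matching the protobuf
	Sum              float64 // sum of all observations
	PositiveBuckets  []int64 // int64 so that decreases are representable
	NegativeBuckets  []int64
}

// appender shows the timestamp travelling separately from the histogram,
// as in iteration; the name is hypothetical.
type appender interface {
	AppendHistogram(t int64, h SparseHistogram)
}

func main() {
	h := SparseHistogram{PositiveBuckets: []int64{2, -1, 3}} // deltas may be negative
	fmt.Println(h)
}
```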
* temp hack. Ganesh will fix
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* promtool: backfill: allow configuring block duration
When backfilling large amounts of data across long periods of time, it
may in certain circumstances be useful to use a longer block duration to
increase the efficiency and speed of the backfilling process. This patch
adds a flag, --block-duration-power, that lets the user choose a power N
such that the block duration is 2^(N+1)h.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
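A minimal sketch of the flag-to-duration mapping described above (--block-duration-power=N gives a block duration of 2^(N+1) hours); the surrounding code is illustrative, not the actual promtool implementation.

```go
package main

import (
	"fmt"
	"time"
)

// blockDuration maps the --block-duration-power value N to 2^(N+1) hours.
func blockDuration(power int) time.Duration {
	return time.Duration(1<<(power+1)) * time.Hour
}

func main() {
	for _, n := range []int{0, 1, 5} {
		fmt.Printf("power %d -> %s\n", n, blockDuration(n)) // 0 gives the default 2h
	}
}
```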
* promtool: use sub-tests in backfill testing
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* backfill: add messages to tests for clarity
When someone new breaks a test, seeing "expected: false, got: true" is
really not useful. A nice message helps here.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* backfill: test long block durations
A test that uses a long block duration to write bigger blocks is added.
The check to make sure all blocks are the default duration is removed.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
All this is doing is wrapping the inner alert details display with a
conditional `{open && ...}`.
This already improves https://github.com/prometheus/prometheus/issues/8548 a
lot for cases where there are many individual firing/pending alert elements
under each alerting rule.
E.g. for a list of 200 rules with ~100 alert elements each, this changed the page
render time from 30 seconds to 1s.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
We cannot just use prometheus/client_model directly because we want to
stay consistent with the use of gogo-protobuf. So this converts
metrics.proto to proto3 and edits it lightly so that it fits into
the framework of how prometheus/prometheus handles protobuf.
Note that metrics.proto couldn't be merged into the prompb package
because prompb already has an Exemplar type, which is different from
the Exemplar type in metrics.proto. The directory structure seems to
play a role in the protobuf world, so I thought it better to keep it.
Signed-off-by: beorn7 <beorn@grafana.com>
Push updates to the repo sync PRs if there is already a PR open. This
allows for cumulative updates to be synced.
Signed-off-by: SuperQ <superq@gmail.com>
* Added MaxSamplesPerSend
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed order of require
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added docs
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* writes -> writesReceived
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Improved send loop
Signed-off-by: Levi Harrison <git@leviharrison.dev>