Commit graph

55 commits

Author SHA1 Message Date
György Krajcsovits 79020b1e85 Comment float histogram as well
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2024-06-18 15:22:03 +02:00
György Krajcsovits c309f50ee7 Add comment to state intent of check
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2024-06-18 15:21:17 +02:00
György Krajcsovits 0793a26d96 native histograms: only reduce resolution for exponential histograms
Currently we can only reduce the resolution of exponential native
histograms, so checking the schema for that is slightly more precise
than checking against max schema.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2024-06-11 11:17:14 +02:00
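A minimal Go sketch of the kind of check this commit describes: resolution is only reduced when the schema lies in the exponential range, so custom-bucket histograms pass through untouched. The constants and helper names here are illustrative, not the repository's own:

```go
package main

import "fmt"

// Illustrative bounds only; the model/histogram package defines the canonical
// exponential schema range.
const (
	minExponentialSchema = -4
	maxExponentialSchema = 8
)

// isExponentialSchema reports whether a native histogram schema is in the
// exponential range and can therefore have its resolution reduced.
func isExponentialSchema(schema int32) bool {
	return schema >= minExponentialSchema && schema <= maxExponentialSchema
}

// maybeReduce sketches the intent of the check: exponential histograms above
// the configured maximum schema are downscaled, anything else is left alone.
func maybeReduce(schema, maxSchema int32) int32 {
	if isExponentialSchema(schema) && schema > maxSchema {
		return maxSchema
	}
	return schema
}

func main() {
	fmt.Println(maybeReduce(5, 3))   // exponential: reduced to 3
	fmt.Println(maybeReduce(-53, 3)) // custom-buckets-style schema: untouched
}
```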
Jeanette Tan 9adc1699c3 fix according to code review
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2024-06-07 18:50:59 +08:00
György Krajcsovits 5d0a0a7542 Add custom buckets to native histogram model (#13592)
* add custom buckets to native histogram model
* simple copy for custom bounds
* return errors for unsupported add/sub operations
* add test cases for string and update appendhistogram in scrape to account for new schema
* check fields which are supposed to be unused but may affect results in equals
* allow appending custom buckets histograms regardless of max schema

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2024-02-28 14:06:43 +01:00
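A rough sketch of what the custom-buckets extension to the model looks like, using stand-in types and an illustrative sentinel schema; the real fields and constants live in model/histogram and may differ in detail:

```go
package main

import "fmt"

// customBucketsSchema is an illustrative sentinel; the real model package
// reserves a dedicated schema value for custom-bucket histograms.
const customBucketsSchema = -53

// sketchHistogram is a stripped-down stand-in for the model's histogram type,
// showing only the fields relevant to custom buckets.
type sketchHistogram struct {
	Schema       int32
	CustomValues []float64 // custom upper bounds, used instead of exponential spans
	BucketCounts []float64
}

// usesCustomBuckets mirrors the kind of schema check the scrape code needs:
// custom-bucket histograms are appended regardless of the max-schema setting.
func usesCustomBuckets(h *sketchHistogram) bool {
	return h.Schema == customBucketsSchema
}

func main() {
	h := &sketchHistogram{
		Schema:       customBucketsSchema,
		CustomValues: []float64{0.1, 0.5, 1, 5},
		BucketCounts: []float64{3, 7, 2, 1, 0},
	}
	fmt.Println(usesCustomBuckets(h))
}
```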
Bryan Boreham 9ba13de220 scraping: pass a Builder to get Target labels
This saves memory allocations from making a new Builder every time.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
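A small sketch of the allocation-saving pattern: one labels.Builder reused across targets via Reset instead of a fresh Builder per call. It uses the real model/labels package, but Builder.Labels' signature has changed over the history shown in this log, so treat it as a sketch against recent versions:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	// One Builder, reused for every target instead of allocating a new one.
	b := labels.NewBuilder(labels.EmptyLabels())

	targets := []labels.Labels{
		labels.FromStrings("job", "node", "instance", "a:9100"),
		labels.FromStrings("job", "node", "instance", "b:9100"),
	}

	for _, lset := range targets {
		b.Reset(lset)                    // start from the target's labels
		b.Set("region", "eu-west-1")     // example decoration
		fmt.Println(b.Labels().String()) // materialize the final label set
	}
}
```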
Ziqi Zhao df2a0ecf3b
Native Histograms: support native_histogram_min_bucket_factor in scrape_config (#13222)
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
2024-01-17 16:58:54 +01:00
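The option works because an exponential schema s has a fixed bucket growth factor of 2^(2^-s), so a minimum bucket factor translates into a cap on the schema. The sketch below illustrates only that arithmetic and is not the repository's implementation:

```go
package main

import (
	"fmt"
	"math"
)

// growthFactor returns the ratio between consecutive bucket boundaries for an
// exponential native-histogram schema s, i.e. 2^(2^-s).
func growthFactor(schema int32) float64 {
	return math.Pow(2, math.Pow(2, float64(-schema)))
}

// maxSchemaFor returns the largest (finest) schema whose growth factor is at
// least minBucketFactor. Illustrative only; the scrape code derives this
// differently, but the arithmetic is the same idea.
func maxSchemaFor(minBucketFactor float64) int32 {
	for s := int32(8); s > -4; s-- {
		if growthFactor(s) >= minBucketFactor {
			return s
		}
	}
	return -4
}

func main() {
	// A factor of 1.1 means adjacent buckets must differ by at least 10%.
	fmt.Println(maxSchemaFor(1.1)) // 2, since 2^(2^-2) ≈ 1.19 but 2^(2^-3) ≈ 1.09
}
```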
Bryan Boreham c83e1fc574 textparse: remove MetricType alias
No backwards-compatibility; make a clean break.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-12-19 18:56:54 +00:00
Bryan Boreham 99c17b4319
Merge pull request #13177 from bboreham/less-madness
scrape: consistent function names for metadata
2023-12-19 17:51:52 +00:00
Bryan Boreham 34676a240e scrape: consistent function names for metadata
Too confusing to have `MetadataList` and `ListMetadata`, etc.
I standardised on the ones which are in an interface.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-23 09:08:02 +00:00
Ziqi Zhao 8fe9250f7d optimize the logic for breaking out of the resolution-reduction loop
Ziqi Zhao 8fe9250f7d optimize the logic for breaking out of the resolution-reduction loop
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-21 16:56:56 +08:00
Ziqi Zhao 10ebeb0a62 rename lastSum -> lastCount && enrich test case with expectBucketCount and expectSchema
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-16 13:00:11 +08:00
Ziqi Zhao b94f32f6fa automatically reduce resolution rather than fail scrape
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-11 22:24:47 +08:00
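A sketch of the behaviour introduced here and refined in the two commits above: keep lowering the schema until the histogram fits under the bucket limit or no further reduction is possible. It assumes the ReduceResolution helper available on the model's histogram types in recent releases:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/histogram"
)

// bucketCount is the quantity compared against the bucket limit: the number
// of populated regular buckets on both sides of zero.
func bucketCount(h *histogram.Histogram) int {
	return len(h.PositiveBuckets) + len(h.NegativeBuckets)
}

// reduceUntilUnderLimit lowers the schema step by step until the histogram
// fits under limit or no further reduction is possible.
func reduceUntilUnderLimit(h *histogram.Histogram, limit int, minSchema int32) *histogram.Histogram {
	for bucketCount(h) > limit && h.Schema > minSchema {
		h = h.ReduceResolution(h.Schema - 1)
	}
	return h
}

func main() {
	h := &histogram.Histogram{
		Schema:          3,
		Count:           10,
		Sum:             25,
		PositiveSpans:   []histogram.Span{{Offset: 0, Length: 8}},
		PositiveBuckets: []int64{1, 1, -1, 0, 1, -1, 0, 0}, // delta-encoded counts
	}
	h = reduceUntilUnderLimit(h, 4, -4)
	fmt.Println(h.Schema, bucketCount(h))
}
```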
Matthieu MOREL fe057fc60d use Go standard errors package
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-11-03 07:26:31 +00:00
Oleksandr Redko fa90ca46e5 ci(lint): enable godot; append dot at the end of comments
Signed-off-by: Oleksandr Redko <Oleksandr_Redko@epam.com>
2023-10-31 19:53:38 +02:00
Oleksandr Redko 8e5f0387a2
ci(lint): enable nolintlint and remove redundant comments (#12926)
Signed-off-by: Oleksandr Redko <Oleksandr_Redko@epam.com>
2023-10-31 12:35:13 +01:00
Julius Volz ac8abdaacd
Rename remaining jitterSeed -> offsetSeed variables (#12414)
I had changed the naming from "jitter" to "offset" in:

cb045c0e4b

...but I forgot to add this file to the commit to complete the renaming,
doing that now.

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2023-06-05 17:36:11 +02:00
Jeanette Tan 2ad39baa72 Treat bucket limit like sample limit and make it fail the whole scrape and return an error
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-22 03:25:07 +08:00
Jeanette Tan 4d21ac23e6 Implement bucket limit for native histograms
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-22 03:14:19 +08:00
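A self-contained sketch of the limit check itself; names such as errBucketLimit are illustrative, but the behaviour mirrors the sample limit: exceed it and the append returns an error that fails the whole scrape:

```go
package main

import (
	"errors"
	"fmt"
)

// errBucketLimit mirrors how a sample-limit violation is surfaced: the append
// returns an error and the whole scrape fails.
var errBucketLimit = errors.New("native histogram bucket limit exceeded")

// checkBucketLimit is a sketch of the per-histogram check; buckets is the
// number of populated buckets and limit comes from scrape configuration
// (0 meaning "no limit").
func checkBucketLimit(buckets, limit int) error {
	if limit > 0 && buckets > limit {
		return fmt.Errorf("%w: %d > %d", errBucketLimit, buckets, limit)
	}
	return nil
}

func main() {
	if err := checkBucketLimit(250, 160); err != nil {
		// In the scrape loop this would abort the scrape and report the error.
		fmt.Println("scrape failed:", err)
	}
}
```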
Matthieu MOREL fb3eb21230 enable gocritic, unconvert and unused linters
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-13 19:20:22 +00:00
Bryan Boreham b987afa7ef labels: simplify call to get Labels from Builder
It took a `Labels` where the memory could be re-used, but in practice
this hardly ever benefitted. Especially after converting `relabel.Process`
to `relabel.ProcessBuilder`.

Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`
so the slice was reallocated multiple times by `append`.

Lastly, `Builder.Labels()` now estimates the final size based on the labels
added and deleted.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-22 17:05:20 +00:00
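A short sketch of the pitfall called out above: labels.EmptyLabels() is a valid empty set rather than nil, so emptiness checks should be explicit. It uses the real model/labels package; the Builder bookkeeping comment reflects this commit's description rather than the exact implementation:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	empty := labels.EmptyLabels()

	// EmptyLabels is a valid, empty label set, not a nil sentinel, so
	// emptiness should be tested explicitly rather than via nil checks.
	fmt.Println(empty.IsEmpty()) // true
	fmt.Println(empty.Len())     // 0

	b := labels.NewBuilder(empty)
	b.Set("job", "node")
	b.Del("job")
	// The Builder tracks additions and deletions, which is what lets
	// Labels() estimate the size of the result it needs to build.
	fmt.Println(b.Labels().IsEmpty()) // true again
}
```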
Bryan Boreham 2fde2fb37d scrape: add Target.LabelsRange
This allows users of a Target to iterate labels without allocating heap memory.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-16 20:35:13 +00:00
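A sketch of the range-style iteration this enables, shown with labels.Labels.Range; Target.LabelsRange exposes the same callback shape for a scrape target:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	lset := labels.FromStrings("job", "node", "instance", "a:9100")

	// Range-style iteration visits each label via a callback, so callers can
	// inspect labels without building an intermediate slice on the heap.
	lset.Range(func(l labels.Label) {
		fmt.Printf("%s=%q\n", l.Name, l.Value)
	})
}
```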
Bryan Boreham f4fd9b0d68 scrape: re-use memory in TargetsFromGroup
Common service discovery mechanisms such as Kubernetes can generate a
lot of target groups, so this function was allocating a lot of memory
which then immediately became garbage. Re-using the structures across
an entire Sync saves effort.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham 5cfe759348 scrape: make TargetsFromGroup work with Builder not []Label
Save work converting to `Labels` then to `Builder`.
`PopulateLabels()` now takes a Builder as input.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham c1dbc7b838 scrape: make PopulateLabels work with Builder not Labels
Save work converting to and fro.

Uses the recently-added relabel.ProcessBuilder variant.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham 91254fb187 Update package scrape for new labels.Labels type
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-12-19 15:22:09 +00:00
Bryan Boreham 8b863c42dd
Optimise relabeling by re-using memory (#11147)
* model/relabel: Add benchmark

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* model/relabel: re-use Builder across relabels

Saves memory allocations.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* labels.Builder: allow re-use of result slice

This reduces memory allocations where the caller has a suitable slice available.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* model/relabel: re-use source values slice

To reduce memory allocations.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Unwind one change causing test failures

Restore original behaviour in PopulateLabels, where we must not overwrite the input set.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* relabel: simplify values optimisation

Use a stack-based array for up to 16 source labels, which covers the
vast majority of cases.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* lint

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2022-08-19 15:27:52 +05:30
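A standalone sketch of the stack-array trick from the last bullet: back the values slice with a fixed-size array so the common case of at most 16 source labels allocates nothing. The function and helper names are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// sourceValue backs the values slice with a fixed-size array so the common
// case (≤16 source labels) allocates nothing, falling back to the heap only
// for unusually long source-label lists.
func sourceValue(get func(string) string, sourceLabels []string, sep string) string {
	var stack [16]string
	values := stack[:0]
	if len(sourceLabels) > cap(values) {
		values = make([]string, 0, len(sourceLabels))
	}
	for _, name := range sourceLabels {
		values = append(values, get(name))
	}
	return strings.Join(values, sep)
}

func main() {
	lset := map[string]string{"job": "node", "instance": "a:9100"}
	get := func(name string) string { return lset[name] }
	fmt.Println(sourceValue(get, []string{"job", "instance"}, ";"))
}
```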
Levi Harrison d61459d826
no-default-scrape-port feature flag (#9523)
* Add `no-default-scrape-port` flag

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2022-07-20 13:35:47 +02:00
Nicholas Blott c92673fb14
Remove check against cfg so interval/timeout are always set (#10023)
Signed-off-by: Nicholas Blott <blottn@tcd.ie>
2021-12-16 13:28:46 +01:00
beorn7 c954cd9d1d Move packages out of deprecated pkg directory
This creates a new `model` directory and moves all data-model related
packages over there:
  exemplar labels relabel rulefmt textparse timestamp value

All the others are more or less utilities and have been moved to `util`:
  gate logging modtimevfs pool runtime

Signed-off-by: beorn7 <beorn@grafana.com>
2021-11-09 08:03:10 +01:00
Dieter Plaetinck cda025b5b5
TSDB: demystify SeriesRefs and ChunkRefs (#9536)
* TSDB: demystify seriesRefs and ChunkRefs

The TSDB package contains many types of series and chunk references,
all shrouded in uint types. Often the same uint value may actually mean
one of several different types, in non-obvious ways.

This PR aims to clarify the code and help navigate to relevant docs,
usage, etc. much quicker.

Concretely:

* Use appropriately named types and document their semantics and
  relations.
* Make multiplexing and demuxing of types explicit
  (on the boundaries between concrete implementations and generic
  interfaces).
* Casting between different types should be free.  None of the changes
  should have any impact on how the code runs.

TODO: Implement BlockSeriesRef where appropriate (for a future PR)

Signed-off-by: Dieter Plaetinck <dieter@grafana.com>

* feedback

Signed-off-by: Dieter Plaetinck <dieter@grafana.com>

* agent: demystify seriesRefs and ChunkRefs

Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
2021-11-06 15:40:04 +05:30
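A compact sketch of the idea: give each kind of reference its own named type and make any packing or unpacking an explicit conversion. The type names and packing layout here are illustrative, not the TSDB's actual ones:

```go
package main

import "fmt"

// Named reference types make it explicit which kind of ref a uint64 carries;
// the real TSDB defines several of these.
type (
	SeriesRef    uint64 // reference to a series, as handed out by an appender
	HeadChunkRef uint64 // reference to a chunk held in the head block
)

// packChunkRef shows the "explicit multiplexing" idea: when two smaller IDs
// are packed into one ref, the conversion is spelled out rather than implied.
func packChunkRef(series SeriesRef, chunkID uint64) HeadChunkRef {
	return HeadChunkRef(uint64(series)<<32 | chunkID&0xffffffff)
}

func main() {
	ref := packChunkRef(SeriesRef(7), 3)
	fmt.Printf("series=%d chunk=%d\n", uint64(ref)>>32, uint64(ref)&0xffffffff)
}
```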
Darshan Chaudhary a7e554b158
add check service-discovery command (#8970)
Signed-off-by: darshanime <deathbullet@gmail.com>
2021-11-01 14:42:12 +01:00
DrAuYueng 69e309d202
Expose TargetsFromGroup/AlertmanagerFromGroup func and reuse this for static/file sd config check in promtool (#9343)

Signed-off-by: DrAuYueng <ouyang1204@gmail.com>
2021-10-28 02:01:28 +02:00
Julien Pivotto b0c98e01c8
Include scrape labels in the hash (#9551)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2021-10-20 23:44:45 +02:00
Levi Harrison 70f597b033
Configure Scrape Interval and Timeout Via Relabeling (#8911)
* Configure scrape interval and timeout with labels

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2021-08-31 17:37:32 +02:00
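A sketch of how the feature plugs together: relabeling writes per-target values into the reserved __scrape_interval__ / __scrape_timeout__ labels, and the target builder parses them back into durations. The parsing below is a simplified stand-in for the scrape code:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/model/labels"
)

// Reserved label names used by this feature; relabeling may rewrite their
// values per target before the target is built.
const (
	scrapeIntervalLabel = "__scrape_interval__"
	scrapeTimeoutLabel  = "__scrape_timeout__"
)

func main() {
	// Pretend relabeling produced this label set for one target.
	lset := labels.FromStrings(
		scrapeIntervalLabel, "30s",
		scrapeTimeoutLabel, "10s",
		"job", "node",
	)

	interval, err := model.ParseDuration(lset.Get(scrapeIntervalLabel))
	if err != nil {
		panic(err)
	}
	timeout, err := model.ParseDuration(lset.Get(scrapeTimeoutLabel))
	if err != nil {
		panic(err)
	}
	fmt.Println("interval:", interval, "timeout:", timeout)
}
```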
Levi Harrison 2826fbeeb7
SD: Add target creation failure counter and change failure handling (#8786)
* Added metric and changed failure/drop strategy

Signed-off-by: Levi Harrison <git@leviharrison.dev>
2021-05-28 23:50:59 +02:00
Tom Wilkie 7369561305
Combine Appender.Add and AddFast into a single Append method. (#8489)
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.

This makes the API easier to consume and implement.  In particular this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it as it needs access to label values.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
2021-02-18 17:37:00 +05:30
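A minimal sketch of the cached-ref contract described above, using a locally defined interface rather than the real storage.Appender signature: a zero ref triggers the label lookup, and the returned ref is what callers cache for the fast path:

```go
package main

import "fmt"

// appender is a local sketch of the combined Append API: a zero ref means
// "look the series up by labels", a non-zero ref is the fast path, and the
// returned ref is cached by the caller for the next append.
type appender interface {
	Append(ref uint64, lset string, t int64, v float64) (uint64, error)
}

type memAppender struct {
	refs map[string]uint64
	next uint64
}

func (a *memAppender) Append(ref uint64, lset string, t int64, v float64) (uint64, error) {
	if ref == 0 { // slow path: resolve the series from its labels
		if r, ok := a.refs[lset]; ok {
			ref = r
		} else {
			a.next++
			ref = a.next
			a.refs[lset] = ref
		}
	}
	fmt.Printf("ref=%d %s %d %g\n", ref, lset, t, v)
	return ref, nil
}

func main() {
	var app appender = &memAppender{refs: map[string]uint64{}}
	ref, _ := app.Append(0, `up{job="node"}`, 1000, 1) // first append resolves the ref
	app.Append(ref, `up{job="node"}`, 2000, 1)         // later appends reuse it
}
```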
gotjosh 4eca4dffb8
Allow metric metadata to be propagated via Remote Write. (#6815)
* Introduce a metadata watcher

Similarly to the WAL watcher, its purpose is to observe the scrape manager and pull metadata, then send it to remote storage.

Signed-off-by: gotjosh <josue@grafana.com>

* Additional fixes after rebasing.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Rework samples/metadata metrics.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Use more descriptive variable names in MetadataWatcher collect.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix issues caused during rebasing.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix missing metric add and unneeded config code.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Address some review comments.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix metrics and docs

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Replace assert with require

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Bring back max_samples_per_send metric

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix tests

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

Co-authored-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-11-19 20:53:03 +05:30
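A much-reduced sketch of the watcher loop described in the first bullet; every name here is hypothetical, and the real component talks to the scrape manager and the remote-write machinery rather than to these toy interfaces:

```go
package main

import (
	"fmt"
	"time"
)

// All names here are hypothetical stand-ins for the real components.
type metadataSource interface{ ListMetadata() []string }
type metadataSender interface{ SendMetadata([]string) }

// watch is the WAL-watcher-like loop the commit describes: periodically pull
// metadata from the source and forward it to the sender.
func watch(src metadataSource, dst metadataSender, every time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			dst.SendMetadata(src.ListMetadata())
		case <-stop:
			return
		}
	}
}

type fakeSource struct{}

func (fakeSource) ListMetadata() []string { return []string{"up", "node_cpu_seconds_total"} }

type printSender struct{}

func (printSender) SendMetadata(names []string) { fmt.Println("forwarding metadata for", names) }

func main() {
	stop := make(chan struct{})
	go watch(fakeSource{}, printSender{}, 10*time.Millisecond, stop)
	time.Sleep(25 * time.Millisecond)
	close(stop)
}
```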
Bartlomiej Plotka 34426766d8 Unify Iterator interfaces. All point to storage now.
This is a part of https://github.com/prometheus/prometheus/pull/5882 that can be done now to simplify things.
All TODOs I added will be fixed in follow-up PRs.

* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports now point to the storage package.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.

No logic was changed, only different source of abstractions, so no need for benchmarks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:54 +00:00
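A simplified sketch of the unified iterator shape at the time of this change, including the newly added Seek; the real chunkenc.Iterator has since evolved (e.g. to report value types), so this is only an approximation:

```go
package main

import "fmt"

// iterator is a simplified sketch of the unified sample iterator shape at the
// time of this change: one interface with Next, Seek, At and Err, shared by
// querier- and chunk-level code instead of two parallel definitions.
type iterator interface {
	Next() bool
	Seek(t int64) bool // advance to the first sample at or after t
	At() (int64, float64)
	Err() error
}

type sliceIterator struct {
	ts  []int64
	vs  []float64
	cur int
}

func (it *sliceIterator) Next() bool { it.cur++; return it.cur < len(it.ts) }
func (it *sliceIterator) Seek(t int64) bool {
	if it.cur < 0 {
		it.cur = 0
	}
	for it.cur < len(it.ts) && it.ts[it.cur] < t {
		it.cur++
	}
	return it.cur < len(it.ts)
}
func (it *sliceIterator) At() (int64, float64) { return it.ts[it.cur], it.vs[it.cur] }
func (it *sliceIterator) Err() error           { return nil }

func main() {
	var it iterator = &sliceIterator{ts: []int64{1000, 2000, 3000}, vs: []float64{1, 2, 3}, cur: -1}
	if it.Seek(1500) {
		fmt.Println(it.At()) // 2000 2
	}
}
```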
gotjosh 8b49c9285d
scrape: Add metrics to track bytes and entries in the metadata cache (#6675)
Signed-off-by: gotjosh <josue@grafana.com>
2020-01-29 11:13:18 +00:00
gotjosh 05842176a6 Make the scrape.metricMetadataStore interface public
To test the implementation of our metric metadata API, we need to represent various states of metadata in the scrape metadata store. That is currently not possible as the interface and method to set the store are private.

This makes the interface, its list and get methods, and the SetMetadataStore function public.

Incidentally, the scrapeCache implementation needs to be renamed to match the new signature.

Signed-off-by: gotjosh <josue@grafana.com>
2019-12-05 10:29:58 +00:00
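A sketch of why making the store public helps testing: with an exported interface, a test can plug in its own implementation holding whatever metadata state it needs. Field and method names approximate the real ones:

```go
package main

import "fmt"

// metricMetadata and the store interface below are approximations of the
// exported types this commit describes.
type metricMetadata struct {
	Metric, Type, Help, Unit string
}

type metricMetadataStore interface {
	ListMetadata() []metricMetadata
	GetMetadata(metric string) (metricMetadata, bool)
}

// fakeStore is the kind of test double the commit message motivates.
type fakeStore struct{ m map[string]metricMetadata }

func (s fakeStore) ListMetadata() []metricMetadata {
	out := make([]metricMetadata, 0, len(s.m))
	for _, md := range s.m {
		out = append(out, md)
	}
	return out
}

func (s fakeStore) GetMetadata(metric string) (metricMetadata, bool) {
	md, ok := s.m[metric]
	return md, ok
}

func main() {
	var store metricMetadataStore = fakeStore{m: map[string]metricMetadata{
		"up": {Metric: "up", Type: "gauge", Help: "Target is up.", Unit: ""},
	}}
	if md, ok := store.GetMetadata("up"); ok {
		fmt.Println(md.Type, md.Help)
	}
}
```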
Dustin Hooten ca60bf298c React UI: Implement /targets page (#6276)
* Add LastScrapeDuration to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Add Scrape job name to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Implement the /targets page in react

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Add state query param to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Use state filter in api call

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* api feedback

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* pr feedback frontend

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Implement and use localstorage hook

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* PR feedback

Signed-off-by: Dustin Hooten <dhooten@splunk.com>
2019-11-11 22:42:24 +01:00
AllenZMC 9e47bb8b46 fix word 'parmeters' to 'parameters' (#5826)
Signed-off-by: czm <zhongming.chang@daocloud.io>
2019-08-02 14:52:15 +01:00
Simon Pasquier 45506841e6
*: enable all default linters (#5504)
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2019-05-03 15:11:28 +02:00
Tariq Ibrahim 8fdfa8abea refine error handling in prometheus (#5388)
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of fmt package for errors in favour of pkg/errors

Signed-off-by: tariqibrahim <tariq181290@gmail.com>
2019-03-26 00:01:12 +01:00
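A tiny example of the two idioms being enforced, using the pkg/errors API referenced here (a newer commit near the top of this list later moves back to the standard errors package):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func scrape(target string) error {
	// No formatting directives: errors.New, not fmt.Errorf.
	base := errors.New("connection refused")
	// Nested error with context: Wrapf instead of pasting strings together.
	return errors.Wrapf(base, "scrape of target %q failed", target)
}

func main() {
	err := scrape("http://localhost:9100/metrics")
	fmt.Println(err)
	fmt.Println(errors.Cause(err)) // the original error is still reachable
}
```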
xjewer 0d1a69353e scrape: Add global jitter for HA server (#5181)
* scrape: Add global jitter for HA server

Covers the issue in https://github.com/prometheus/prometheus/pull/4926#issuecomment-449039848
where an HA setup becomes a problem for targets that cannot handle being scraped simultaneously.
The new per-server jitter relies on the hostname and external labels, which need to be unique.

As before, the scrape offset is calculated relative to absolute time, so even a
restart/reload doesn't change the scrape time per scrape target + Prometheus instance.

Use the FQDN if possible, otherwise fall back to the hostname. An extra seed is added to
the server hash calculation so that machines with the same hostname but in
different DCs can be distinguished.

Signed-off-by: Aleksei Semiglazov <xjewer@gmail.com>
2019-03-12 10:46:15 +00:00
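A sketch of the seed derivation described above: hash the hostname together with the external labels so two HA replicas end up with different per-server offsets. The hashing details here are illustrative; only the idea matches the commit:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"os"
	"sort"
)

// offsetSeed derives a per-server seed from the hostname plus the external
// labels, so two HA replicas with identical scrape configs still scrape each
// target at different offsets.
func offsetSeed(externalLabels map[string]string) uint64 {
	host, _ := os.Hostname() // the commit prefers the FQDN when available

	names := make([]string, 0, len(externalLabels))
	for name := range externalLabels {
		names = append(names, name)
	}
	sort.Strings(names) // stable order so the seed is stable across restarts

	h := fnv.New64a()
	h.Write([]byte(host))
	for _, name := range names {
		h.Write([]byte(name + "\xff" + externalLabels[name] + "\xff"))
	}
	return h.Sum64()
}

func main() {
	fmt.Println(offsetSeed(map[string]string{"dc": "eu-1", "replica": "a"}))
}
```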
F4ncyMooN cd491e2d3a wait for interval - now%interval to make sure the target is collected at a fixed interval when Prometheus restarts (#4926)
Signed-off-by: hlv <hlv@freewheel.tv>
2018-12-05 09:58:39 +00:00
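The arithmetic in the title, as a small sketch: waiting interval - now%interval aligns scrapes to a fixed grid in absolute time, so a restart does not shift the schedule. (The per-server jitter from the entry above is later added on top of this base.)

```go
package main

import (
	"fmt"
	"time"
)

// untilNextAligned returns how long to wait so the first scrape lands on the
// next multiple of interval in absolute time, i.e. interval - now%interval.
func untilNextAligned(now time.Time, interval time.Duration) time.Duration {
	return time.Duration(int64(interval) - now.UnixNano()%int64(interval))
}

func main() {
	d := untilNextAligned(time.Now(), 15*time.Second)
	fmt.Println("first scrape in", d, "- subsequent scrapes stay on the same grid")
}
```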
Brian Brazil 9c03e11c2c Hook OpenMetrics parser into scraping.
Extend metadata api to support units.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2018-10-18 13:58:00 +01:00
Will Hegedus 193ebe7e34 Updates to /targets and /rules (scrape duration, last evaluation time) (#4722)
* Add evaluationTimestamp (Last Evaluation) column to display on /rules
Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>

* Add lastScrapeDuration ("Scrape Duration") to display on /targets
Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>

* Updates based on Julius' feedback

Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>

* Update to set timestamp to when eval started (after eval completes)

Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>

* Update /rules to display time since last evaluation

Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>

* Re-order Last Eval/Eval Time to be consistent with targets page

Signed-off-by: Will Hegedus <wbhegedus@liberty.edu>
2018-10-12 18:26:59 +02:00
Fabian Reinartz ad4c33c1ff scrape,api: provide per-target metric metadata
This adds a per-target cache of scraped metadata. The metadata is only
available for the lifecycle of the attached target. An API endpoint allows
selecting metadata by metric name and a label selection of targets.

Signed-off-by: Fabian Reinartz <freinartz@google.com>
2018-06-06 05:56:10 -04:00
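A closing sketch of the architecture this commit describes: metadata cached per target (so it lives and dies with the target) and an API-side lookup that filters targets by labels and then selects one metric name. All types here are illustrative stand-ins:

```go
package main

import "fmt"

// target and metadata are illustrative stand-ins. The cache lives on the
// target, so its contents disappear with the target, and the API endpoint
// filters by metric name across whichever targets match a label selector.
type metadata struct{ Type, Help, Unit string }

type target struct {
	labels   map[string]string
	metadata map[string]metadata // metric name -> metadata, filled during scrapes
}

// metadataFor sketches the API-side selection: pick targets by a label
// selector, then look up one metric name in each matching target's cache.
func metadataFor(targets []*target, selector map[string]string, metric string) []metadata {
	var out []metadata
	for _, t := range targets {
		match := true
		for k, v := range selector {
			if t.labels[k] != v {
				match = false
				break
			}
		}
		if !match {
			continue
		}
		if md, ok := t.metadata[metric]; ok {
			out = append(out, md)
		}
	}
	return out
}

func main() {
	targets := []*target{
		{labels: map[string]string{"job": "node"}, metadata: map[string]metadata{
			"node_cpu_seconds_total": {Type: "counter", Help: "Seconds the CPUs spent in each mode.", Unit: "seconds"},
		}},
	}
	fmt.Println(metadataFor(targets, map[string]string{"job": "node"}, "node_cpu_seconds_total"))
}
```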