Commit graph

365 commits

Author SHA1 Message Date
Łukasz Mierzwa 50c81bed86 Check for duplicated series on a scrape
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently silently ignores them.
This change adds a test for that and fixes the scrape loop so that:

- Only the first sample for each unique time series is appended
- Duplicated samples increment the prometheus_target_scrapes_sample_duplicate_timestamp_total metric

This allows one to identify such scrape jobs and targets.
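
A minimal, self-contained sketch of the de-duplication idea (all names are illustrative, not the actual scrape-loop code):

```go
package main

import "fmt"

type sample struct {
	seriesHash uint64 // hash of the series' label set
	value      float64
}

func appendScrape(samples []sample) (appended, duplicates int) {
	seen := make(map[uint64]struct{}, len(samples))
	for _, s := range samples {
		if _, dup := seen[s.seriesHash]; dup {
			// Would increment prometheus_target_scrapes_sample_duplicate_timestamp_total.
			duplicates++
			continue
		}
		seen[s.seriesHash] = struct{}{}
		appended++ // only the first sample per unique series is appended
	}
	return appended, duplicates
}

func main() {
	fmt.Println(appendScrape([]sample{{1, 10}, {1, 11}, {2, 5}})) // 2 1
}
```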

Benchmark results:

```
name                            old time/op    new time/op    delta
ScrapeLoopAppend-8                64.8µs ± 2%    71.1µs ±20%   +9.75%  (p=0.000 n=10+10)
ScrapeLoopAppendOM-8              64.2µs ± 1%    68.5µs ± 7%   +6.71%  (p=0.000 n=9+10)
TargetsFromGroup/1_targets-8      14.2µs ± 1%    14.5µs ± 1%   +1.99%  (p=0.000 n=10+10)
TargetsFromGroup/10_targets-8      149µs ± 1%     152µs ± 1%   +2.05%  (p=0.000 n=9+10)
TargetsFromGroup/100_targets-8    1.49ms ± 4%    1.48ms ± 1%     ~     (p=0.796 n=10+10)

name                            old alloc/op   new alloc/op   delta
ScrapeLoopAppend-8                19.9kB ± 1%    17.8kB ± 3%  -10.23%  (p=0.000 n=8+10)
ScrapeLoopAppendOM-8              19.9kB ± 1%    18.3kB ±10%   -8.14%  (p=0.001 n=9+10)
TargetsFromGroup/1_targets-8      2.43kB ± 0%    2.43kB ± 0%   -0.15%  (p=0.045 n=10+10)
TargetsFromGroup/10_targets-8     24.3kB ± 0%    24.3kB ± 0%     ~     (p=0.083 n=10+9)
TargetsFromGroup/100_targets-8     243kB ± 0%     243kB ± 0%     ~     (p=0.720 n=9+10)

name                            old allocs/op  new allocs/op  delta
ScrapeLoopAppend-8                  9.00 ± 0%      9.00 ± 0%     ~     (all equal)
ScrapeLoopAppendOM-8                10.0 ± 0%      10.0 ± 0%     ~     (all equal)
TargetsFromGroup/1_targets-8        40.0 ± 0%      40.0 ± 0%     ~     (all equal)
TargetsFromGroup/10_targets-8        400 ± 0%       400 ± 0%     ~     (all equal)
TargetsFromGroup/100_targets-8     4.00k ± 0%     4.00k ± 0%     ~     (all equal)
```

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
Łukasz Mierzwa 1a8ea78207 Fix BenchmarkScrapeLoopAppendOM
OpenMetrics requires an EOF comment at the end of the metrics body, but the makeTestMetrics() function doesn't append it.
This means the benchmark tests a response with errors, which I don't think was the intention.
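
For reference, a sketch of the fix's shape: a well-formed OpenMetrics body must end with an `# EOF` line (helper name and metrics are illustrative):

```go
import (
	"bytes"
	"fmt"
)

func makeOpenMetricsBody() []byte {
	var buf bytes.Buffer
	fmt.Fprint(&buf, "# TYPE metric_a gauge\n")
	fmt.Fprint(&buf, "metric_a{foo=\"bar\"} 1\n")
	buf.WriteString("# EOF\n") // without this terminator the parser reports an error
	return buf.Bytes()
}
```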

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
Bryan Boreham 5f50d974c9 scraping: reset symbol table periodically
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Bryan Boreham 4e748b9cd8 scraping: re-use labels Builder in scrape report metrics
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Bryan Boreham abb3a62f04 scraping: re-use symbol table for scrape loops
One symbol table for all loops in the same scrape pool, i.e. from the
same job.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Bryan Boreham 0403d098e1 scraping: re-use symbolTable for target discovery
Call labels.NewBuilderWithSymbolTable.
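
A sketch of the pattern, assuming one shared symbol table per scrape pool (variable and helper names illustrative):

```go
import "github.com/prometheus/prometheus/model/labels"

// One SymbolTable shared by all loops of a scrape pool, so identical
// label strings are interned once instead of once per target.
var st = labels.NewSymbolTable()

func newTargetBuilder() *labels.Builder {
	return labels.NewBuilderWithSymbolTable(st)
}
```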

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Bryan Boreham 9ba13de220 scraping: pass a Builder to get Target labels
This saves memory allocations from making a new Builder every time.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Łukasz Mierzwa 5597020a60 Use github.com/klauspost/compress for gzip and zlib
klauspost/compress is a high-quality drop-in replacement for common Go
compression libraries. Since Prometheus sends out a lot of HTTP requests
that often return compressed output, having improved compression
libraries helps to save CPU & memory resources.
On a test Prometheus server I was able to see a CPU reduction from 31 to
30 cores.
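
The swap is import-level only, since klauspost/compress mirrors the standard library API; a minimal sketch:

```go
import (
	"io"

	// Before: "compress/gzip"
	"github.com/klauspost/compress/gzip" // drop-in replacement, same API
)

func newGzipReader(r io.Reader) (*gzip.Reader, error) {
	return gzip.NewReader(r) // identical signature to the stdlib version
}
```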

Benchmark results:

```
name                                old time/op    new time/op    delta
TargetScraperGzip/metrics=1-8         69.4µs ± 4%    69.2µs ± 3%     ~     (p=0.122 n=50+50)
TargetScraperGzip/metrics=100-8       84.3µs ± 2%    80.9µs ± 2%   -4.02%  (p=0.000 n=48+46)
TargetScraperGzip/metrics=1000-8       296µs ± 1%     274µs ±14%   -7.35%  (p=0.000 n=47+45)
TargetScraperGzip/metrics=10000-8     2.06ms ± 1%    1.66ms ± 2%  -19.34%  (p=0.000 n=47+45)
TargetScraperGzip/metrics=100000-8    20.9ms ± 2%    17.5ms ± 3%  -16.50%  (p=0.000 n=49+50)

name                                old alloc/op   new alloc/op   delta
TargetScraperGzip/metrics=1-8         6.06kB ± 0%    6.07kB ± 0%   +0.24%  (p=0.000 n=48+48)
TargetScraperGzip/metrics=100-8       7.04kB ± 0%    6.89kB ± 0%   -2.17%  (p=0.000 n=49+50)
TargetScraperGzip/metrics=1000-8      9.02kB ± 0%    8.35kB ± 1%   -7.49%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=10000-8     18.1kB ± 1%    16.1kB ± 2%  -10.87%  (p=0.000 n=47+47)
TargetScraperGzip/metrics=100000-8    1.21MB ± 0%    1.01MB ± 2%  -16.69%  (p=0.000 n=36+50)

name                                old allocs/op  new allocs/op  delta
TargetScraperGzip/metrics=1-8           71.0 ± 0%      72.0 ± 0%   +1.41%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=100-8         81.0 ± 0%      76.0 ± 0%   -6.17%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=1000-8        92.0 ± 0%      83.0 ± 0%   -9.78%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=10000-8       93.0 ± 0%      91.0 ± 0%   -2.15%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=100000-8       111 ± 0%       135 ± 1%  +21.89%  (p=0.000 n=40+50)
```

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-22 17:08:15 +00:00
Łukasz Mierzwa 92e381b8a3 Add a scrape benchmark with gzipped responses
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-22 17:07:22 +00:00
Julien Pivotto e22b564049 Always align scrapes even if tolerance is bigger than 1% of scrape interval
When the scrape tolerance is bigger than 1% of the scrape interval, take
1% of the scrape interval as the tolerance instead of not aligning the
scrape at all.
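
A sketch of the clamping logic described above (variable names are illustrative):

```go
// Clamp the alignment tolerance to at most 1% of the scrape interval,
// so a large configured tolerance no longer disables alignment entirely.
tolerance := timestampTolerance
if maxTolerance := interval / 100; tolerance > maxTolerance {
	tolerance = maxTolerance
}
```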

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2024-02-21 15:09:21 +01:00
Owen Williams a28d7865ad UTF-8: Add support for parsing UTF8 metric and label names
This adds support for the new grammar of `{"metric_name", "l1"="val"}` to promql and some of the exposition formats.
This grammar will also be valid for non-UTF-8 names.
UTF-8 names will not be considered valid unless model.NameValidationScheme is changed.
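
A sketch of opting in, assuming the validation-scheme switch in the common model package:

```go
import "github.com/prometheus/common/model"

func init() {
	// Accept UTF-8 metric and label names; the default legacy scheme
	// still rejects them.
	model.NameValidationScheme = model.UTF8Validation
}

// With that set, both selector forms are valid:
//   {"metric_name", "l1"="val"}   // new quoted-name grammar
//   metric_name{l1="val"}         // classic form, still valid
```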

This does not update the Go expfmt parser in text_parse.go, which will be addressed by https://github.com/prometheus/common/issues/554/.

Part of https://github.com/prometheus/prometheus/issues/13095

Signed-off-by: Owen Williams <owen.williams@grafana.com>
2024-02-15 14:34:37 -05:00
Bryan Boreham 17f48f2b3b Tests: use replacement DeepEquals in more places
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-08 19:32:33 +00:00
Bryan Boreham d0dee51aac scrape tests: check NaN values directly
Normally, a NaN value is never equal to any other value. Compare sample
values via `Float64bits` so that NaN values which are exactly the same
will compare equal.
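
A minimal sketch of the comparison:

```go
import "math"

// sampleEqual treats two NaNs with identical bit patterns as equal,
// which a plain == comparison never does (NaN != NaN).
func sampleEqual(a, b float64) bool {
	return math.Float64bits(a) == math.Float64bits(b)
}
```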

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-08 19:30:20 +00:00
Bryan Boreham 39af788dbd Tests: use replacement DeepEquals using go-cmp
Use a DeepEqual replacement based on go-cmp, which is more flexible.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-08 19:30:20 +00:00
Paweł Szulik b0c538787d Refactor scrape tests to use testify.
Signed-off-by: Paweł Szulik <paul.szulik@gmail.com>
2024-02-01 13:51:31 +00:00
Bryan Boreham 4ad9b6df2e
Merge pull request #13336 from machine424/flakky
scrape_test.go: Increase scrape interval in TestScrapeLoopCache
to reduce potential flakiness.
2024-01-18 14:12:55 +00:00
Ziqi Zhao df2a0ecf3b
Native Histograms: support native_histogram_min_bucket_factor in scrape_config (#13222)
Native Histograms: support native_histogram_min_bucket_factor in scrape_config
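
A hedged configuration sketch (value and comment are assumptions based on the option name):

```yaml
scrape_configs:
  - job_name: example
    # Reduce native histogram resolution until adjacent bucket
    # boundaries differ by at least this factor.
    native_histogram_min_bucket_factor: 1.1
```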

---------

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
2024-01-17 16:58:54 +01:00
machine424 2f60177203
scrape_test.go: Increase scrape interval in TestScrapeLoopCache to reduce potential flakiness
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
2023-12-27 19:25:12 +01:00
Julien Pivotto 0763ec841b
Merge pull request #13313 from kalpadiptyaroy/fix-quality-value-accept-header
bug: Fix quality value in accept header
2023-12-21 11:40:30 +01:00
Kumar Kalpadiptya Roy b012366c33 Issue #13268: fix quality value in accept header
Signed-off-by: Kumar Kalpadiptya Roy <kalpadiptya.roy@outlook.com>
2023-12-21 10:33:05 +05:30
Bryan Boreham 75fc8a1535
Merge pull request #13167 from bboreham/simplify-TargetsActive
scrape: simplify TargetsActive function
2023-12-20 12:27:50 +00:00
Bryan Boreham c83e1fc574 textparse: remove MetricType alias
No backwards-compatibility; make a clean break.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-12-19 18:56:54 +00:00
Bryan Boreham 8065bef172 Move metric type definitions to common/model
They are used in multiple repos, so common is a better place for them.
Several packages now don't depend on `model/textparse`, e.g.
`storage/remote`.

Also remove `metadata` struct from `api.go`, since it was identical to
a struct in the `metadata` package.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-12-19 18:56:54 +00:00
Bryan Boreham 99c17b4319
Merge pull request #13177 from bboreham/less-madness
scrape: consistent function names for metadata
2023-12-19 17:51:52 +00:00
Arthur Silva Sens 5082655392
Append Created Timestamps (#12733)
* Append created timestamps.

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Log when created timestamps are ignored

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Proposed changes to Append CT PR.

Changes:

* Changed textparse Parser interface for consistency and robustness.
* Changed CT interface to be more explicit and handle validation.
* Simplified test, change scrapeManager to allow testability.
* Added TODOs.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Updates.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Addressed comments.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Refactor head_appender test

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Fix linter issues

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Use model.Sample in head appender test

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

---------

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
Signed-off-by: bwplotka <bwplotka@gmail.com>
Co-authored-by: bwplotka <bwplotka@gmail.com>
2023-12-11 08:43:42 +00:00
Filip Petkovski 10a82f87fd
Enable reusing memory when converting between histogram types
The 'ToFloat' method on integer histograms currently allocates new memory
each time it is called.

This commit adds an optional *FloatHistogram parameter that can be used
to reuse span and bucket slices. It is up to the caller to make sure the
input float histogram is not used anymore after the call.
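
A sketch of the reuse pattern enabled by the new parameter (integerHistograms and process are illustrative):

```go
var fh *histogram.FloatHistogram

for _, h := range integerHistograms {
	// A non-nil fh lets ToFloat reuse its span and bucket slices;
	// passing nil allocates a fresh FloatHistogram.
	fh = h.ToFloat(fh)
	process(fh) // must not retain fh past the next ToFloat call
}
```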

Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
2023-12-08 10:22:59 +01:00
Matthieu MOREL 9c4782f1cc
golangci-lint: enable testifylint linter (#13254)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-12-07 11:35:01 +00:00
Björn Rabenstein 980e2895a2
Merge pull request #13129 from fatsheep9146/reduce-resolution-automatically
Native Histograms: automatically reduce resolution rather than fail scrape
2023-11-28 17:26:36 +01:00
Ziqi Zhao 19ecc5dd94 add test case for bigGap
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-26 22:20:44 +08:00
Julien Pivotto 965e603fa7
Merge pull request #13184 from bboreham/exemplar-sort
Scraping: use slices.sort for exemplars
2023-11-25 09:34:48 +01:00
Julien Pivotto eda73dd3e5
Merge pull request #13187 from bboreham/refactor-newscrapeloop
Scraping tests: refactor scrapeLoop creation
2023-11-24 19:48:44 +01:00
Bryan Boreham 3e287e0170 Scraping tests: refactor scrapeLoop creation
Pull boilerplate code into a function. Where appropriate we set some
config on the returned object.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-24 17:28:09 +00:00
Bryan Boreham 784a2d2c74
Merge pull request #12992 from bboreham/single-scrape-buffer-pool
Scraping: share buffer pool across all scrapes
2023-11-24 16:26:19 +00:00
Bryan Boreham f0e1b592ab Scraping: use slices.sort for exemplars
The sort implementation using Go generics is used everywhere else
in Prometheus.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-24 14:42:26 +00:00
Paulin Todev 0102425af1
Use only one scrapeMetrics object per test. (#13051)
The scrape loop and scrape cache should use the same instance.
This brings the tests' behavior more in line with production.

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
2023-11-23 11:24:08 +00:00
Bryan Boreham 9051100aba Scraping: share buffer pool across all scrapes
Previously we had one per scrapePool, and one of those per configured
scraping job. Each pool holds a few unused buffers, so sharing one
across all scrapePools reduces total heap memory.
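
A sketch of the sharing, using the util/pool-style bucketed pool (sizes illustrative):

```go
import "github.com/prometheus/prometheus/util/pool"

// One pool shared by every scrapePool, instead of one per configured job.
// Each pool holds a few idle buffers, so fewer pools means less idle heap.
var buffers = pool.New(1e3, 100e6, 3, func(sz int) interface{} {
	return make([]byte, 0, sz)
})
```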

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-23 10:23:34 +00:00
Bryan Boreham 34676a240e scrape: consistent function names for metadata
Too confusing to have `MetadataList` and `ListMetadata`, etc.
I standardised on the ones which are in an interface.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-23 09:08:02 +00:00
Ziqi Zhao 8fe9250f7d optimize the logic for breaking the loop when reducing resolution
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-21 16:56:56 +08:00
Bryan Boreham f095c33da1 scrape: simplify TargetsActive function
Since everything was serialized on a single mutex, it's exactly the same
if we process targets in sequence without starting goroutines.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-20 19:28:08 +00:00
Łukasz Mierzwa 870627fbed Add enable_compression scrape config option
Currently Prometheus will always request gzip compression from the target when sending scrape requests.
HTTP compression does reduce the number of bytes sent over the wire and so is often desirable.
The downside of compression is that it requires extra resources: CPU & memory.

This also affects the resource usage on the target since it has to compress the response
before sending it to Prometheus.

This change adds a new option to the scrape job configuration block: enable_compression.
The default is true so it remains the same as current Prometheus behaviour.

Setting this option to false allows users to disable compression between Prometheus
and the scraped target, which requires more bandwidth but lowers the resource
usage of both Prometheus and the target.
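
A configuration sketch:

```yaml
scrape_configs:
  - job_name: example
    # Defaults to true (current behaviour); false trades bandwidth for
    # lower CPU and memory usage on both sides.
    enable_compression: false
```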

Fixes #12319.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-11-20 12:02:55 +00:00
zenador 32ee1b15de
Fix error on ingesting out-of-order exemplars (#13021)
Fix and improve ingesting exemplars for native histograms.

See code comment for a detailed explanation of the algorithm.

Note that this changes the current behavior for all kinds of samples slightly: We now allow exemplars with the same timestamp as during the last scrape if the value or the labels have changed.

Also note that we now do not ingest exemplars without timestamps for native histograms anymore.

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>

---------

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: zenador <zenador@users.noreply.github.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
2023-11-16 15:07:37 +01:00
Ziqi Zhao 10ebeb0a62 rename lastSum -> lastCount && enrich test case with expectBucketCount and expectSchema
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-16 13:00:11 +08:00
Ziqi Zhao b94f32f6fa automatically reduce resolution rather than fail scrape
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-11 22:24:47 +08:00
Julien Pivotto 0fe34f6d78 Follow-up to #13060: Add test to ensure staleness tracking
This commit introduces an additional test in `scrape_test.go` to verify
staleness tracking when `trackTimestampStaleness` is enabled. The new
`TestScrapeLoopAppendStalenessIfTrackTimestampStaleness` function
asserts that the scrape loop correctly appends staleness markers when
necessary, reflecting the expected behavior with the feature flag turned
on.

The previous tests were only testing end of scrape staleness.

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-11-09 10:20:35 -06:00
Matthieu MOREL fe057fc60d use Go standard errors package
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-11-03 07:26:31 +00:00
Matthieu MOREL 7eaefcf379
ci(lint): enable errorlint on scrape (#12923)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
2023-11-01 20:06:46 +01:00
George Krajcsovits e399395b01
Native histograms vs labels (#13005)
* Document le and quantile label transition due to native histograms

Fixes: #12984

For full explanation see the related issue. The le and quantile labels
are formatted as float with trailing .0 for whole number values when
native histograms is enabled, e.g. 10.0. This changes the resulting series
in Prometheus if previously we scraped the whole number itself, e.g. 10
over the text format.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>

---------

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
2023-11-01 18:30:34 +01:00
Björn Rabenstein a43669e611
Merge pull request #12928 from alexandear/ci-enable-godot
ci(lint): enable godot; append dot at the end of comments
2023-11-01 17:15:41 +01:00
Julien Pivotto 84aadfc45b scrape: Added trackTimestampsStaleness configuration option
Add the ability to track staleness when an explicit timestamp is set.
Useful for cAdvisor.
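
A configuration sketch, assuming the YAML key mirrors the option name in snake_case:

```yaml
scrape_configs:
  - job_name: cadvisor
    # Track staleness even for samples carrying an explicit timestamp.
    track_timestamps_staleness: true
```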

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-10-31 16:58:42 -04:00
Oleksandr Redko fa90ca46e5 ci(lint): enable godot; append dot at the end of comments
Signed-off-by: Oleksandr Redko <Oleksandr_Redko@epam.com>
2023-10-31 19:53:38 +02:00
Oleksandr Redko 8e5f0387a2
ci(lint): enable nolintlint and remove redundant comments (#12926)
Signed-off-by: Oleksandr Redko <Oleksandr_Redko@epam.com>
2023-10-31 12:35:13 +01:00
Paulin Todev 5752050b42
Scrape metrics can now be registered with a non-default registry.
* A registerer is passed to the scrape Manager, and all scrape metrics register with it.
* For now the registry which we pass to the scrape Manager is still the global one.

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
2023-10-11 16:19:00 +01:00
Bartlomiej Plotka 624b973ebf
Added ability to specify scrape protocols to accept during HTTP content type negotiation. (#12738)
* Added ability to specify scrape protocols to accept during HTTP content type negotiation.


This is done via new option in GlobalConfig and ScrapeConfig: "scrape_protocol"
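
A configuration sketch (the YAML key as shipped is the plural `scrape_protocols`; the protocol names shown are assumptions based on the formats Prometheus negotiates):

```yaml
global:
  # Ordered by preference for HTTP content-type negotiation.
  scrape_protocols:
    - OpenMetricsText1.0.0
    - PrometheusText0.0.4
```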

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Fixed readability and log message.

Signed-off-by: bwplotka <bwplotka@gmail.com>

---------

Signed-off-by: bwplotka <bwplotka@gmail.com>
2023-10-10 11:16:55 +01:00
Bryan Boreham f6d9c84fde
scraping: delay creating buffer, to save memory (#12953)
We don't need the buffer to read the response until the scrape HTTP call
returns; creating it earlier makes the buffer pool larger.

I split `scrape()` into `scrape()`, which returns the HTTP response,
and `readResponse()`, which decompresses and copies the data into the
supplied buffer. This design was chosen to minimize impact on the logic.
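
A sketch of the split at the call site (getBuffer/putBuffer are hypothetical helpers around the shared buffer pool):

```go
resp, err := s.scrape(ctx) // returns as soon as the HTTP call completes
if err != nil {
	return err
}
buf := getBuffer() // buffer borrowed only now, not for the whole request
defer putBuffer(buf)
n, err := s.readResponse(ctx, resp, buf) // decompress + copy into buf
```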

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-10-09 17:23:53 +01:00
Bryan Boreham 7c934ae18c scraping: hoist labels variable to save garbage
`lset` escapes to the heap because it is passed through the text-parser
interface, so we can reduce garbage by hoisting it out of the loop: a
single allocation then serves every series in a scrape.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-10-05 11:04:59 +00:00
Goutham Veeramachaneni 86729d4d7b
Update exp package (#12650)
2023-09-21 22:53:51 +02:00
Arve Knudsen 6daee89e5f
Add context argument to Querier.Select (#12660)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-12 12:37:38 +02:00
Bryan Boreham d73b4acb30
Merge pull request #12737 from prometheus/beorn7/histogram
textparse: fix infinite loop during exemplar parsing
2023-08-23 09:36:56 +01:00
György Krajcsovits 983c0c5e9d Add missing buckets
My previous proposal for a fix was wrong and also missed these.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 14:44:53 +02:00
György Krajcsovits 2ae8c2bd3d Set expected values in test
The parsing doesn't seem to be perfect, as I don't get all classic buckets;
possibly another bug found?

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 13:55:13 +02:00
György Krajcsovits 2a781ec5ac Replicate infinite loop in native-classic histogram scrape
Enable scraping a native histogram with exemplars that leads to an
infinite loop.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 13:12:45 +02:00
Bryan Boreham 611f50bb3d scrape: retain all dropped targets when KeepDroppedTargets is zero
This was a bug.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-20 14:32:23 +01:00
Bryan Boreham 627c99424b scrape: extend TestDroppedTargetsList to check counts
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-20 14:32:23 +01:00
Bryan Boreham 1e3fef6ab0
scraping: limit detail on dropped targets, to save memory (#12647)
It's possible (quite common on Kubernetes) to have a service discovery
return thousands of targets and then drop most of them in relabel rules.
The main place this data is used is to display in the web UI, where
you don't want thousands of lines of display.

The new limit is `keep_dropped_targets`, which defaults to 0
for backwards-compatibility.
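
A configuration sketch:

```yaml
global:
  # 0 (the default) keeps full detail for every dropped target;
  # a positive value caps how many dropped targets are retained.
  keep_dropped_targets: 100
```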

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-14 15:39:25 +01:00
beorn7 536a487af4 scrape: Refactor names of float samples
Continue to remove confusion that histogram samples are also samples
and histogram values are also values etc. by renaming float values and
float samples using the same schema as for histograms.

Concretely:
- result → resultFloats (corresponding to resultHistograms)
- pendingResult → pendingFloats (corresponding to pendingHistograms)
- rolledbackResult → rolledbackFloats (corresponding to rolledbackHistograms)
- sample → floatSample (corresponding to histogramSample)

This also orders the fields in `collectResultAppender` more
consistently.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-07-13 14:27:51 +02:00
beorn7 0e3f35324b scrape: Enable ingestion of multiple exemplars per sample
This has become a requirement for native histograms, as a single
histogram sample commonly has many buckets, so that providing many
exemplars makes sense.

Since OM text doesn't support native histograms yet, the test had to
be expanded to also support protobuf test cases.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-07-13 14:16:10 +02:00
Bryan Boreham 5255bf06ad Replace sort.Slice with faster slices.SortFunc
The generic version is more efficient.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-07-02 22:17:08 +00:00
Julius Volz ac8abdaacd
Rename remaining jitterSeed -> offsetSeed variables (#12414)
I had changed the naming from "jitter" to "offset" in:

cb045c0e4b

...but I forgot to add this file to the commit to complete the renaming,
doing that now.

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2023-06-05 17:36:11 +02:00
Julius Volz cb045c0e4b Fix wording from "jitterSeed" -> "offsetSeed" for server-wide scrape offsets
In digital communication, "jitter" usually refers to how much a signal deviates
from true periodicity, see https://en.wikipedia.org/wiki/Jitter. The way we are
using the "jitterSeed" in Prometheus does not affect the true periodicity at
all, but just introduces a constant phase shift (or offset) within the period.
So it would be more correct and less confusing to call the "jitterSeed" an
"offsetSeed" instead.

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2023-05-25 11:54:00 +02:00
beorn7 9e500345f3 textparse/scrape: Add option to scrape both classic and native histograms
So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both buckets will be ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.

This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.
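
A configuration sketch:

```yaml
scrape_configs:
  - job_name: migration
    # Ingest classic series (foo_sum, foo_count, foo_bucket) alongside
    # the native histogram foo during a transition period.
    scrape_classic_histograms: true
```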

Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:

1. Series created from classic _gauge_ histograms didn't get the
   _sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-05-13 01:32:25 +02:00
Björn Rabenstein bd98fc8c45
Merge pull request #12254 from zenador/histogram-bucket-limit
Implement bucket limit for native histograms
2023-05-10 17:42:29 +02:00
Jeanette Tan 40240c9c1c Update according to code review
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-05-05 02:33:00 +08:00
György Krajcsovits 19a4f314f5 Refactor testutil/protobuf.go into scrape package
Renamed to clientprotobuf.go and added comments to indicate the
intended usage.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-05-04 08:36:44 +02:00
Russ Cox 28f5502828 scrape: fix two loop variable scoping bugs in test
Consider code like:

	for i := 0; i < numTargets; i++ {
		stopFuncs = append(stopFuncs, func() {
			time.Sleep(i*20*time.Millisecond)
		})
	}

Because the loop variable i is shared by all closures,
all the stopFuncs sleep for numTargets*20 ms.

If the i were made per-iteration, as we are considering
for a future Go release, the stopFuncs would have sleep
durations ranging from 0 to (numTargets-1)*20 ms.

Two tests had code like this and were checking that the
aggregate sleep was at least numTargets*20 ms
("at least as long as the last target slept"). This is only true
today because i == numTargets during all the sleeps.

To keep the code working even if the semantics of this loop
change, this PR computes

	d := time.Duration((i+1)*20) * time.Millisecond

outside the closure (but inside the loop body), and then each
closure has its own d. Now the sleeps range from 20 ms
to numTargets*20 ms, keeping the test passing
(and probably behaving closer to the intent of the test author).

The failure being fixed can be reproduced by using the current
Go development branch with

	GOEXPERIMENT=loopvar go test

Signed-off-by: Russ Cox <rsc@golang.org>
2023-04-26 10:33:10 -04:00
Jeanette Tan dfabc69303 Add tests according to code review
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-25 02:07:36 +08:00
Jeanette Tan 2ad39baa72 Treat bucket limit like sample limit and make it fail the whole scrape and return an error
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-22 03:25:07 +08:00
György Krajcsovits 071426f72f Add unit test for bucket limit appender
Refactors textparser test to use a common test utility to create
protobuf representation from MetricFamily

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-04-22 03:14:19 +08:00
Jeanette Tan 4d21ac23e6 Implement bucket limit for native histograms
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-22 03:14:19 +08:00
Matthieu MOREL bae9a21200
Merge branch 'main' into linter/nilerr
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-19 19:56:39 +02:00
beorn7 5b53aa1108 style: Replace else if cascades with switch
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.

The exceptions that I have found in our codebase are just these two:

* The `if else` is followed by an additional statement before the next
  condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
  used. In this case, using `switch` would require tagging the `for`
  loop, which probably tips the balance.

Why are `switch` statements more readable?

For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.
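
A small illustration of the alignment argument (functions are illustrative):

```go
func signElseIf(x int) string {
	// else-if: conditions after the first hide behind "} else if".
	if x < 0 {
		return "negative"
	} else if x == 0 {
		return "zero"
	}
	return "positive"
}

func signSwitch(x int) string {
	// switch: every condition starts at the same column.
	switch {
	case x < 0:
		return "negative"
	case x == 0:
		return "zero"
	default:
		return "positive"
	}
}
```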

I'm sure the aforementioned wise coders can list even more reasons.

In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:22:31 +02:00
beorn7 c3c7d44d84 lint: Adjust to the lint warnings raised by current versions of golint-ci
We haven't updated golint-ci in our CI yet, but this commit prepares
for that.

There are a lot of new warnings, and it is mostly because the "revive"
linter got updated. I agree with most of the new warnings, mostly
around not naming unused function parameters (although it is justified
in some cases for documentation purposes – while things like mocks are
a good example where not naming the parameter is clearer).

I'm pretty upset about the "empty block" warning to include `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810 I have
disabled "revive" altogether in files where empty blocks are used
excessively, and I have made the effort to add individual
`// nolint:revive` where empty blocks are used just once or twice.
It's borderline noisy, though, but let's go with it for now.

I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-04-19 17:10:10 +02:00
Matthieu MOREL fb3eb21230 enable gocritic, unconvert and unused linters
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-04-13 19:20:22 +00:00
Bryan Boreham b987afa7ef labels: simplify call to get Labels from Builder
It took a `Labels` where the memory could be re-used, but in practice
this hardly ever benefitted. Especially after converting `relabel.Process`
to `relabel.ProcessBuilder`.

Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`
so the slice was reallocated multiple times by `append`.

Lastly `Builder.Labels()` now estimates that the final size will depend
on labels added and deleted.
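
A sketch of the call-site change described above:

```go
// Before: callers passed a Labels value hoping its memory would be reused.
lset = b.Labels(lset)

// After: no parameter; the Builder estimates the final size from the
// labels added and deleted.
lset = b.Labels()
```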

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-22 17:05:20 +00:00
Bryan Boreham 0c09c3feb0 scrape sync: avoid copy of labels for dropped targets
Since the Target object was just created in this function, nobody else
has a reference to it and there are no concerns about it being modified
concurrently so we don't need to copy the value.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-16 20:35:13 +00:00
Bryan Boreham 0dfa1e73f8 scrape: use LabelsRange instead of Labels, for performance
Includes a rewrite of `resolveConflictingExposedLabels` to use
`labels.Builder.Get`, which simplifies it considerably.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-16 20:35:13 +00:00
Bryan Boreham 2fde2fb37d scrape: add Target.LabelsRange
This allows users of a Target to iterate labels without allocating heap memory.
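
A usage sketch (callback shape assumed from the description):

```go
import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/scrape"
)

func printTargetLabels(t *scrape.Target) {
	// Iterate without materializing a labels.Labels copy on the heap.
	t.LabelsRange(func(l labels.Label) {
		fmt.Println(l.Name, l.Value)
	})
}
```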

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-16 20:35:13 +00:00
Bryan Boreham b96b89ef8b
Merge pull request #12048 from bboreham/faster-targets
Scraping targets are synced by creating the full set, then adding/removing any which have changed.
This PR speeds up the process of creating the full set.

I added a benchmark for `TargetsFromGroup`; it uses configuration from a typical Kubernetes SD.

The crux of the change is to do relabeling inside labels.Builder instead of converting to labels.Labels and back again for every rule. The change is broken into several commits for easier review.

This is a breaking change to `scrape.PopulateLabels()`, but `relabel.Process` is left as-is, with a new `relabel.ProcessBuilder` option.
2023-03-09 11:10:01 +00:00
Julien Pivotto 1fd59791e1 Update tests
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-03-08 16:32:39 +01:00
Julien Pivotto 0c56e5d014 Update our own dependencies, support proxy from env
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-03-08 12:00:17 +01:00
Bryan Boreham f4fd9b0d68 scrape: re-use memory in TargetsFromGroup
Common service discovery mechanisms such as Kubernetes can generate a
lot of target groups, so this function was allocating a lot of memory
which then immediately became garbage. Re-using the structures across
an entire Sync saves effort.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham 5cfe759348 scrape: make TargetsFromGroup work with Builder not []Label
Save work converting to `Labels` then to `Builder`.
`PopulateLabels()` now takes a Builder as input.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham c1dbc7b838 scrape: make PopulateLabels work with Builder not Labels
Save work converting to and fro.

Uses the recently-added relabel.ProcessBuilder variant.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 17:21:37 +00:00
Bryan Boreham 95fc032a61 scrape: add benchmark for TargetsFromGroup
`loadConfiguration` is made more general.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-03-07 09:46:19 +00:00
Julien Pivotto 599b70a05d Add include scrape configs
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-03-06 23:35:39 +01:00
Jimmie Han a13249a98f scrape: fix prometheus_target_scrape_pool_target_limit metric not set on creating scrape pool (#12001)
Signed-off-by: Jimmie Han <hanjinming@outlook.com>
2023-02-21 13:14:04 +08:00
Bryan Boreham 75e5d600d9
Merge pull request #11748 from bboreham/safe-scrape
scrape: remove unsafe code
2023-01-16 17:57:12 +00:00
Bryan Boreham d228d1d9cc scrape: remove 'mets' string completely
This makes all usage of maps in scrape.go consistent.

Also remove comment about unsafe strings, since we don't use them any
more in this package.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-01-04 12:05:58 +00:00
Fish-pro 6ed71a229e Use errors.Is to check for a specific error
Signed-off-by: Fish-pro <zechun.chen@daocloud.io>
2022-12-29 23:23:07 +08:00
Marc Tudurí 9474610baf
Support FloatHistogram in TSDB (#11522)
Extends Appender.AppendHistogram function to accept the FloatHistogram. TSDB supports appending, querying, and WAL replay for this new type of histogram.

Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
2022-12-28 14:25:07 +05:30
Łukasz Mierzwa e1b7082008
Show individual scrape pools on /targets page (#11142)
* Add API endpoints for getting scrape pool names

This adds api/v1/scrape_pools endpoint that returns the list of *names* of all the scrape pools configured.
Having it allows one to find out what scrape pools are defined without having to list and parse all targets.

The second change adds scrapePool query parameter support to the api/v1/targets endpoint, which allows
filtering the returned targets down to those belonging to the given scrape pool.

Both changes make it possible to query the data for a specific scrape pool, rather than getting all the targets for all possible scrape pools.
The problem with the api/v1/targets endpoint is that it returns a huge amount of data if you configure a lot of scrape pools.
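
A usage sketch of the two additions (response shape abridged; field names are assumptions):

```
GET /api/v1/scrape_pools
{"status":"success","data":{"scrapePools":["node","kubernetes-pods"]}}

GET /api/v1/targets?scrapePool=node
(returns only the targets belonging to the "node" scrape pool)
```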

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>

* Add a scrape pool selector on /targets page

The current targets page lists all possible targets. This works great if you only have a few scrape pools configured,
but for systems with a lot of scrape pools and targets this slows things down a lot.
Not only does the /targets page load very slowly in such a case (waiting for a huge API response) but it also takes
a long time to render, due to the huge number of elements.
This change adds a dropdown selector so it's possible to select only the interesting scrape pool to view.
There's also a scrapePool query param that will open the selected pool automatically.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2022-12-23 11:55:08 +01:00