Commit graph

167 commits

Oleksandr Redko f10c3454e9 Enable perfsprint linter and fix up code
Signed-off-by: Oleksandr Redko <oleksandr.red+github@gmail.com>
2024-05-15 17:51:05 +03:00
Arthur Silva Sens 7aacef9b42
bugfix: Decouple native histogram ingestion from protobuf parsing
Up until this point, if a scrape was done with the protobuf format, Prometheus would always try to ingest native histograms even with the feature flag disabled. This causes problems with other feature flags that depend on the protobuf format, like 'created-timestamp-zero-ingestion'. This commit decouples native histogram parsing from ingestion, making sure ingestion only happens when the 'native-histograms' feature flag is enabled.

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
2024-04-24 17:02:52 -03:00
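
For illustration, a minimal sketch of the gating this commit describes, with made-up names (the real scrape loop operates on parsed histogram samples and a storage appender): protobuf scrapes are always parsed, but native histogram samples are only ingested when the feature flag is on.

```
package main

import "fmt"

// Toy sketch only: names are illustrative, not the real Prometheus types.
type scrapeLoop struct {
	enableNativeHistogramIngestion bool // mirrors the feature flag
}

// handleSample shows the decoupling: native histograms are always parsed,
// but only ingested when the feature flag is enabled.
func (sl *scrapeLoop) handleSample(isNativeHistogram bool) string {
	if isNativeHistogram && !sl.enableNativeHistogramIngestion {
		return "parsed, ingestion skipped"
	}
	return "parsed and ingested"
}

func main() {
	sl := &scrapeLoop{enableNativeHistogramIngestion: false}
	fmt.Println(sl.handleSample(true))
}
```
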
Ziqi Zhao 64dfd8a158
fix a bug when setting the native histogram min bucket factor (#13846)
* fix a bug when setting the native histogram min bucket factor

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>

* Add unit test for checking that min_bucket_factor is correctly applied

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>

---------

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2024-03-27 16:32:37 +01:00
Łukasz Mierzwa 21f8b35f5b Move staleness tracking out of checkAddError() calls
This call bloats checkAddError's signature and logic; we can and should call it from the main scrape logic instead.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
Łukasz Mierzwa 55dcaab41b Fix TestScrapeLoopDiscardDuplicateLabels test
This test calls Rollback(), which is normally called from within the append code.
Doing so means that the staleness tracking data is outdated and needs to be cycled manually.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
Łukasz Mierzwa 50c81bed86 Check for duplicated series on a scrape
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently ignores that silently.
This change adds a test for that and fixes the scrape loop so that:

- Only the first sample for each unique time series is appended
- Duplicated samples increment the prometheus_target_scrapes_sample_duplicate_timestamp_total metric

This allows one to identify such scrape jobs and targets.

Benchmark results:

```
name                            old time/op    new time/op    delta
ScrapeLoopAppend-8                64.8µs ± 2%    71.1µs ±20%   +9.75%  (p=0.000 n=10+10)
ScrapeLoopAppendOM-8              64.2µs ± 1%    68.5µs ± 7%   +6.71%  (p=0.000 n=9+10)
TargetsFromGroup/1_targets-8      14.2µs ± 1%    14.5µs ± 1%   +1.99%  (p=0.000 n=10+10)
TargetsFromGroup/10_targets-8      149µs ± 1%     152µs ± 1%   +2.05%  (p=0.000 n=9+10)
TargetsFromGroup/100_targets-8    1.49ms ± 4%    1.48ms ± 1%     ~     (p=0.796 n=10+10)

name                            old alloc/op   new alloc/op   delta
ScrapeLoopAppend-8                19.9kB ± 1%    17.8kB ± 3%  -10.23%  (p=0.000 n=8+10)
ScrapeLoopAppendOM-8              19.9kB ± 1%    18.3kB ±10%   -8.14%  (p=0.001 n=9+10)
TargetsFromGroup/1_targets-8      2.43kB ± 0%    2.43kB ± 0%   -0.15%  (p=0.045 n=10+10)
TargetsFromGroup/10_targets-8     24.3kB ± 0%    24.3kB ± 0%     ~     (p=0.083 n=10+9)
TargetsFromGroup/100_targets-8     243kB ± 0%     243kB ± 0%     ~     (p=0.720 n=9+10)

name                            old allocs/op  new allocs/op  delta
ScrapeLoopAppend-8                  9.00 ± 0%      9.00 ± 0%     ~     (all equal)
ScrapeLoopAppendOM-8                10.0 ± 0%      10.0 ± 0%     ~     (all equal)
TargetsFromGroup/1_targets-8        40.0 ± 0%      40.0 ± 0%     ~     (all equal)
TargetsFromGroup/10_targets-8        400 ± 0%       400 ± 0%     ~     (all equal)
TargetsFromGroup/100_targets-8     4.00k ± 0%     4.00k ± 0%     ~     (all equal)
```

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
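
A toy sketch of the de-duplication idea from the commit above (illustrative only; the real scrape loop works on parsed label sets and its sample cache): keep the first occurrence of each series within a scrape and count the rest, which is what feeds a counter such as prometheus_target_scrapes_sample_duplicate_timestamp_total.

```
package main

import "fmt"

// Toy sketch only; the real scrape loop works on parsed label sets.
func dedupeScrape(series []string) (kept []string, duplicates int) {
	seen := make(map[string]struct{}, len(series))
	for _, s := range series {
		if _, ok := seen[s]; ok {
			duplicates++ // a counter like the duplicate-timestamp metric would be incremented here
			continue
		}
		seen[s] = struct{}{}
		kept = append(kept, s)
	}
	return kept, duplicates
}

func main() {
	kept, dups := dedupeScrape([]string{
		`http_requests_total{code="200"}`,
		`http_requests_total{code="200"}`, // repeated within a single scrape
		`http_requests_total{code="500"}`,
	})
	fmt.Println(len(kept), "kept,", dups, "duplicate")
}
```
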
Łukasz Mierzwa 1a8ea78207 Fix BenchmarkScrapeLoopAppendOM
OpenMetrics requires an EOF comment at the end of the metrics body, but the makeTestMetrics() function doesn't append it.
This means the benchmark tests a response with errors, which I don't think was the intention.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-27 11:36:16 +00:00
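
For context, a hedged sketch of what a helper like makeTestMetrics() presumably needs to produce (name and shape assumed): an OpenMetrics body is only valid if it ends with the `# EOF` comment.

```
package main

import "fmt"

// Hypothetical stand-in for the benchmark helper mentioned above; the key
// point is the terminating "# EOF" line that OpenMetrics requires.
func makeTestMetrics(n int) []byte {
	var b []byte
	for i := 0; i < n; i++ {
		b = append(b, []byte(fmt.Sprintf("metric_a{foo=\"bar\",instance=\"%d\"} 1\n", i))...)
	}
	b = append(b, []byte("# EOF\n")...) // without this the OpenMetrics parser reports an error
	return b
}

func main() {
	fmt.Print(string(makeTestMetrics(2)))
}
```
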
Bryan Boreham 5f50d974c9 scraping: reset symbol table periodically
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Bryan Boreham abb3a62f04 scraping: re-use symbol table for scrape loops
One symbol table for all loops in the same scrape pool, i.e. from the
same job.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-26 11:45:25 +00:00
Łukasz Mierzwa 92e381b8a3 Add a scrape benchmark with gzipped responses
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2024-02-22 17:07:22 +00:00
Owen Williams a28d7865ad UTF-8: Add support for parsing UTF8 metric and label names
This adds support for the new grammar of `{"metric_name", "l1"="val"}` to PromQL and some of the exposition formats.
This grammar will also be valid for non-UTF-8 names.
UTF-8 names will not be considered valid unless model.NameValidationScheme is changed.

This does not update the go expfmt parser in text_parse.go, which will be addressed by https://github.com/prometheus/common/issues/554/.

Part of https://github.com/prometheus/prometheus/issues/13095

Signed-off-by: Owen Williams <owen.williams@grafana.com>
2024-02-15 14:34:37 -05:00
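
A small sketch of the validation-scheme switch mentioned above, assuming the model.NameValidationScheme / model.UTF8Validation API from prometheus/common (treat the exact identifiers as an assumption against the version in use):

```
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	name := model.LabelValue("my.metric") // a UTF-8 name that the legacy scheme rejects

	model.NameValidationScheme = model.LegacyValidation
	fmt.Println(model.IsValidMetricName(name)) // false under the legacy scheme

	model.NameValidationScheme = model.UTF8Validation
	fmt.Println(model.IsValidMetricName(name)) // true once UTF-8 names are allowed
}
```
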
Bryan Boreham d0dee51aac scrape tests: check NaN values directly
Normally, a NaN value is never equal to any other value. Compare sample
values via `Float64bits` so that NaN values which are exactly the same
will compare equal.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-08 19:30:20 +00:00
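
A minimal illustration of why the tests compare bit patterns: NaN is never equal to anything under ==, so two identical NaN sample values would otherwise never compare equal.

```
package main

import (
	"fmt"
	"math"
)

// sameFloat treats two samples as equal if they are == or share the exact
// same bit pattern, which makes identical NaN values compare equal.
func sameFloat(a, b float64) bool {
	return a == b || math.Float64bits(a) == math.Float64bits(b)
}

func main() {
	nan := math.NaN()
	fmt.Println(nan == nan)          // false: NaN is never == to anything, itself included
	fmt.Println(sameFloat(nan, nan)) // true: the bit patterns are identical
}
```
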
Bryan Boreham 39af788dbd Tests: use a DeepEqual replacement based on go-cmp
Use a DeepEqual replacement based on go-cmp, which is more flexible.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-02-08 19:30:20 +00:00
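
A small example of the kind of go-cmp comparison this enables (how the Prometheus test helper is actually configured is not shown here): cmp.Diff produces a readable diff and accepts options such as cmpopts.EquateNaNs().

```
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

type sample struct {
	T int64
	V float64
}

func main() {
	want := []sample{{T: 1, V: 1.5}}
	got := []sample{{T: 1, V: 2.5}}
	// EquateNaNs makes NaN values compare equal, similar in spirit to the
	// bit-level comparison used elsewhere in the scrape tests.
	if diff := cmp.Diff(want, got, cmpopts.EquateNaNs()); diff != "" {
		fmt.Printf("mismatch (-want +got):\n%s", diff)
	}
}
```
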
Paweł Szulik b0c538787d Refactor scrape tests to use testify.
Signed-off-by: Paweł Szulik <paul.szulik@gmail.com>
2024-02-01 13:51:31 +00:00
Bryan Boreham 4ad9b6df2e
Merge pull request #13336 from machine424/flakky
scrape_test.go: Increase scrape interval in TestScrapeLoopCache
to reduce potential flakiness.
2024-01-18 14:12:55 +00:00
Ziqi Zhao df2a0ecf3b
Native Histograms: support native_histogram_min_bucket_factor in scrape_config (#13222)
Native Histograms: support native_histogram_min_bucket_factor in scrape_config

---------

Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
2024-01-17 16:58:54 +01:00
machine424 2f60177203
scrape_test.go: Increase scrape interval in TestScrapeLoopCache to reduce potential flakiness
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
2023-12-27 19:25:12 +01:00
Bryan Boreham 8065bef172 Move metric type definitions to common/model
They are used in multiple repos, so common is a better place for them.
Several packages now don't depend on `model/textparse`, e.g.
`storage/remote`.

Also remove `metadata` struct from `api.go`, since it was identical to
a struct in the `metadata` package.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-12-19 18:56:54 +00:00
Arthur Silva Sens 5082655392
Append Created Timestamps (#12733)
* Append created timestamps.

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Log when created timestamps are ignored

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Proposed changes to Append CT PR.

Changes:

* Changed textparse Parser interface for consistency and robustness.
* Changed CT interface to be more explicit and handle validation.
* Simplified test, change scrapeManager to allow testability.
* Added TODOs.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Updates.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Addressed comments.

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Refactor head_appender test

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Fix linter issues

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

* Use model.Sample in head appender test

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>

---------

Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
Signed-off-by: bwplotka <bwplotka@gmail.com>
Co-authored-by: bwplotka <bwplotka@gmail.com>
2023-12-11 08:43:42 +00:00
Matthieu MOREL 9c4782f1cc
golangci-lint: enable testifylint linter (#13254)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-12-07 11:35:01 +00:00
Björn Rabenstein 980e2895a2
Merge pull request #13129 from fatsheep9146/reduce-resolution-automatically
Native Histograms: automatically reduce resolution rather than fail scrape
2023-11-28 17:26:36 +01:00
Julien Pivotto eda73dd3e5
Merge pull request #13187 from bboreham/refactor-newscrapeloop
Scraping tests: refactor scrapeLoop creation
2023-11-24 19:48:44 +01:00
Bryan Boreham 3e287e0170 Scraping tests: refactor scrapeLoop creation
Pull boilerplate code into a function. Where appropriate we set some
config on the returned object.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-24 17:28:09 +00:00
Bryan Boreham 784a2d2c74
Merge pull request #12992 from bboreham/single-scrape-buffer-pool
Scraping: share buffer pool across all scrapes
2023-11-24 16:26:19 +00:00
Paulin Todev 0102425af1
Use only one scrapeMetrics object per test. (#13051)
The scrape loop and scrape cache should use the same instance.
This brings the tests' behavior more in line with production.

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
2023-11-23 11:24:08 +00:00
Bryan Boreham 9051100aba Scraping: share buffer pool across all scrapes
Previously we had one per scrapePool, and one of those per configured
scraping job. Each pool holds a few unused buffers, so sharing one
across all scrapePools reduces total heap memory.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-11-23 10:23:34 +00:00
Ziqi Zhao 8fe9250f7d optimize the logic for breaking out of the resolution-reduction loop
Signed-off-by: Ziqi Zhao <zhaoziqi9146@gmail.com>
2023-11-21 16:56:56 +08:00
Łukasz Mierzwa 870627fbed Add enable_compression scrape config option
Currently Prometheus will always request gzip compression from the target when sending scrape requests.
HTTP compression does reduce the number of bytes sent over the wire and so is often desirable.
The downside of compression is that it requires extra resources: CPU and memory.

This also affects the resource usage on the target since it has to compress the response
before sending it to Prometheus.

This change adds a new option to the scrape job configuration block: enable_compression.
The default is true, so behaviour remains the same as in current Prometheus.

Setting this option to false allows users to disable compression between Prometheus
and the scraped target, which requires more bandwidth but lowers the resource
usage of both Prometheus and the target.

Fixes #12319.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
2023-11-20 12:02:55 +00:00
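
A rough sketch of what the enable_compression option toggles on the client side (simplified; not the actual Prometheus scrape code): with compression enabled the request advertises gzip and the body is decompressed after reading, trading CPU and memory for bandwidth.

```
package main

import (
	"compress/gzip"
	"io"
	"net/http"
)

// Simplified sketch, not the actual Prometheus scrape client.
func fetch(url string, compression bool) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	if compression {
		req.Header.Set("Accept-Encoding", "gzip")
	} else {
		// Asking for an uncompressed response saves CPU/memory on both sides
		// at the cost of more bytes on the wire.
		req.Header.Set("Accept-Encoding", "identity")
	}
	// RoundTrip is used directly so Go's transparent gzip handling stays out
	// of the way and the header above is sent as-is.
	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var body io.Reader = resp.Body
	if resp.Header.Get("Content-Encoding") == "gzip" {
		gz, err := gzip.NewReader(resp.Body)
		if err != nil {
			return nil, err
		}
		defer gz.Close()
		body = gz
	}
	return io.ReadAll(body)
}

func main() {
	_, _ = fetch("http://localhost:9090/metrics", true)
}
```
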
zenador 32ee1b15de
Fix error on ingesting out-of-order exemplars (#13021)
Fix and improve ingesting exemplars for native histograms.

See code comment for a detailed explanation of the algorithm.

Note that this changes the current behavior for all kinds of samples slightly: we now allow exemplars with the same timestamp as during the last scrape if the value or the labels have changed.

Also note that we now do not ingest exemplars without timestamps for native histograms anymore.

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>

---------

Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: zenador <zenador@users.noreply.github.com>
Co-authored-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
2023-11-16 15:07:37 +01:00
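
A toy version of the exemplar-acceptance rule described above (the real logic lives in the TSDB exemplar storage): an exemplar with the same timestamp as the previous one is appended only if its value or labels changed; an exact repeat is skipped rather than treated as out-of-order.

```
package main

import "fmt"

// Toy model of the acceptance rule; the real exemplar storage also has to
// handle ordering across series and scrapes.
type exemplar struct {
	Labels string
	Value  float64
	Ts     int64
}

// acceptNext reports whether the next exemplar should be appended given the
// last one stored for the same series.
func acceptNext(last, next exemplar) bool {
	if next.Ts < last.Ts {
		return false // genuinely out of order
	}
	if next.Ts == last.Ts {
		// Same timestamp is fine as long as something actually changed;
		// an exact repeat is skipped instead of raising an error.
		return next.Value != last.Value || next.Labels != last.Labels
	}
	return true
}

func main() {
	last := exemplar{Labels: `{trace_id="a"}`, Value: 1, Ts: 100}
	changed := exemplar{Labels: `{trace_id="b"}`, Value: 2, Ts: 100}
	fmt.Println(acceptNext(last, last))    // false: exact duplicate, skipped
	fmt.Println(acceptNext(last, changed)) // true: same timestamp, new value/labels
}
```
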
Julien Pivotto 0fe34f6d78 Follow-up to #13060: Add test to ensure staleness tracking
This commit introduces an additional test in `scrape_test.go` to verify
staleness tracking when `trackTimestampStaleness` is enabled. The new
`TestScrapeLoopAppendStalenessIfTrackTimestampStaleness` function
asserts that the scrape loop correctly appends staleness markers when
necessary, reflecting the expected behavior with the feature flag turned
on.

The previous tests were only testing end-of-scrape staleness.

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-11-09 10:20:35 -06:00
Matthieu MOREL 7eaefcf379
ci(lint): enable errorlint on scrape (#12923)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
2023-11-01 20:06:46 +01:00
George Krajcsovits e399395b01
Native histograms vs labels (#13005)
* Document le and quantile label transition due to native histograms

Fixes: #12984

For the full explanation see the related issue. The le and quantile labels
are formatted as floats with a trailing .0 for whole-number values when
native histograms are enabled, e.g. 10.0. This changes the resulting series
in Prometheus if we previously scraped the whole number itself, e.g. 10,
over the text format.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>

---------

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: George Krajcsovits <krajorama@users.noreply.github.com>
2023-11-01 18:30:34 +01:00
Julien Pivotto 84aadfc45b scrape: Added trackTimestampsStaleness configuration option
Add the ability to track staleness when an explicit timestamp is set.
Useful for cAdvisor.

Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
2023-10-31 16:58:42 -04:00
Paulin Todev 5752050b42
Scrape metrics can now be registered with a non-default registry.
* A registerer is passed to the scrape Manager,
and all scrape metrics register with it.
* For now the registry which we pass to the scrape
Manager is still the global one.

Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
2023-10-11 16:19:00 +01:00
Bartlomiej Plotka 624b973ebf
Added ability to specify scrape protocols to accept during HTTP content type negotiation. (#12738)
* Added ability to specify scrape protocols to accept during HTTP content type negotiation.


This is done via a new option in GlobalConfig and ScrapeConfig: "scrape_protocols"

Signed-off-by: bwplotka <bwplotka@gmail.com>

* Fixed readability and log message.

Signed-off-by: bwplotka <bwplotka@gmail.com>

---------

Signed-off-by: bwplotka <bwplotka@gmail.com>
2023-10-10 11:16:55 +01:00
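
A hedged sketch of content-type negotiation driven by a preference-ordered protocol list; the content types are illustrative of Prometheus exposition formats, and the exact Accept header Prometheus builds may differ.

```
package main

import (
	"fmt"
	"strings"
)

// Sketch only: build an Accept header from a preference-ordered list, giving
// later entries a lower quality value so the target can pick the best format
// it supports.
func acceptHeader(preferred []string) string {
	parts := make([]string, 0, len(preferred))
	q := 1.0
	for _, ct := range preferred {
		parts = append(parts, fmt.Sprintf("%s;q=%.1f", ct, q))
		q -= 0.1
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(acceptHeader([]string{
		"application/openmetrics-text;version=1.0.0",
		"text/plain;version=0.0.4",
	}))
}
```
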
Bryan Boreham f6d9c84fde
scraping: delay creating buffer, to save memory (#12953)
We don't need the buffer to read the response until the scrape http call
returns; creating it earlier makes the buffer pool larger.

I split `scrape()` into `scrape()` which returns with the http response,
and `readResponse()` which decompresses and copies the data into the
supplied buffer. This design was chosen to minimize impact on the logic.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-10-09 17:23:53 +01:00
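
A simplified sketch of the split described above, using sync.Pool in place of Prometheus' own pool package: scrape() only performs the HTTP call, and a buffer is taken from the pool once readResponse() runs.

```
package main

import (
	"bytes"
	"net/http"
	"sync"
)

// Simplified sketch; the real code uses Prometheus' own pool package and a
// scrape-specific HTTP client rather than http.Get and sync.Pool.
var bufPool = sync.Pool{New: func() any { return new(bytes.Buffer) }}

// scrape only performs the HTTP call; no buffer is allocated yet.
func scrape(url string) (*http.Response, error) {
	return http.Get(url)
}

// readResponse takes a buffer from the pool only once the response is back,
// keeping the pool (and the heap) smaller while requests are in flight.
func readResponse(resp *http.Response) (*bytes.Buffer, error) {
	defer resp.Body.Close()
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	if _, err := buf.ReadFrom(resp.Body); err != nil {
		bufPool.Put(buf)
		return nil, err
	}
	return buf, nil // the caller puts it back into bufPool when done
}

func main() {
	resp, err := scrape("http://localhost:9090/metrics")
	if err != nil {
		return
	}
	if buf, err := readResponse(resp); err == nil {
		bufPool.Put(buf)
	}
}
```
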
Arve Knudsen 6daee89e5f
Add context argument to Querier.Select (#12660)
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
2023-09-12 12:37:38 +02:00
Bryan Boreham d73b4acb30
Merge pull request #12737 from prometheus/beorn7/histogram
textparse: fix infinite loop during exemplar parsing
2023-08-23 09:36:56 +01:00
György Krajcsovits 983c0c5e9d Add missing buckets
My previous proposal for a fix was wrong and also missed these.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 14:44:53 +02:00
György Krajcsovits 2ae8c2bd3d Set expected values in test
The parsing doesn't seem to be perfect, as I don't get all the classic buckets;
possibly another bug found?

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 13:55:13 +02:00
György Krajcsovits 2a781ec5ac Replicate infinite loop in native-classic histogram scrape
Enable scraping a native histogram with exemplars that leads to
an infinite loop.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-08-21 13:12:45 +02:00
Bryan Boreham 627c99424b scrape: extend TestDroppedTargetsList to check counts
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-20 14:32:23 +01:00
Bryan Boreham 1e3fef6ab0
scraping: limit detail on dropped targets, to save memory (#12647)
It's possible (quite common on Kubernetes) to have service discovery
return thousands of targets and then drop most of them in relabel rules.
The main place this data is used is for display in the web UI, where
you don't want thousands of lines of output.

The new limit is `keep_dropped_targets`, which defaults to 0
for backwards-compatibility.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2023-08-14 15:39:25 +01:00
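
A toy illustration of the keep_dropped_targets limit (names are illustrative): only the first N dropped targets are retained for display, while the total is still counted.

```
package main

import "fmt"

// truncateDropped keeps at most `limit` dropped targets for display; a limit
// of 0 keeps everything, matching the backwards-compatible default.
func truncateDropped(dropped []string, limit int) (kept []string, total int) {
	total = len(dropped)
	if limit > 0 && total > limit {
		return dropped[:limit], total
	}
	return dropped, total
}

func main() {
	kept, total := truncateDropped([]string{"t1", "t2", "t3"}, 2)
	fmt.Println(kept, total) // [t1 t2] 3
}
```
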
beorn7 536a487af4 scrape: Refactor names of float samples
Continue to remove the confusion that histogram samples are also samples
and histogram values are also values, etc., by renaming float values and
float samples using the same scheme as for histograms.

Concretely:
- result → resultFloats (corresponding to resultHistograms)
- pendingResult → pendingFloats (corresponding to pendingHistograms)
- rolledbackResult → rolledbackFloats (corresponding to rolledbackHistograms)
- sample → floatSample (corresponding to histogramSample)

This also orders the fields in `collectResultAppender` more
consistently.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-07-13 14:27:51 +02:00
beorn7 0e3f35324b scrape: Enable ingestion of multiple exemplars per sample
This has become a requirement for native histograms, as a single
histogram sample commonly has many buckets, so providing many
exemplars makes sense.

Since OM text doesn't support native histograms yet, the test had to
be expanded to also support protobuf test cases.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-07-13 14:16:10 +02:00
beorn7 9e500345f3 textparse/scrape: Add option to scrape both classic and native histograms
So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both classic and native buckets will be ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.

This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.

Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:

1. Series created from classic _gauge_ histograms didn't get the
   _sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.

Signed-off-by: beorn7 <beorn@grafana.com>
2023-05-13 01:32:25 +02:00
Björn Rabenstein bd98fc8c45
Merge pull request #12254 from zenador/histogram-bucket-limit
Implement bucket limit for native histograms
2023-05-10 17:42:29 +02:00
György Krajcsovits 19a4f314f5 Refactor testutil/protobuf.go into scrape package
Renamed to clientprotobuf.go and added comments to indicate the
intended usage.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
2023-05-04 08:36:44 +02:00
Russ Cox 28f5502828 scrape: fix two loop variable scoping bugs in test
Consider code like:

	for i := 0; i < numTargets; i++ {
		stopFuncs = append(stopFuncs, func() {
			time.Sleep(time.Duration(i*20) * time.Millisecond)
		})
	}

Because the loop variable i is shared by all closures,
all the stopFuncs sleep for numTargets*20 ms.

If the i were made per-iteration, as we are considering
for a future Go release, the stopFuncs would have sleep
durations ranging from 0 to (numTargets-1)*20 ms.

Two tests had code like this and were checking that the
aggregate sleep was at least numTargets*20 ms
("at least as long as the last target slept"). This is only true
today because i == numTargets during all the sleeps.

To keep the code working even if the semantics of this loop
change, this PR computes

	d := time.Duration((i+1)*20) * time.Millisecond

outside the closure (but inside the loop body), and then each
closure has its own d. Now the sleeps range from 20 ms
to numTargets*20 ms, keeping the test passing
(and probably behaving closer to the intent of the test author).

The failure being fixed can be reproduced by using the current
Go development branch with

	GOEXPERIMENT=loopvar go test

Signed-off-by: Russ Cox <rsc@golang.org>
2023-04-26 10:33:10 -04:00
Jeanette Tan dfabc69303 Add tests according to code review
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
2023-04-25 02:07:36 +08:00