Fabian Reinartz
3d8661b8d5
Add comment
2017-05-24 17:05:42 +02:00
Fabian Reinartz
43ca652217
retrieval: Don't allocate map on every scrape
2017-05-24 16:23:48 +02:00
Fabian Reinartz
d3f662f15e
Merge branch 'dev-2.0' into grobie/reduce-noisy-append-errors
2017-05-24 15:29:30 +02:00
Fabian Reinartz
d289dc55c3
storage: update TSDB
2017-05-22 11:53:08 +02:00
Brian Brazil
0920972f79
Initialise scraped sample map, and rename to series map.
2017-05-16 18:33:51 +01:00
Brian Brazil
bf38963118
Plumb through logger with target field to scrape loop.
2017-05-16 18:33:51 +01:00
Brian Brazil
d657d722dc
Log count of duplicates/out-of-order samples as warnings.
...
Keep logging each sample at debug level.
2017-05-16 18:33:51 +01:00
Brian Brazil
8b9d3e7547
Put end-of-run staleness handler in a separate function.
...
Improve log message.
2017-05-16 18:33:51 +01:00
Brian Brazil
d532272520
Add stale markers to synthetic series too when target stops.
2017-05-16 18:33:51 +01:00
Brian Brazil
b87d3ca9ea
Create stale markers when a target is stopped.
...
When a target is no longer returned from SD, stop()
is called. However, it may be recreated before the
next scrape interval happens, so we wait to set stale markers
until the scrape of the new target would have happened
and been ingested, which is two scrape intervals.
If we're shutting down, the context will be cancelled,
so return immediately rather than holding things up for potentially
minutes waiting to safely set stale markers no newer than now.
If the server starts back up again immediately, all is well.
If not, we're missing some stale markers.
2017-05-16 18:33:51 +01:00
Brian Brazil
95162ebc16
Add log messages for out of order samples
2017-05-16 18:33:51 +01:00
Brian Brazil
3c45400130
Don't fail scrape if one sample violates ordering.
...
In Prometheus 1.x one sample that is out of order
or that has a duplicate timestamp is discarded, and
the rest of the scrape ingestion continues on.
This will now also be true for 2.0.
2017-05-16 18:33:51 +01:00
Brian Brazil
fd5c5a50a3
Add stale markers on parse error.
...
If we fail to parse the result of a scrape,
we should treat that as a failed scrape and
add stale markers.
2017-05-16 18:33:51 +01:00
Brian Brazil
c0c7e32e61
Treat a failed scrape as an empty scrape for staleness.
...
If a target has died but is still in SD, we want the previously
scraped values to go stale. This would also apply to brief blips.
2017-05-16 18:33:51 +01:00
Brian Brazil
850ea412ad
If an explicit timestamp is provided, bypass staleness.
2017-05-16 18:33:51 +01:00
Brian Brazil
a5cf25743c
Move staleness check into a function
2017-05-16 18:33:51 +01:00
Brian Brazil
4f35952cf3
Inject a stale NaN when sample disappears between scrapes.
2017-05-16 18:33:51 +01:00
Brian Brazil
beaa7d5a43
Move consistent NaN logic into the parser.
2017-05-16 18:33:51 +01:00
Brian Brazil
76acf7b9b1
Ensure all the NaNs we ingest have the same bit pattern.
2017-05-16 18:33:51 +01:00
Brian Brazil
0eabed8048
Remove unused metric
2017-05-15 15:06:54 +01:00
Fabian Reinartz
76b3378190
retrieval: add missing scrape context cancelation
2017-05-11 17:20:03 +02:00
Tobias Schmidt
368206d2f5
Handle errSeriesDropped correctly
...
If metrics_relabel_configs are used to drop metrics, an errSeriesDropped
is returned. This shouldn't be used to return an error at the end of an
append() call.
2017-05-05 14:58:36 +02:00
Fabian Reinartz
e829dbe2be
retrieval: comment out accept header again
2017-04-27 11:46:08 +02:00
Fabian Reinartz
73b8ff0ddc
Merge branch 'master' into dev-2.0
2017-04-27 10:19:55 +02:00
Matt Layher
5e4f5fb5ad
retrieval: make scrape timeout header consistent with others
2017-04-05 14:56:22 -04:00
Alexey Palazhchenko
17f15d024a
Small fixes. (#2578)
...
Fix typos. Simplify with gofmt -s
2017-04-05 14:24:22 +01:00
Matt Layher
fe4b6693f7
retrieval: add Scrape-Timeout-Seconds header to each scrape request (#2565)
...
Fixes #2508.
2017-04-04 18:26:28 +01:00
Fabian Reinartz
8ffc851147
Merge branch 'master' into dev-2.0
2017-04-04 15:17:56 +02:00
Julius Volz
815762a4ad
Move retrieval.NewHTTPClient -> httputil.NewClientFromConfig
2017-03-20 14:17:04 +01:00
Fabian Reinartz
c389193b37
Merge branch 'master' into dev-2.0
2017-03-17 16:27:07 +01:00
Fabian Reinartz
d9fb57cde4
*: Simplify []byte to string unsafe conversion
2017-03-07 11:41:11 +01:00
Erdem Agaoglu
8809735d7f
Setting User-Agent header ( #2447 )
2017-02-28 09:59:33 -04:00
Fabian Reinartz
cc0ff26f1f
retrieval: handle GZIP compression ourselves
...
The automatic GZIP handling of net/http does not preserve
buffers across requests and thus generates a lot of garbage.
We handle GZIP ourselves to circumvent this.
2017-02-22 13:25:25 +01:00
Fabian Reinartz
5772f1a7ba
retrieval/storage: adapt to new interface
...
This simplifies the interface to two add methods for
appends with labels or faster reference numbers.
2017-02-02 13:05:46 +01:00
Fabian Reinartz
1d3cdd0d67
Merge branch 'master' into dev-2.0-rebase
2017-01-30 17:43:01 +01:00
Fabian Reinartz
598e2f01c0
retrieval: don't erroneously break appending
2017-01-17 08:39:18 +01:00
Fabian Reinartz
c691895a0f
retrieval: cache series references, use pkg/textparse
...
With this change the scraping caches series references and only
allocates label sets if it has to retrieve a new reference.
pkg/textparse is used to do the conditional parsing and reduce
allocations from 900B/sample to 0 in the general case.
2017-01-16 12:03:57 +01:00
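The reference cache described above can be sketched as a map from the raw scraped metric text to a storage reference, so label sets are only parsed and allocated the first time a series is seen. The types and names below are illustrative, not the actual appender interface.

```go
package main

import "fmt"

type ref uint64

// appender caches series references keyed by the scraped metric text.
type appender struct {
	next   ref
	series map[string]ref // scraped metric text -> series reference
}

func (a *appender) add(metricLine string, val float64) ref {
	if r, ok := a.series[metricLine]; ok {
		return r // fast path: no label-set allocation, reuse the reference
	}
	// Slow path: the real code parses the labels here and asks the
	// storage for a new reference; we just mint one.
	a.next++
	a.series[metricLine] = a.next
	return a.next
}

func main() {
	a := &appender{series: map[string]ref{}}
	r1 := a.add(`http_requests_total{code="200"}`, 1)
	r2 := a.add(`http_requests_total{code="200"}`, 2)
	fmt.Println(r1 == r2) // same series text, same cached reference
}
```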
Fabian Reinartz
ad9bc62e4c
storage: extend appender and adapt it
2017-01-13 14:48:01 +01:00
Björn Rabenstein
ad40d0abbc
Merge pull request #2288 from prometheus/limit-scrape
...
Add ability to limit scrape samples, and related metrics
2017-01-08 01:34:06 +01:00
beorn7
3610331eeb
Retrieval: Do not buffer the samples if no sample limit configured
...
Also, simplify and streamline the code a bit.
2017-01-07 18:18:54 +01:00
Fabian Reinartz
e631a1260d
retrieval: use separate appender per target
2016-12-30 21:35:35 +01:00
Fabian Reinartz
f8fc1f5bb2
*: migrate ingestion to new batch Appender
2016-12-29 11:03:56 +01:00
Brian Brazil
f421ce0636
Remove label from prometheus_target_skipped_scrapes_total (#2289)
...
This avoids it not being initialised, and breaking out by
interval wasn't particularly useful.
Fixes #2269
2016-12-16 18:00:52 +00:00
Brian Brazil
30448286c7
Add sample_limit to scrape config.
...
This imposes a hard limit on the number of samples ingested from the
target. This is counted after metric relabelling, to allow dropping of
problematic metrics.
This is intended as a very blunt tool to prevent overload due to
misbehaving targets that suddenly jump in sample count (e.g. adding
a label containing email addresses).
Add metric to track how often this happens.
Fixes #2137
2016-12-16 15:10:09 +00:00
Brian Brazil
c8de1484d5
Add scrape_samples_post_metric_relabeling
...
This reports the number of samples post any keep/drop
from metric relabelling.
2016-12-13 17:32:11 +00:00
Brian Brazil
06b9df65ec
Refactor and add unittests to scrape result handling.
2016-12-13 16:49:17 +00:00
Brian Brazil
b5ded43594
Allow buffering of scraped samples before sending them to storage.
2016-12-13 15:01:35 +00:00
Fabian Reinartz
200bbe1bad
config: extract SD and HTTPClient configurations
2016-11-23 18:23:37 +01:00
Fabian Reinartz
47623202c7
retrieval: remove metric namespaces
2016-11-23 09:17:04 +01:00
Fabian Reinartz
d7f4f8b879
discovery: move TargetSet into discovery package
2016-11-23 09:14:44 +01:00