This reverts commit e3bc6fc9dc, reversing
changes made to 1cf9e5840a.
Conflicts:
retrieval/target_provider.go
Change-Id: Icb6e98fb30419e9e2fe9b686c243702ced372014
This includes the refactorings required to make the HTTP client replaceable
(for testing) and to move the NotificationReq type definitions into the
"notifications" package, so that this package no longer needs to depend on
"rules" and can instead use a representation of the required data that
includes only the necessary fields.
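A minimal sketch of the resulting shape, with illustrative field and helper
names (the real types in the repository may differ): the "notifications"
package owns a slim NotificationReq type and accepts an injectable HTTP
client so tests can substitute a fake.

    package notifications

    import (
        "io"
        "net/http"
    )

    // NotificationReq carries only the fields the notification code needs,
    // so this package no longer has to import "rules".
    type NotificationReq struct {
        Summary     string
        Description string
        Labels      map[string]string
        Value       float64
    }

    // httpPoster is the small slice of *http.Client behaviour we use;
    // tests can provide a fake implementation.
    type httpPoster interface {
        Post(url, contentType string, body io.Reader) (*http.Response, error)
    }

    // NotificationHandler sends notification requests via the injected client.
    type NotificationHandler struct {
        alertmanagerURL string
        client          httpPoster
    }

    // NewNotificationHandler wires in the real HTTP client by default.
    func NewNotificationHandler(url string) *NotificationHandler {
        return &NotificationHandler{alertmanagerURL: url, client: http.DefaultClient}
    }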
This commit forces the extraction framework to read the entire response payload
into a buffer before attempting to decode it, because the underlying Protocol
Buffer message readers do not block on partial messages.
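A minimal sketch of the buffering approach, assuming the pbutil delimited
reader and the client_model types (the decoding helper used by the commit
itself may differ); the point is that the payload is read completely before
any decoding starts.

    package extraction

    import (
        "bytes"
        "io"
        "io/ioutil"
        "net/http"

        "github.com/matttproud/golang_protobuf_extensions/pbutil"
        dto "github.com/prometheus/client_model/go"
    )

    // processProtoResponse reads the complete response body into a buffer and
    // only then decodes the delimited Protocol Buffer messages from it, so the
    // message reader never operates on a partially transferred payload.
    func processProtoResponse(resp *http.Response) ([]*dto.MetricFamily, error) {
        defer resp.Body.Close()

        buf, err := ioutil.ReadAll(resp.Body) // buffer everything up front
        if err != nil {
            return nil, err
        }

        var families []*dto.MetricFamily
        r := bytes.NewReader(buf)
        for {
            mf := &dto.MetricFamily{}
            if _, err := pbutil.ReadDelimited(r, mf); err != nil {
                if err == io.EOF {
                    return families, nil
                }
                return nil, err
            }
            families = append(families, mf)
        }
    }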
If the metrics exported by a process already contain any of a target's
base labels (such as "job" or "instance", but also any manually assigned
target-group label), don't overwrite that label. Instead, add a new label
whose name is the original label name prefixed with "exporter_".
This accommodates intermediate exporter jobs, which might indicate
e.g. the jobs and instances for which they are exporting data.
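A minimal sketch of the merge rule, using plain maps and an illustrative
helper name (the real code operates on its own label-set types):

    // applyBaseLabels merges a target's base labels into an already scraped
    // metric's labels. A clashing label keeps the exporter-provided value,
    // and the target's own value is recorded under "exporter_<name>" instead.
    func applyBaseLabels(metricLabels, baseLabels map[string]string) map[string]string {
        out := make(map[string]string, len(metricLabels)+len(baseLabels))
        for name, value := range metricLabels {
            out[name] = value
        }
        for name, value := range baseLabels {
            if _, exists := out[name]; exists {
                // e.g. the exported "job" stays put; the target's own job
                // value becomes "exporter_job".
                out["exporter_"+name] = value
            } else {
                out[name] = value
            }
        }
        return out
    }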
This commit updates the documentation, Makefiles, formatting, and
code semantics to support the Go 1.1 runtime, which includes
1. ``make advice``,
2. ``make format``, and
3. ``go fix`` on various targets.
This commit introduces explicit memory freeing for the in-memory storage
arenas. Secondarily, the test now uses smaller channel buffer sizes.
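Purely as an illustration of what "explicit freeing" can mean in a
garbage-collected language (type and method names are assumptions, not the
actual storage code): the arena drops its internal references eagerly
instead of waiting for the whole structure to become unreachable.

    // memoryArena is a stand-in for the in-memory storage arena.
    type memoryArena struct {
        fingerprintToSeries map[uint64][]float64
    }

    // Close releases the arena's references explicitly so the garbage
    // collector can reclaim the sample data promptly.
    func (a *memoryArena) Close() {
        for fp := range a.fingerprintToSeries {
            delete(a.fingerprintToSeries, fp)
        }
        a.fingerprintToSeries = nil
    }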
Instead of externally handling timeouts when scraping a target, we set
timeouts on the HTTP connection. This ensures that we don't leak
goroutines on timeouts.
[fixes #181]
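A minimal sketch of a connection-level deadline, assuming net/http (the
helper name and exact transport settings are illustrative):

    package retrieval

    import (
        "net"
        "net/http"
        "time"
    )

    // newDeadlineClient returns an HTTP client whose connections enforce the
    // timeout themselves, so a slow target cannot leave the request goroutine
    // blocked forever.
    func newDeadlineClient(deadline time.Duration) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                ResponseHeaderTimeout: deadline,
                Dial: func(network, addr string) (net.Conn, error) {
                    c, err := net.DialTimeout(network, addr, deadline)
                    if err != nil {
                        return nil, err
                    }
                    // Reads and writes past the deadline fail instead of blocking.
                    if err := c.SetDeadline(time.Now().Add(deadline)); err != nil {
                        c.Close()
                        return nil, err
                    }
                    return c, nil
                },
            },
        }
    }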
Primary changes:
* Strictly typed unmarshalling of metric values
* Schema types are contained by the processor (no "type entity002")
Minor changes:
* Added a ProcessorFunc type for expressing processors as simple
  functions.
* Added a non-destructive `Merge` method to `model.LabelSet` (both are
  sketched below).
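Minimal sketches of the two minor additions, with placeholder types standing
in for the real ones (signatures are approximations of the described
behaviour, not the exact upstream ones):

    package extraction

    import "io"

    // Placeholder types standing in for the real ones.
    type (
        LabelName  string
        LabelValue string
        LabelSet   map[LabelName]LabelValue
        Result     struct{ /* decoded samples would live here */ }
    )

    // Processor consumes an input stream and fills a Result.
    type Processor interface {
        Process(r io.Reader, res *Result) error
    }

    // ProcessorFunc lets a plain function satisfy Processor, analogous to
    // http.HandlerFunc.
    type ProcessorFunc func(io.Reader, *Result) error

    func (f ProcessorFunc) Process(r io.Reader, res *Result) error {
        return f(r, res)
    }

    // Merge returns a new LabelSet with the entries of both sets; the
    // receiver is left untouched, and entries in other win on conflict.
    func (ls LabelSet) Merge(other LabelSet) LabelSet {
        out := make(LabelSet, len(ls)+len(other))
        for name, value := range ls {
            out[name] = value
        }
        for name, value := range other {
            out[name] = value
        }
        return out
    }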
ProcessorForRequestHeader now looks first for a header like
`Content-Type: application/json; schema="prometheus/telemetry";
version="0.0.1"` before falling back to checking
`X-Prometheus-API-Version`.
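A minimal sketch of the selection logic, using the standard library's mime
package; the processor value and error handling are illustrative stand-ins
for the real ones.

    package extraction

    import (
        "fmt"
        "mime"
        "net/http"
    )

    // Processor and Processor001 are stand-ins; the header inspection is the
    // point of this sketch.
    type Processor interface{}

    var Processor001 Processor = struct{}{}

    // ProcessorForRequestHeader prefers the self-describing Content-Type and
    // falls back to the legacy X-Prometheus-API-Version header.
    func ProcessorForRequestHeader(h http.Header) (Processor, error) {
        mediaType, params, err := mime.ParseMediaType(h.Get("Content-Type"))
        if err == nil && mediaType == "application/json" &&
            params["schema"] == "prometheus/telemetry" && params["version"] == "0.0.1" {
            return Processor001, nil
        }

        if h.Get("X-Prometheus-API-Version") == "0.0.1" {
            return Processor001, nil
        }
        return nil, fmt.Errorf("unrecognized telemetry format (Content-Type %q)", h.Get("Content-Type"))
    }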
We're currently timestamping samples with the time at the end of a scrape
iteration. It makes more sense to use a timestamp from the beginning of the
scrape for two reasons:
a) that time is more relevant to the scraped values than the time at the
end of the HTTP GET and JSON decoding work.
b) measuring at the beginning rather than at the completion of a scrape
reduces sample timestamp jitter.
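A minimal sketch of the timing change, with illustrative names and a stubbed
decode step: the timestamp is captured before the HTTP GET, and every sample
of the iteration carries that time.

    package retrieval

    import (
        "io"
        "net/http"
        "time"
    )

    // sample and decodeSamples are placeholders; the timing is the point.
    type sample struct {
        Value     float64
        Timestamp time.Time
    }

    func decodeSamples(r io.Reader) ([]sample, error) { return nil, nil } // stub

    // scrapeOnce records the scrape-start time and stamps all samples with it,
    // rather than with the time after the GET and decoding have finished.
    func scrapeOnce(url string) ([]sample, error) {
        start := time.Now() // beginning of the scrape iteration

        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        samples, err := decodeSamples(resp.Body)
        if err != nil {
            return nil, err
        }
        for i := range samples {
            samples[i].Timestamp = start
        }
        return samples, nil
    }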