To cover the cases where stale markers may not be available,
we need to infer the interval and mark series stale based on that.
Since we're lacking stale markers, this is less accurate, but
it should be good enough for these cases.
We need 4 intervals: if, say, we had data at t=0 and t=10
coming via federation, the next data point should be at t=20. However, it
could take until t=30 for it actually to be ingested, t=40 for it to be
scraped via federation, and t=50 for that to be ingested.
We then add 10% on top of that for slack, as we do elsewhere.
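A minimal sketch of this heuristic, with illustrative names (the real scrape/storage code differs; `presumedStale`, `lastSample`, and `interval` are assumptions):

```go
package sketch

import "time"

// presumedStale reports whether a series without stale markers should be
// considered stale: allow 4 inferred intervals (next point, its ingestion,
// the federation scrape, and its ingestion) plus 10% slack.
func presumedStale(now, lastSample time.Time, interval time.Duration) bool {
	staleAfter := time.Duration(float64(4*interval) * 1.1)
	return now.Sub(lastSample) > staleAfter
}
```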
* Use request.Context() instead of a global map of contexts.
* Add some basic opentracing instrumentation on the query path.
* Remove tracehandler endpoint.
This is needed for federating non-instance-level metrics, so they don't
end up with the instance label of the Prometheus target.
Also sort external labels, so that label output order is consistent.
* Fixed int64 overflow for timestamps in v1/api parseDuration and parseTime
This led to unexpected results on a wrong query with "(...)&start=148966367200.372&end=1489667272.372".
That query is wrong because of `start > end`, but an internal int64 overflow actually caused start to become something around MinInt64 (a huge negative value), so it passed validation.
BTW: Not sure whether a negative timestamp even makes sense.. but model.Earliest is actually MinInt64; can someone explain to me why?
Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
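A sketch of the overflow and the fix, under the assumption that the conversion looked roughly like this (the real api/v1 code differs in detail):

```go
package sketch

import (
	"math"
	"strconv"
	"time"
)

// parseTime converts a Unix timestamp with optional fractional seconds,
// e.g. "1489667272.372", into a time.Time.
func parseTime(s string) (time.Time, error) {
	t, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	// Broken variant: time.Unix(0, int64(t*float64(time.Second))) -- for
	// large inputs the nanosecond product overflows int64 and wraps to a
	// huge negative value that still passes start/end validation.
	// Splitting seconds and fraction first avoids the overflow.
	sec, frac := math.Modf(t)
	return time.Unix(int64(sec), int64(frac*float64(time.Second))), nil
}
```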
* Added missing trailing periods on comments.
Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
* Moved to using only `<` and `>`; removed the equals.
Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
Expose buildQueryUrl, refactor dispatch to use it
buildQueryUrl will allow users to execute queries over the range of an
existing graph. This will be helpful to select data series they wish to
annotate the graph with, for example.
The fuzzy library didn't try to find a "best match", but settled on the
first fuzzy match that exists. This patch includes a modified version of
the fuzzy library, which recursively tries the rest of the search
string to find a better match and, if one is found, returns that one.
Another small modification is that if a pattern fully matches, it
skips the lookup entirely and returns the highest possible score for
that match.
For some of the queries, the fuzzy lookup was not filtering properly.
The problem is due to the "replace" being made on the query itself. It
accidentally removes only the first underscore. This patch changes it so
that it removes all of the whitespace, letting the fuzzy algorithm do
its magic, which also fixes this problem.
Originally, the underscores were replaced by a space for this specific
reason, to let the user type a space and have the lookup treat it as the
word break.
Fixes #2380
retrieval.Target contains a mutex. It was copied in the Targets()
call. This can potentially wreak a lot of havoc.
It might even have caused the issues reported as #2266 and #2262.
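An illustrative Go sketch of the problem (types simplified, not the actual retrieval package): returning values copies the embedded mutex, so accessors should hand out pointers instead.

```go
package sketch

import "sync"

// Target carries a mutex, so it must not be copied after first use.
type Target struct {
	mtx    sync.Mutex
	labels map[string]string
}

type TargetManager struct {
	mtx     sync.Mutex
	targets []*Target
}

// Targets returns pointers so that callers share the same mutex instead of
// receiving copies of it (the bug was the value-returning variant,
// func (tm *TargetManager) Targets() []Target).
func (tm *TargetManager) Targets() []*Target {
	tm.mtx.Lock()
	defer tm.mtx.Unlock()
	out := make([]*Target, len(tm.targets))
	copy(out, tm.targets)
	return out
}
```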
Right now the /alerts page of Prometheus sorts alerts by severity
(firing, pending, inactive). Once multiple alerts have the same
severity, their order seems to correlate to how they are placed in the
configuration files, but not always. Looking at the code, we make use of
sort.Sort(), which is documented not to provide a stable sort. The
Less() function also only takes the alert state into account.
This change extends the Less() function to provide a lexicographic order
on both the alert state and the name. This means I can finally find the
alerts I'm looking for without using my browser's search feature.
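A sketch of the extended ordering (type and field names are illustrative, not the exact web handler code):

```go
package sketch

import "sort"

type alertState int

const (
	stateInactive alertState = iota
	statePending
	stateFiring
)

type alert struct {
	State alertState
	Name  string
}

type byStateAndName []*alert

func (a byStateAndName) Len() int      { return len(a) }
func (a byStateAndName) Swap(i, j int) { a[i], a[j] = a[j], a[i] }

// Less orders firing before pending before inactive and breaks ties
// lexicographically by name, so the result is deterministic even though
// sort.Sort is not stable.
func (a byStateAndName) Less(i, j int) bool {
	if a[i].State != a[j].State {
		return a[i].State > a[j].State
	}
	return a[i].Name < a[j].Name
}

func sortAlerts(alerts []*alert) { sort.Sort(byStateAndName(alerts)) }
```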
We write federation responses in a streaming fashion. So after
the first byte has been written, the status header is fixed. We cannot
return an HTTP error for errors encountered mid-stream, but should just
abort and log instead.
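A minimal sketch of that behavior, assuming a hypothetical collectExpositionLines helper (this is not the real federation code):

```go
package sketch

import (
	"fmt"
	"log"
	"net/http"
)

// federate streams the response; once the first byte is written the status
// code is committed, so mid-stream failures are logged and the handler
// returns instead of calling http.Error.
func federate(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	for _, line := range collectExpositionLines(r) {
		if _, err := fmt.Fprintln(w, line); err != nil {
			log.Printf("federation: aborting response: %v", err)
			return
		}
	}
}

// collectExpositionLines stands in for the real metric gathering.
func collectExpositionLines(r *http.Request) []string { return nil }
```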
This also adds the moment.js library, which is a dependency of it.
Following conventions in the web/ui directory, I am not including the original
sources or LICENSE files.
If an existing request is aborted due to a new request, ignore the completion of the initial request.
Example:
1. Chrome dev tools: enable 5 second network latency
2. Execute query
3. A second later, execute the query again
4. Currently, the spinner will hide, and the stats will immediately display, as if the request had completed. Instead, the spinner and stats should wait until the 2nd execution finishes.
* Add fuzzy search to /graph textarea
We have a few thousand different metrics, and looking up some of them
can be quite annoying with the simple string matching.
This patch adds a fuzzy search to the textarea lookup box on the /graph
page. It uses a small neat library from github.com/mattyork/fuzzy.
* Add fuzzy lib to NOTICE and re-build assets
Previously, the built assets had a changed file mode.
This extracts Querier as an instantiable and closeable object
rather than just defining extension methods on the storage interface.
This improves composability and allows abstracting query transactions,
which can be useful for transaction-level caches, consistent data views,
and encapsulating teardown.
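A rough sketch of that shape (interface and type names are placeholders, not the exact interface that was merged):

```go
package sketch

// SeriesSet and LabelMatcher stand in for the real storage types.
type SeriesSet interface{}
type LabelMatcher interface{}

// Querier is instantiated per query/transaction and must be closed, which
// gives a natural hook for teardown, consistent views, or per-transaction
// caches.
type Querier interface {
	Select(from, through int64, matchers ...LabelMatcher) (SeriesSet, error)
	Close() error
}

// Queryable replaces "extension methods on the storage" with an object
// that hands out Queriers.
type Queryable interface {
	Querier(mint, maxt int64) (Querier, error)
}
```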
This is based on https://github.com/prometheus/prometheus/pull/1997.
This adds contexts to the relevant Storage methods and already passes
PromQL's new per-query context into the storage's query methods.
The immediate motivation is supporting multi-tenancy in Frankenstein, but
this could also be used by Prometheus's normal local storage to support
cancellations and timeouts at some point.
For Weaveworks' Frankenstein, we need to support multitenancy. In
Frankenstein, we initially solved this without modifying the promql
package at all: we constructed a new promql.Engine for every
query and injected a storage implementation into that engine which would
be primed to only collect data for a given user.
This is problematic to upstream, however. Prometheus assumes that there
is only one engine: the query concurrency gate is part of the engine,
and the engine contains one central cancellable context to shut down all
queries. Also, creating a new engine for every query seems like overkill.
Thus, we want to be able to pass per-query contexts into a single engine.
This change gets rid of the promql.Engine's built-in base context and
allows passing in a per-query context instead. Central cancellation of
all queries is still possible by deriving all passed-in contexts from
one central one, but this is now the responsibility of the caller. The
central query context is now created in main() and passed into the
relevant components (web handler / API, rule manager).
As a next step, the per-query context would have to be passed to the
storage implementation, so that the storage can implement multi-tenancy
or other features based on the contextual information.
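A small sketch of the resulting pattern (Engine/Query wiring is omitted; function names here are illustrative):

```go
package main

import (
	"context"
	"time"
)

func main() {
	// main() owns one central, cancellable context...
	baseCtx, cancelAll := context.WithCancel(context.Background())
	defer cancelAll() // ...so cancelling it still shuts down all queries.

	runQuery(baseCtx, "up")
}

// runQuery derives a per-query context (here with a timeout) from the
// central one and would hand it to the engine and, eventually, the storage.
func runQuery(baseCtx context.Context, expr string) {
	ctx, cancel := context.WithTimeout(baseCtx, 2*time.Minute)
	defer cancel()
	_ = ctx
	_ = expr
}
```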
This will avoid duplicate MetricFamilies, thereby shrinking the size
of the federation payload and also producing legal text format.
Also, add unit tests for federation. They were also needed for the
previous state of the code, but were missing.
This reverts commit aa43d34a86.
This brings back the /graph changes so that @grandbora can continue to
work on the redirect for backwards compatibility. And other changes
can already take the new /graph parameters into account.
This revert will be reverted once v1.1 is released and has its own
release branch. Since we already had changes on top of this, there was
no cleaner way of cutting those changes out.
This commit reverts the following commits:
Revert "Update backend helpers and templates to new url schema"
This reverts commit fc6cdd0611.
Revert "Refactor graph.js"
This reverts commit 445fac56e0.
Revert "Use query parameters in the url"
This reverts commit 3e18d86d8a.
Revert "Point to correct place for GraphLinkForExpression"
This reverts commit 3da825fc76.
Assets are also updated.
There's no corresponding table column for this table header. The
placeholder link for silences was removed in e8800730.
Accordingly, regenerate `web/ui/bindata.go` by running:
make assets format
See discussion in
https://groups.google.com/forum/#!topic/prometheus-developers/bkuGbVlvQ9g
The main idea is that the user of a storage shouldn't have to deal with
fingerprints anymore, and should not need to do an individual preload
call for each metric. The storage interface needs to be made more
high-level to not expose these details.
This also makes it easier to reuse the same storage interface for remote
storages later, as fewer roundtrips are required and the fingerprint
concept doesn't work well across the network.
NOTE: this deliberately gets rid of a small optimization in the old
query Analyzer, where we dedupe instants and ranges for the same series.
This should have a minor impact, as most queries do not have multiple
selectors loading the same series (and at the same offset).
I got feedback from different sources about rules and targets being
too heavy in the status tab if there are lots of them.
This change also allows for more fine-grained locking.
Prometheus is Apache 2 licensed, and most source files have the
appropriate copyright license header, but some were missing it without
apparent reason. Correct that by adding it.
The chunk encoding was hardcoded there because it mostly doesn't
matter what encoding is chosen in that test. Since type 1 is
battle-hardened enough, I'm switching to type 2 here so that we can
catch unexpected problems as a byproduct. My expectation is that the
chunk encoding doesn't matter anyway, as said, but then "unexpected
problems" contains the word "unexpected".
WIP: This needs more tests.
It now gets a from and through value, which it may opportunistically
use to optimize the retrieval. With possible future range indices,
this could be used in a very efficient way. This change merely applies
some easy checks, which should nevertheless solve the use case of
heavy rule evaluations on servers with a lot of series churn.
The idea is the following:
- Only archive series that are at least as old as the headChunkTimeout
(which was already extremely unlikely to happen).
- Then maintain a high watermark for the last archival, i.e. no
archived series has a sample more recent than that watermark.
- Any query that doesn't reach to a time before that watermark doesn't
have to touch the archive index at all. (A production server at
Soundcloud with the aforementioned series churn and heavy rule
evaluations spends 50% of its CPU time in archive index
lookups. Since rule evaluations usually only touch very recent
values, most of those lookups should disappear with this change.)
- Federation with a very broad label matcher will profit from this,
too.
As a byproduct, the unneeded MetricForFingerprint method was removed
from the Storage interface.
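A sketch of the watermark check (field and method names are illustrative, and timestamps are plain int64 milliseconds for simplicity):

```go
package sketch

import "sync"

type persistence struct {
	mtx sync.Mutex
	// archiveHighWatermark: no archived series has a sample more recent
	// than this timestamp.
	archiveHighWatermark int64
}

// mayTouchArchive reports whether a query reaching back to `from` can
// possibly need the archive index at all.
func (p *persistence) mayTouchArchive(from int64) bool {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	return from <= p.archiveHighWatermark
}
```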
This commit simplifies the TargetHealth type and moves the target
status into the target itself. This also removes a race where error
and last scrape time could have been out of sync.
Formalize ZeroSamplePair as return value for non-existing samples.
Change LastSamplePairForFingerprint to return a SamplePair (and not a
pointer to it), which saves allocations in a potentially extremely
frequent call.
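A sketch of the signature change (types simplified): returning the value together with a sentinel avoids one heap allocation per call compared to returning a pointer.

```go
package sketch

import "math"

type SamplePair struct {
	Timestamp int64
	Value     float64
}

// ZeroSamplePair is the formalized "no such sample" return value.
var ZeroSamplePair = SamplePair{Timestamp: math.MinInt64}

type memorySeries struct {
	values []SamplePair
}

// lastSamplePair returns a SamplePair by value; callers compare against
// ZeroSamplePair instead of checking for a nil pointer.
func (s *memorySeries) lastSamplePair() SamplePair {
	if len(s.values) == 0 {
		return ZeroSamplePair
	}
	return s.values[len(s.values)-1]
}
```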
It's actually happening in several places (and for flags, we use the
standard Go time.Duration...). This at least reduces all our
home-grown parsing to one place (in model).
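For reference, a usage sketch of that single entry point, assuming the model package's current home at github.com/prometheus/common/model, which provides ParseDuration:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	// One place for Prometheus-style durations like "5m", "2h", "1d".
	d, err := model.ParseDuration("5m")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // prints "5m"; convertible via time.Duration(d)
}
```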
This enables metric name autocompletion for every word in an expression,
not just the very first one. It would be great to also support all
language keywords during autocompletion in the future.
This adapts some functionality from the Go standard library for string
literal lexing and unquoting/unescaping.
The following string types are now supported:
Double- or single-quoted strings:
These support all escape sequences that Go supports in double-quoted
string literals. The difference is that Prometheus also has
single-quoted strings (instead of single-quoted runes in Go). Raw
newlines are not allowed.
Backtick-quoted raw strings:
Strings quoted in backticks are treated as raw strings just like in Go
and may contain raw newlines and other special characters directly.
Fixes https://github.com/prometheus/prometheus/issues/1122
Fixes https://github.com/prometheus/prometheus/issues/1121
This is with `golint -min_confidence=0.5`.
I left several lint warnings untouched because they were either
incorrect or I felt it was better not to change them at the moment.
Let's remove the silencing links until we actually have support for that.
A silencing link shouldn't only redirect to Alertmanager, but also open a
silencing dialog for the respective alert name or active alert element.
This got broken in
78047326b4
since it stopped using the DefaultServeMux.
This approach will defer pprof requests to the DefaultServeMux, which
may or may not have pprof enabled (in Prometheus, it is included via
main.go). An alternative approach would be to duplicate the four lines in
https://golang.org/src/net/http/pprof/pprof.go#L62. With that
approach, though, we would not automatically gain any new endpoints added
by net/http/pprof or other /debug endpoints in the future.
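A sketch of the chosen approach (the actual router wiring in Prometheus differs; the mux setup here is illustrative):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	mux := http.NewServeMux()
	// Defer everything under /debug/ to the DefaultServeMux, so pprof and
	// any future /debug endpoints are picked up automatically.
	mux.Handle("/debug/", http.DefaultServeMux)
	log.Fatal(http.ListenAndServe(":9090", mux))
}
```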
Besides fixing https://github.com/prometheus/prometheus/issues/805 by
making the entire externally reachable server URL configurable, this
adds tests for the "globalURL" template function and makes it easier to
test other such functions in the future.
This breaks the `web.Hostname` flag (and introduces `web.external-url`).
This flag is likely only used by a few users, so I hope that's
justifiable.
Fixes https://github.com/prometheus/prometheus/issues/805
Also remove hidden input fields that are not used anymore, because the
query params are now passed as JSON to the AJAX function. This also has
the wonderful side effect that we're no longer sending all the other
non-hidden fields along to the query endpoints.
Changes to the UI:
- "Active Since" timestamps are now human-readable.
- Alerting rules are now pretty-printed better.
- Labels are no longer just strings, but alert bubbles (like we do on
the status page for base labels).
- Alert states and target health states are now capitalized in the
presentation layer rather than at the source.
This commit adds a federation handler on /federate. It accepts `match[]`
query parameters containing vector selectors. Their intersection determines
the in-memory metrics that are returned in the same way as the
/metrics endpoint does (modulo sorting).
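An illustrative client-side example of calling the new endpoint (the URL and the selectors are arbitrary):

```go
package main

import (
	"io"
	"net/http"
	"net/url"
	"os"
)

func main() {
	v := url.Values{}
	v.Add("match[]", `up`)
	v.Add("match[]", `{job="node"}`)

	resp, err := http.Get("http://localhost:9090/federate?" + v.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The body is in the same text exposition format as /metrics.
	io.Copy(os.Stdout, resp.Body)
}
```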