Commit graph

21 commits

Author SHA1 Message Date
Julius Volz fb44580110 Cleanup/fix program termination sequence.
Change-Id: I2bc58a2583fb079c9ef383cfc7a5e0fbe613f1cd
2013-12-11 15:40:32 +01:00
Julius Volz d69b85e6c9 Add global label support via Ingesters. 2013-08-13 16:54:15 +02:00
Julius Volz 0003027dce Add needed trailing spaces in logs. 2013-08-12 18:22:48 +02:00
Julius Volz aa5d251f8d Use github.com/golang/glog for all logging. 2013-08-12 17:54:36 +02:00
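glog provides leveled, flag-configured logging in the style of Google's internal library. A minimal usage sketch, assuming nothing about Prometheus's actual call sites:

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers its flags (-logtostderr, -v, ...) on the default
	// flag set, so they become active once flag.Parse() runs.
	flag.Parse()
	defer glog.Flush() // glog buffers; flush before exit

	glog.Info("retrieval layer starting")
	glog.V(1).Info("verbose detail, shown with -v=1")
}
```

By default glog writes to log files; run with -logtostderr to see output on the console.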
Matt T. Proud 06b4a40661 Represent targets in a tabular interface.
This commit renders a target group's endpoints in a tabular fashion so that
their state can be distinguished clearly and concisely.
2013-07-15 15:12:01 +02:00
Julius Volz 9a48f57b66 Continue scraping old targets on SD fail.
When we have trouble resolving the targets for a job via service
discovery, we shouldn't just stop scraping the targets we currently
have.
2013-07-12 22:38:42 +02:00
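A minimal sketch of that policy, assuming a pool type with a ReplaceTargets method (like the one sketched under the race-fix commit below) and the hypothetical discoverViaDNS helper sketched under the DNS-SD commit: a failed lookup is treated as "no news", not "no targets".

```go
package retrieval

import "log"

// refreshTargets re-resolves a job's targets; on error it leaves the
// pool untouched, so scraping of the current targets continues.
func (p *TargetPool) refreshTargets(job string) {
	addrs, err := discoverViaDNS(job) // hypothetical SD lookup, sketched below
	if err != nil {
		log.Printf("SD for job %q failed, keeping current targets: %v", job, err)
		return
	}
	p.ReplaceTargets(addrs)
}
```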
Matt T. Proud 30b1cf80b5 WIP - Snapshot of Moving to Client Model. 2013-06-25 15:52:42 +02:00
Julius Volz d9b4f98b44 Integrate DNS-SD support for discovering job targets. 2013-06-12 18:11:48 +02:00
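DNS-SD publishes targets as SRV records, so discovery reduces to an SRV lookup. A sketch using the standard library's net.LookupSRV; the record name is an example, not taken from this commit:

```go
package retrieval

import (
	"fmt"
	"net"
)

// discoverViaDNS resolves a full SRV record name, e.g.
// "_prometheus._tcp.example.org", into host:port target addresses.
func discoverViaDNS(name string) ([]string, error) {
	// Empty service/proto means: look up the name exactly as given.
	_, srvs, err := net.LookupSRV("", "", name)
	if err != nil {
		return nil, err
	}
	addrs := make([]string, 0, len(srvs))
	for _, srv := range srvs {
		addrs = append(addrs, fmt.Sprintf("%s:%d", srv.Target, srv.Port))
	}
	return addrs, nil
}
```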
Matt T. Proud d4db3cf00b Code Review: Last replacement wins. 2013-06-05 16:29:05 +02:00
Matt T. Proud 9cde48754b Fix race conditions in TargetPool.
The race detector found a few anomalies whereby a TargetPool could
be read during a mutation.  This has been fixed.
2013-06-05 14:44:20 +02:00
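Go's usual fix for reads racing with mutations is an RWMutex around the shared state, verifiable with the race detector (go test -race). A sketch under that assumption; field and method names are illustrative, not the repository's actual API:

```go
package retrieval

import "sync"

// TargetPool guards its target list with an RWMutex so concurrent
// readers cannot race with mutations.
type TargetPool struct {
	mu      sync.RWMutex
	targets []string // target addresses; the real type is richer
}

// Targets returns a copy so callers never observe later mutations.
func (p *TargetPool) Targets() []string {
	p.mu.RLock()
	defer p.mu.RUnlock()
	out := make([]string, len(p.targets))
	copy(out, p.targets)
	return out
}

// ReplaceTargets swaps in a new target list under the write lock.
func (p *TargetPool) ReplaceTargets(addrs []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.targets = addrs
}
```

Returning a copy from Targets keeps callers from holding a slice that a later ReplaceTargets would mutate underneath them.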
Matt T. Proud 8f4c7ece92 Destroy naked returns in half of corpus.
The use of naked return values is frowned upon.  This is the first
of two bulk updates to remove them.
2013-05-16 10:53:25 +03:00
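For readers unfamiliar with the term, a naked return reuses named result parameters without stating them at the return site. An illustrative before/after, not code from this commit:

```go
package example

import "strconv"

// Before: the naked return forces the reader back to the signature
// to see what is actually returned.
func parsePortNaked(s string) (port int, err error) {
	port, err = strconv.Atoi(s)
	return
}

// After: explicit return values keep the data flow visible at the
// return site.
func parsePort(s string) (int, error) {
	port, err := strconv.Atoi(s)
	return port, err
}
```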
Julius Volz f1fc7d717a Allow replacing job targets via HTTP API.
This roughly comprises the following changes:

- index target pools by job instead of scrape interval
- make targets within a pool exchangeable while preserving existing
  health state for targets
- allow exchanging targets via HTTP API (PUT)
- show target lists in /status (experimental, for own debug use)
2013-02-28 21:33:29 +01:00
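A minimal sketch of the first three ideas above: pools indexed by job name rather than scrape interval, and a PUT endpoint that swaps in a new target list. The URL layout, JSON shape, and the ReplaceTargets method are assumptions for illustration:

```go
package retrieval

import (
	"encoding/json"
	"net/http"
)

// replaceTargetsHandler accepts a PUT carrying a JSON list of target
// addresses and exchanges it into the named job's pool.
func replaceTargetsHandler(pools map[string]*TargetPool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != "PUT" {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		pool, ok := pools[r.URL.Query().Get("job")] // indexed by job, not interval
		if !ok {
			http.Error(w, "unknown job", http.StatusNotFound)
			return
		}
		var addrs []string
		if err := json.NewDecoder(r.Body).Decode(&addrs); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		pool.ReplaceTargets(addrs) // existing targets keep their health state
		w.WriteHeader(http.StatusAccepted)
	}
}
```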
Julius Volz 3537edee9f Fix targetpool iteration deadlock. 2013-02-21 19:48:29 +01:00
Matt T. Proud e01b6cdb44 Duration statistics for each target pool.
An open question is how long it takes each target pool to retrieve
state from all participating elements.  This commit starts by
providing insight into that.
2013-01-28 16:36:28 +01:00
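The simplest way to get that insight is to time one full pass over the pool. A minimal sketch with time.Since and the standard log package; how the real commit records the number (log line vs. exported metric) is not shown here:

```go
package retrieval

import (
	"log"
	"time"
)

// scrapeAll runs one pass over a pool and reports how long retrieving
// state from all participating targets took.
func scrapeAll(addrs []string, scrape func(string) error) {
	begin := time.Now()
	for _, a := range addrs {
		_ = scrape(a) // per-target errors elided in this sketch
	}
	log.Printf("retrieved state from %d targets in %s", len(addrs), time.Since(begin))
}
```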
Matt T. Proud ea54751431 Update import paths to new location.
This repository moved from matttproud/prometheus to
prometheus/prometheus, and all import paths need to be updated.
2013-01-27 18:49:45 +01:00
Matt T. Proud f2ded515b7 Support versioned telemetry providers.
client_golang was updated to support full label-oriented telemetry,
which introduced interface incompatibilities with the previous
version of Prometheus.  To alleviate this, a general fetching and
dispatching system has been created that discriminates between input
versions and processes each accordingly.
2013-01-27 17:45:50 +01:00
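A sketch of such a dispatch, assuming the input's version is advertised via a version parameter in the response's Content-Type; the version keys and the ProcessorFunc shape are illustrative:

```go
package retrieval

import (
	"fmt"
	"io"
	"strings"
)

// ProcessorFunc consumes one telemetry payload in a specific format version.
type ProcessorFunc func(r io.Reader) error

var processors = map[string]ProcessorFunc{
	"0.0.1": func(r io.Reader) error { return nil }, // decode the old flat format
	"0.0.2": func(r io.Reader) error { return nil }, // decode the label-oriented format
}

// processorFor discriminates on the version advertised in the
// response's Content-Type and returns the matching processor.
func processorFor(contentType string) (ProcessorFunc, error) {
	for version, p := range processors {
		if strings.Contains(contentType, "version="+version) {
			return p, nil
		}
	}
	return nil, fmt.Errorf("unsupported telemetry version in %q", contentType)
}
```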
Matt T. Proud 190e4e3fa3 `TargetManager and TargetPool` as pointers. 2013-01-15 17:06:17 +01:00
Matt T. Proud 9752f1e61d Refactor Target as interface for testability.
Future tests around the ``TargetPool`` and ``TargetManager`` and
friends will be a lot easier when the concrete behaviors of
``Target`` can be extracted out.  Plus, each ``Target``, I suspect,
will have its own resolution and query strategy.
2013-01-15 16:19:51 +01:00
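A sketch of what the extraction enables, with an assumed method set: once Target is an interface, tests can register a cheap in-memory double instead of a live HTTP endpoint.

```go
package retrieval

// Target is a sketch of the extracted interface; the real method set
// in the repository may differ.
type Target interface {
	// Scrape retrieves current state from the endpoint.
	Scrape() error
	// Address identifies what is being scraped.
	Address() string
}

// fakeTarget is a test double: no network, just a scrape counter.
type fakeTarget struct{ scrapes int }

func (t *fakeTarget) Scrape() error   { t.scrapes++; return nil }
func (t *fakeTarget) Address() string { return "fake:0" }
```

A TargetPool or TargetManager test can then push fakeTarget values and assert on the scrapes counter without any network I/O.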
Matt T. Proud efe61c18fa Refactor target scheduling to separate facility.
``Target`` will be refactored down the road to support various
nuanced endpoint types.  Thus, incorporating the scheduling
behavior within it would be problematic.  To that end, the scheduling
behavior has been moved into a separate assistance type to improve
conciseness and testability.

``make format`` was also run.
2013-01-13 10:43:37 +01:00
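A minimal sketch of such an assistance type, assuming a fixed-interval policy; the name scheduler and its method set are illustrative:

```go
package retrieval

import "time"

// scheduler isolates *when* a target should next be scraped from
// *what* a Target is.
type scheduler struct {
	interval time.Duration
	next     time.Time
}

// Reschedule is called after each scrape completes.
func (s *scheduler) Reschedule(now time.Time) { s.next = now.Add(s.interval) }

// ScheduledFor is the key a priority queue can order targets by.
func (s *scheduler) ScheduledFor() time.Time { return s.next }
```

Keeping this separate means every future Target variant shares one small, independently testable scheduling policy.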
Matt T. Proud 2922def8d0 Use the `TargetManager` for targets. 2013-01-04 17:17:23 +01:00
Matt T. Proud 7a9777b4b5 Create `TargetPool` priority queue.
``TargetPool`` is a pool of targets pending scraping.  For now, it
uses the ``heap.Interface`` from ``container/heap`` to provide a
priority queue from which the system takes the next target to scrape.

It is my supposition that we'll use a model whereby we create a
``TargetPool`` for each scrape interval, into which ``Target``
instances are registered.
2013-01-04 12:17:31 +01:00
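A self-contained sketch of that supposition using heap.Interface; the Target fields and the ordering key are assumptions for illustration:

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

// Target is a hypothetical scrape target with its next scheduled scrape time.
type Target struct {
	Address      string
	scheduledFor time.Time
}

// TargetPool is a min-heap of targets ordered by next scrape time.
type TargetPool []*Target

func (p TargetPool) Len() int            { return len(p) }
func (p TargetPool) Less(i, j int) bool  { return p[i].scheduledFor.Before(p[j].scheduledFor) }
func (p TargetPool) Swap(i, j int)       { p[i], p[j] = p[j], p[i] }
func (p *TargetPool) Push(x interface{}) { *p = append(*p, x.(*Target)) }
func (p *TargetPool) Pop() interface{} {
	old := *p
	t := old[len(old)-1]
	*p = old[:len(old)-1]
	return t
}

func main() {
	now := time.Now()
	pool := &TargetPool{}
	heap.Init(pool)
	heap.Push(pool, &Target{Address: "a:9090", scheduledFor: now.Add(5 * time.Second)})
	heap.Push(pool, &Target{Address: "b:9090", scheduledFor: now.Add(time.Second)})

	next := heap.Pop(pool).(*Target) // the earliest-due target comes out first
	fmt.Println(next.Address)        // b:9090
}
```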