* fix(discovery/kubernetes/endpoints): react to changes on Pods, because some modifications can occur on them without triggering an update on the related Endpoints (e.g. the Pod phase changing from Pending to Running).
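A minimal sketch of the idea, not the actual Prometheus code: a Pod informer update handler re-enqueues the Endpoints that reference the Pod (the `enqueueEndpointsForPod` helper here is hypothetical):

```go
package endpoints

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// registerPodHandler wires a Pod informer to the Endpoints processing, so that
// Pod-only changes (e.g. phase Pending -> Running) reach the discovery loop.
func registerPodHandler(podInf cache.SharedInformer, enqueueEndpointsForPod func(*v1.Pod)) {
	podInf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, cur interface{}) {
			if pod, ok := cur.(*v1.Pod); ok {
				enqueueEndpointsForPod(pod) // hypothetical: map Pod -> owning Endpoints
			}
		},
	})
}
```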
---------
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Co-authored-by: Guillermo Sanchez Gavier <gsanchez@newrelic.com>
SD Managers take over responsibility for SD metrics registration
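Roughly the shape of the change, with illustrative names rather than the real Prometheus API: the manager takes a `prometheus.Registerer` and owns the registration of SD metrics, instead of each SD mechanism registering them globally:

```go
package discovery

import "github.com/prometheus/client_golang/prometheus"

// Manager is trimmed down to the metrics-related part for this sketch.
type Manager struct {
	sentUpdates prometheus.Counter
}

// NewManager registers the SD metrics on the registerer handed to it; the
// metric name below is hypothetical.
func NewManager(reg prometheus.Registerer) *Manager {
	m := &Manager{
		sentUpdates: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "sd_updates_total",
			Help: "Total number of update groups sent to the SD consumers.",
		}),
	}
	reg.MustRegister(m.sentUpdates)
	return m
}
```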
---------
Signed-off-by: Paulin Todev <paulin.todev@gmail.com>
Signed-off-by: Björn Rabenstein <github@rabenste.in>
Co-authored-by: Björn Rabenstein <github@rabenste.in>
The loop ran indefinitely if the condition wasn't met.
Before, each iteration created a new timer channel, which was always outpaced by
the other timer channel with the smaller duration.
Minor detail: there was also a memory leak, as the resources of the ~10 previous
timers were constantly kept around. With the fix, we may keep the resources of one
timer around for defaultWait, but that isn't worth the changes to make it right.
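A sketch of the bug and the fix (names illustrative): before, the defaultWait timer channel was recreated on every iteration, so the shorter checkWait timer always won the select and the loop never timed out. Creating the timeout channel once fixes that:

```go
package wait

import "time"

// waitFor polls cond every checkWait, giving up after defaultWait. Creating
// the timeout channel once, outside the loop, lets it actually fire; its
// resources may linger for up to defaultWait after an early exit, which is
// the trade-off mentioned above.
func waitFor(cond func() bool, checkWait, defaultWait time.Duration) bool {
	timeout := time.After(defaultWait)
	for !cond() {
		select {
		case <-time.After(checkWait):
		case <-timeout:
			return false
		}
	}
	return true
}
```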
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
Calling `*armnetwork.InterfacesClient.Get()` doesn't work for Scale Set
VM NICs, because those use a different Resource ID format.
Use `*armnetwork.InterfacesClient.GetVirtualMachineScaleSetNetworkInterface()`
instead. This needs both the scale set name and the instance ID, so
add an `InstanceID` field to the `virtualMachine` struct. `InstanceID`
is empty for a VM that isn't a ScaleSetVM.
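A sketch of the resulting call-site logic, assuming the armnetwork SDK's signatures and with the surrounding types simplified (the module import path may need a version suffix):

```go
package azure

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork"
)

type virtualMachine struct {
	ScaleSet   string
	InstanceID string // empty for a VM that isn't a ScaleSetVM
}

// getNIC picks the right SDK call depending on whether the NIC belongs to a
// Scale Set VM, whose Resource IDs the plain Get() cannot handle.
func getNIC(ctx context.Context, c *armnetwork.InterfacesClient, resourceGroup, nicName string, vm virtualMachine) error {
	if vm.InstanceID == "" {
		_, err := c.Get(ctx, resourceGroup, nicName, nil)
		return err
	}
	_, err := c.GetVirtualMachineScaleSetNetworkInterface(ctx, resourceGroup, vm.ScaleSet, vm.InstanceID, nicName, nil)
	return err
}
```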
Signed-off-by: Daniel Nicholls <daniel.nicholls@resdiary.com>
Version 2 introduced a breaking change in the `id` field of all
resources. They were changed from `int` to `int64` to make sure that all
future numerical IDs are supported on all architectures.
You can learn more about this
[here](https://docs.hetzner.cloud/#deprecation-notices-%E2%9A%A0%EF%B8%8F).
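Illustration of the change with simplified structs, not the real SDK types:

```go
package main

import "fmt"

type ServerV1 struct{ ID int }   // v1: width depends on the architecture
type ServerV2 struct{ ID int64 } // v2: 64-bit on every architecture

func main() {
	s := ServerV2{ID: 1 << 40} // larger than the 32-bit range; fine as int64
	fmt.Println(s.ID)
}
```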
Signed-off-by: Julian Tölle <julian.toelle@hetzner-cloud.de>
InstanceSpec struct members are untyped integers, so they can overflow
on 32-bit architectures when bit-shifted left.
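A small demonstration of the overflow class being fixed (the field name and shift are illustrative):

```go
package main

import "fmt"

func main() {
	memGiB := 4                 // e.g. an InstanceSpec memory size in GiB
	bad := memGiB << 30         // plain int: wraps around where int is 32-bit
	good := int64(memGiB) << 30 // explicit 64-bit: correct on all platforms
	fmt.Println(bad, good)
}
```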
Signed-off-by: Daniel Swarbrick <daniel.swarbrick@gmail.com>
Adds a new label to include the ID of the image that an instance is
using. This can be used, for example, to filter a job to only include
instances using a certain image, as that image includes some exporter.
Sometimes the image information isn't available, such as when the image
is private and the user doesn't have the roles required to see it. In
those cases we just don't set the label, as the rest of the information
from the discovery provider can still be used.
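A sketch of that behavior (the label name is hypothetical, not the one the provider actually exports):

```go
package discovery

import "github.com/prometheus/common/model"

type image struct{ ID string }

// buildLabels omits the image label when the image isn't visible to the user,
// leaving the rest of the target's labels usable.
func buildLabels(img *image) model.LabelSet {
	labels := model.LabelSet{}
	if img != nil {
		labels["__meta_instance_image_id"] = model.LabelValue(img.ID)
	}
	return labels
}
```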
Signed-off-by: Taavi Väänänen <hi@taavi.wtf>
Previously, `d.paths` were normalized to backslashes on Windows, even when
the config file used Unix-style separators. The end result meant always watching `./`, so
changes for this config were always ignored:
scrape_configs:
  - job_name: 'envmsc1'
    file_sd_configs:
      - files:
          - 'targets/envmsc1.d/*.yml'
          - 'targets/envmsc1.d/*.yaml'
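A rough illustration of the failure mode, not the actual file_sd code: once the pattern has been rewritten with backslashes, a slash-only `Dir` finds no separator and falls back to `.`, so the watch lands on `./`:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	fmt.Println(path.Dir("targets/envmsc1.d/*.yml")) // "targets/envmsc1.d"
	fmt.Println(path.Dir(`targets\envmsc1.d\*.yml`)) // ".": no slash found
}
```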
Additionally, unlike the other platforms, no warning was emitted on startup
about not being able to install the watch if the directory didn't exist. Now it
is logged.
Signed-off-by: Matt Harbison <mharbison@atto.com>
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.
The exceptions that I have found in our codebase are just these two:
* The `if else` is followed by an additional statement before the next
condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
used. In this case, using `switch` would require tagging the `for`
loop, which probably tips the balance.
Why are `switch` statements more readable?
For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.
I'm sure the aforementioned wise coders can list even more reasons.
In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.
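For illustration, the shape of the rewrite in its simplest form:

```go
package main

import "fmt"

// Before: the second and third conditions hide behind `} else if` / `} else`.
func signIfElse(x int) int {
	if x < 0 {
		return -1
	} else if x == 0 {
		return 0
	} else {
		return 1
	}
}

// After: every condition sits at the same indentation, read top to bottom.
func signSwitch(x int) int {
	switch {
	case x < 0:
		return -1
	case x == 0:
		return 0
	default:
		return 1
	}
}

func main() {
	fmt.Println(signIfElse(-3), signSwitch(0))
}
```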
Signed-off-by: beorn7 <beorn@grafana.com>
We haven't updated golangci-lint in our CI yet, but this commit prepares
for that.
There are a lot of new warnings, and it is mostly because the "revive"
linter got updated. I agree with most of the new warnings, mostly
around not naming unused function parameters (although naming them is
justified in some cases for documentation purposes, while things like mocks
are a good example where not naming the parameter is clearer).
I'm pretty upset that the "empty block" warning now includes `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810
I have disabled "revive" altogether in files where empty blocks are used
excessively, and I have made the effort to add individual
`// nolint:revive` where empty blocks are used just once or twice.
It's borderline noisy, but let's go with it for now.
I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.
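An example of the pattern in question, with the work in the loop head and an intentionally empty body:

```go
package util

// drain discards everything remaining on the channel; all the work happens in
// the head of the for loop.
func drain(ch <-chan int) {
	for range ch { // nolint:revive // intentionally empty block
	}
}
```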
Signed-off-by: beorn7 <beorn@grafana.com>
This makes it easier to connect a log message with the config it relates
to.
Each SD config has a name, either the scrape job name or something like
"config-0" for an Alertmanager config.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
While originally the resync period also forced refreshing from the Kubernetes API server, this was removed some years ago because watching the API server became more stable [1]. Today, a resync just results in all entities being sent to the service discovery again, which is valid from a general Prometheus perspective, but it causes unnecessary CPU load and also breaks service discovery metrics. In particular, it makes it impossible to monitor whether we actually observe changes from the Kubernetes API server (receiving constant updates from Kubernetes service discovery is a pretty valid assumption; nodes, for example, get frequent status updates).
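A sketch of the change, assuming client-go's shared informer constructor: a resync period of 0 disables the periodic replay of the whole store, so handlers only fire for real API server events:

```go
package kubernetes

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// newPodInformer disables resync entirely: a period of 0 means client-go never
// re-sends the cached objects to the handlers.
func newPodInformer(lw cache.ListerWatcher) cache.SharedInformer {
	return cache.NewSharedInformer(lw, &v1.Pod{}, 0)
}
```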
Signed-off-by: Jens Erat <jens.erat@mercedes-benz.com>
* Add VM size label to azure service discovery (#11575)
Signed-off-by: davidifr <davidfr.mail@gmail.com>
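Sketch of the addition; the label name follows the `__meta_azure_machine_*` pattern but should be checked against the documentation:

```go
package azure

import "github.com/prometheus/common/model"

// withSize attaches the VM size to the target's label set.
func withSize(labels model.LabelSet, size string) model.LabelSet {
	labels["__meta_azure_machine_size"] = model.LabelValue(size)
	return labels
}
```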
A new API is available for AddEventHandler: it returns errors and also makes
it possible to cancel handlers.
For the release, we do the easy thing, which is just to log the errors.
We could see how to improve this in the future to handle the errors
properly and cancel the handlers.
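The "easy thing" looks roughly like this (recent client-go returns a registration handle and an error from AddEventHandler; the handle, which RemoveEventHandler would take to cancel the handler, is discarded for now):

```go
package kubernetes

import (
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"k8s.io/client-go/tools/cache"
)

// addHandler just logs registration errors instead of propagating them.
func addHandler(logger log.Logger, inf cache.SharedInformer, h cache.ResourceEventHandler) {
	if _, err := inf.AddEventHandler(h); err != nil {
		level.Error(logger).Log("msg", "Error adding event handler", "err", err)
	}
}
```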
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>