The implementation of `sendAll` means that we observe latencies even for
notifications that are considered dropped because an error occurred while
sending them.
Similarly, we count alerts as 'sent' even if an error occurred while sending
them (meaning they may not have been sent at all).
336c7870ea/notifier/notifier.go (L340-L347)
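A minimal sketch of the intended behaviour, assuming illustrative metric names and a hypothetical `sendWithMetrics` helper (this is not the actual Prometheus patch): latency and the 'sent' counter are only updated when the request succeeds, while failures are counted separately.
```go
package notifier

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Illustrative metrics; the real notifier uses different names and registration.
var (
	latency = prometheus.NewSummaryVec(
		prometheus.SummaryOpts{Name: "notifications_latency_seconds"},
		[]string{"alertmanager"},
	)
	sent = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "notifications_sent_total"},
		[]string{"alertmanager"},
	)
	errs = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "notifications_errors_total"},
		[]string{"alertmanager"},
	)
)

// sendWithMetrics only observes latency and counts alerts as sent when the
// send actually succeeded; errors are tracked on their own counter.
func sendWithMetrics(url string, alertCount int, doSend func() error) error {
	begin := time.Now()
	if err := doSend(); err != nil {
		errs.WithLabelValues(url).Inc()
		return err
	}
	latency.WithLabelValues(url).Observe(time.Since(begin).Seconds())
	sent.WithLabelValues(url).Add(float64(alertCount))
	return nil
}
```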
* refactor: move targetGroup struct and CheckOverflow() to their own package
* refactor: move auth and security related structs to a utility package, fix import error in utility package
* refactor: Azure SD, remove SD struct from config
* refactor: DNS SD, remove SD struct from config into dns package
* refactor: ec2 SD, move SD struct from config into the ec2 package
* refactor: file SD, move SD struct from config to file discovery package
* refactor: gce, move SD struct from config to gce discovery package
* refactor: move HTTPClientConfig and URL into util/config, fix import error in httputil
* refactor: consul, move SD struct from config into consul discovery package
* refactor: marathon, move SD struct from config into marathon discovery package
* refactor: triton, move SD struct from config to triton discovery package, fix test
* refactor: zookeeper, move SD structs from config to zookeeper discovery package
* refactor: openstack, remove SD struct from config, move into openstack discovery package
* refactor: kubernetes, move SD struct from config into kubernetes discovery package
* refactor: notifier, use targetgroup package instead of config
* refactor: tests for file, marathon, triton SD - use targetgroup package instead of config.TargetGroup
* refactor: retrieval, use targetgroup package instead of config.TargetGroup
* refactor: storage, use config util package
* refactor: discovery manager, use targetgroup package instead of config.TargetGroup
* refactor: use HTTPClient and TLS config from configUtil instead of config
* refactor: tests, use targetgroup package instead of config.TargetGroup
* refactor: fix targetgroup.Group pointers that were removed by mistake
* refactor: openstack, kubernetes: drop prefixes
* refactor: remove import aliases forced due to vscode bug
* refactor: move main SD struct out of config into discovery/config
* refactor: rename configUtil to config_util
* refactor: rename yamlUtil to yaml_config
* refactor: kubernetes, remove prefixes
* refactor: move the TargetGroup package to discovery/
* refactor: fix order of imports
Currently, when sending alerts via the goroutine within `sendAll`, the value
of `ams` is not passed to the goroutine, so the closure captures the loop
variable and may use an updated value of `ams` by the time it runs.
Example config:
```
alerting:
  alertmanagers:
  - basic_auth:
      username: 'prometheus'
      password: 'test123'
    static_configs:
    - targets:
      - localhost:9094
  - static_configs:
    - targets:
      - localhost:9095
```
In this example alerts sent to `localhost:9094` fail with:
```
level=error ts=2017-10-12T10:03:53.456819948Z caller=notifier.go:445
component=notifier alertmanager=http://localhost:9094/api/v1/alerts
count=1 msg="Error sending alert" err="bad response status 401
Unauthorized"
```
If you change the order to be:
```
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9095
  - basic_auth:
      username: 'prometheus'
      password: 'test123'
    static_configs:
    - targets:
      - localhost:9094
```
It works as expected.
This commit changes the behaviour so that `ams` is passed to the goroutine,
ensuring `n.sendOne` uses the appropriate `http.Client` details.
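A self-contained sketch of the Go pitfall and the fix described here (names such as `alertmanagerSet` are simplified stand-ins, not the actual notifier code): before Go 1.22 the range variable is reused across iterations, so a closure that merely references it may see a later value; passing it as an argument gives each goroutine its own copy.
```go
package main

import (
	"fmt"
	"sync"
)

// alertmanagerSet is a simplified stand-in for the per-Alertmanager
// configuration (client, auth, targets) used in the real notifier.
type alertmanagerSet struct{ name string }

func main() {
	sets := []alertmanagerSet{{"with-basic-auth"}, {"without-basic-auth"}}

	var wg sync.WaitGroup
	for _, ams := range sets {
		wg.Add(1)
		// Fixed form: pass ams as an argument so each goroutine works on its own
		// copy and therefore uses the matching client configuration. A closure
		// that only referenced the loop variable could instead see the value
		// from a later iteration by the time it runs (pre-Go 1.22 semantics).
		go func(ams alertmanagerSet) {
			defer wg.Done()
			fmt.Println("sending via", ams.name)
		}(ams)
	}
	wg.Wait()
}
```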
We need to be able to modify the HTTP POST in Weave Cortex to add
multitenancy information to a notification. Since we only really need a
special header in the end, the other option would be to just allow
passing in headers to the notifier. But swapping out the whole Doer is
more general and allows others to swap out the network-talky bits of the
notifier for their own use. Doing this via contexts here wouldn't work
well, due to the decoupled flow of data in the notifier.
There was no existing interface containing the ctxhttp.Post() or
ctxhttp.Do() methods, so I settled on just using Do() as a swappable
function directly (and with a more minimal signature than Post).
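A hedged sketch of the swappable `Do` idea described above; the `Options` struct, the exact function signature, and the `X-Scope-OrgID` header are illustrative assumptions rather than the definitive Prometheus/Cortex API.
```go
package notifier

import (
	"context"
	"net/http"
)

// Options configures the notifier. Do performs the HTTP request; if nil, a
// default based on http.Client.Do is used. Swapping it out lets callers such
// as Weave Cortex modify the outgoing request (e.g. add a tenant header).
type Options struct {
	Do func(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error)
}

func defaultDo(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) {
	return client.Do(req.WithContext(ctx))
}

// withTenant wraps the default Do to inject a multitenancy header before the
// request goes out; the header name here is only an example.
func withTenant(tenant string) func(context.Context, *http.Client, *http.Request) (*http.Response, error) {
	return func(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) {
		req.Header.Set("X-Scope-OrgID", tenant)
		return defaultDo(ctx, client, req)
	}
}
```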
The current description does not accurately describe when the metric is incremented.
Aside from Alertmanager missing from the configuration, `prometheus_notifications_dropped_total` is also incremented when errors occur while sending alert notifications to Alertmanager, when the notification queue is full, or when the number of notifications to be sent exceeds the queue capacity.
I think describing these cases generically as 'errors' is more useful than the current description.
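As a hedged sketch (not necessarily the exact wording that landed), the metric's help string could describe these cases generically:
```go
package notifier

import "github.com/prometheus/client_golang/prometheus"

// Illustrative only: the help text frames all of the cases above as errors
// when sending, rather than only a missing Alertmanager configuration.
var droppedTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "prometheus_notifications_dropped_total",
	Help: "Total number of alerts dropped due to errors when sending to Alertmanager.",
})
```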