Add metrics that count how many running processes are linking to deleted
libraries on each machine. Deleted libraries are usually outdated
libraries, and outdated libraries may have known security
vulnerabilities.
The rationale behind storing these as metrics is to allow the rollout
of security fixes to be tracked across a fleet of machines, ensuring
that all affected processes are restarted (e.g. via a reboot).
I'm parsing the output from `/proc/*/maps` because using `lsof -d DEL`
can be too slow, particularly if you have sockets that bind to
thousands of IP addresses.
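
As a rough illustration of the approach (a minimal sketch rather than
the actual script; the function name deleted_mappings is mine), scanning
the maps files looks something like this:

    import glob

    def deleted_mappings():
        # Collect (maps_path, library_path) pairs for every file-backed
        # mapping that the kernel has marked as deleted. Using a set
        # means each process/library pair is counted at most once, even
        # if the library is mapped in several segments.
        mappings = set()
        for maps_path in glob.glob('/proc/*/maps'):
            try:
                with open(maps_path) as maps_file:
                    for line in maps_file:
                        line = line.rstrip()
                        # The kernel appends " (deleted)" to the pathname
                        # of a mapping whose backing file was unlinked.
                        if not line.endswith('(deleted)'):
                            continue
                        # Fields: address perms offset dev inode pathname.
                        # Taking field 6 as the whole pathname relies on
                        # the path containing no spaces (see the caveat
                        # at the end of this message).
                        pathname = line.split(None, 5)[5]
                        pathname = pathname[:-len(' (deleted)')]
                        mappings.add((maps_path, pathname))
            except IOError:
                # The process may have exited between glob() and open().
                continue
        return mappings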
The metric labels include the library path and the base filename, which
lets us pinpoint the exact path of the deleted library while also
allowing aggregation on the library name (or approximations of it)
even if library locations differ between operating system versions.
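
To make that split concrete, here is a hedged sketch of how the two
labels could be derived and the per-library process counts emitted,
building on the illustrative deleted_mappings() above (format_metrics
is a name I've made up; the real script may be structured differently):

    import os
    from collections import Counter

    def format_metrics(mappings):
        # mappings: set of (maps_path, pathname) pairs, at most one per
        # process and deleted library, so the Counter below counts
        # processes rather than individual memory segments.
        counts = Counter()
        for _maps_path, pathname in mappings:
            # library_path (the directory) pinpoints the exact location;
            # library_name (the base filename) supports aggregation even
            # when the install location differs between OS versions.
            counts[(os.path.dirname(pathname),
                    os.path.basename(pathname))] += 1

        lines = [
            '# HELP node_processes_linking_deleted_libraries '
            'Count of running processes that link a deleted library',
            '# TYPE node_processes_linking_deleted_libraries gauge',
        ]
        for (library_path, library_name), count in sorted(counts.items()):
            lines.append(
                'node_processes_linking_deleted_libraries'
                '{library_path="%s",library_name="%s"} %d'
                % (library_path, library_name, count))
        return '\n'.join(lines)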
The metrics output and the CPU time consumed are as follows:
    user@host:~$ time sudo python processes.py
    # HELP node_processes_linking_deleted_libraries Count of running processes that link a deleted library
    # TYPE node_processes_linking_deleted_libraries gauge
    node_processes_linking_deleted_libraries{library_path="/usr/lib/locale",library_name="locale-archive"} 3
    node_processes_linking_deleted_libraries{library_path="/usr/lib/x86_64-linux-gnu",library_name="libevent-2.0.so.5.1.9"} 4

    real    0m0.071s
    user    0m0.030s
    sys     0m0.041s
Including the library filename and path will result in reasonably high
metrics cardinality; however, I think the benefits when an urgent
security patch is being deployed outweigh concerns around cardinality.
This script assumes that library files do not contain spaces in their
path.
Signed-off-by: Matt Bostock <mbostock@cloudflare.com>