node_exporter currently triggers autofs to mount the underlying
filesystem on every scrape. This is undesirable, so ignore autofs
mount points. The underlying filesystem that autofs mounts will still
be monitored once the real filesystem is mounted.
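For illustration, a minimal Go sketch of this kind of filtering when
reading /proc/mounts (hypothetical code, not the collector's actual
implementation; the third whitespace-separated field of each line is
the filesystem type):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// listMounts returns the mount points in /proc/mounts, skipping autofs
// entries so that reading filesystem statistics does not trigger a mount.
func listMounts() ([]string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var mounts []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Fields: device, mount point, fstype, options, dump, pass.
		fields := strings.Fields(scanner.Text())
		if len(fields) < 3 || fields[2] == "autofs" {
			continue
		}
		mounts = append(mounts, fields[1])
	}
	return mounts, scanner.Err()
}

func main() {
	mounts, err := listMounts()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, m := range mounts {
		fmt.Println(m)
	}
}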
These messages get printed all the time, as there are some tokens in
the /proc file that we simply don't support. It's better to keep them
as debug messages, which may come in useful if new tokens start to
appear.
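As a hedged sketch of the idea (the token names and the DEBUG toggle
here are hypothetical; the real collector would use its logger's debug
level):

package main

import (
	"log"
	"os"
)

// debug stands in for the logger's debug level.
var debug = os.Getenv("DEBUG") != ""

// handleToken parses a single token from the /proc file.
func handleToken(token string) {
	switch token {
	case "capacity":
		// ... handle the tokens we support ...
	default:
		// Unsupported tokens show up routinely, so only report them
		// when debugging; the messages hint at new fields worth adding.
		if debug {
			log.Printf("debug: unsupported token %q", token)
		}
	}
}

func main() {
	handleToken("capacity")
	handleToken("some_new_token")
}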
Collect metrics from the StorCLI utility on the health of MegaRAID
hardware RAID controllers and write them to stdout so that they can be
used by the textfile collector.
We parse the JSON output that StorCLI provides.
The script must be run as root or with appropriate capabilities for
storcli to access the RAID card.
Designed to run under Python 2.7, using the system Python provided with
many Linux distributions.
The metrics look like this:
mbostock@host:~$ sudo ./storcli.py
megaraid_status_code 0
megaraid_controllers_count 1
megaraid_emergency_hot_spare{controller="0"} 1
megaraid_scheduled_patrol_read{controller="0"} 1
megaraid_virtual_drives{controller="0"} 1
megaraid_drive_groups{controller="0"} 1
megaraid_virtual_drives_optimal{controller="0"} 1
megaraid_degraded{controller="0"} 0
megaraid_battery_backup_healthy{controller="0"} 1
megaraid_ports{controller="0"} 8
megaraid_failed{controller="0"} 0
megaraid_drive_groups_optimal{controller="0"} 1
megaraid_healthy{controller="0"} 1
megaraid_physical_drives{controller="0"} 24
megaraid_controller_info{controller="0", model="AVAGOMegaRAIDSASPCIExpressROMB"} 1
mbostock@host:~$
- Use the right number of printf() arguments. Use %q where it makes sense.
- Use "DRBD" instead of "Drbd", per Go's style guide.
- Add _total suffixes to counter metrics.
- Mention the unit (bytes) in the documentation strings as well.
This collector exposes most of the useful information that can be found
in /proc/drbd. Sizes are normalised to be in bytes, as /proc/drbd uses
kibibytes.
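A minimal sketch of what these conventions look like with
client_golang (the metric name and help string are illustrative, not
necessarily the collector's exact output):

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Counters get a _total suffix, and the help string repeats the
	// unit, since the kibibytes from /proc/drbd are converted to bytes.
	desc := prometheus.NewDesc(
		"node_drbd_disk_read_bytes_total",
		"Net data read from the local hard disk, in bytes.",
		[]string{"device"}, nil,
	)
	fmt.Println(desc)

	// /proc/drbd reports sizes in kibibytes; normalise to bytes.
	const kiB = 1024.0
	fmt.Println(4164 * kiB)

	// %q quotes and escapes its argument, which keeps log messages
	// about malformed input readable.
	fmt.Printf("unrecognized token %q in /proc/drbd\n", "dr:\t4164")
}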
This change adds a new collector called "nfs" that parses the contents
of /proc/net/rpc/nfs and turns it into metrics. It can be used to
inspect the number of operations per type, but also to keep an eye on
an excessive number of retransmissions, which may indicate connectivity
issues.
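As a rough sketch of the parsing, assuming the usual layout of
/proc/net/rpc/nfs where the "rpc" line carries calls, retransmissions
and authentication refreshes in that order (the metric names here are
illustrative):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/rpc/nfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Only the "rpc" line is handled in this sketch.
		if len(fields) < 4 || fields[0] != "rpc" {
			continue
		}
		calls, _ := strconv.ParseUint(fields[1], 10, 64)
		retrans, _ := strconv.ParseUint(fields[2], 10, 64)
		fmt.Printf("nfs_rpc_calls_total %d\n", calls)
		fmt.Printf("nfs_rpc_retransmissions_total %d\n", retrans)
	}
}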
I've picked the name "nfs", as most operating systems use "nfs" for
the client component and "nfsd" for the server component. If we want
to add stats for the NFS server as well, we should call that collector
"nfsd".