Speaking of Grafana
Now that the situation with Grafana has normalized somewhat, let’s take a quick look at how to get node_exporter set up in NixOS. If you don’t know what that is, worry not. We’ll get to it right after the jump!
So, node_exporter will get you some good metrics from the computers where it’s running: things like CPU usage, disk usage, network stats, and so on. It’s pretty cool.

Let’s say we have our Prometheus + Grafana combo running somewhere in our home dojo, and we want node_exporter running on our NixOS box. How hard can it be? Turns out, not very hard! If you know what you’re doing, I mean. Given that I don’t know what I’m doing, I search for that kind of stuff and hope for the best. Luckily, the best is right there. And the best is Xe Iaso. So let’s, ahem, borrow from Xe’s config and get this thing running. In my case, I just created a module called node_exporter.nix and imported it from configuration.nix. Looks like this:
{
  services.prometheus = {
    exporters = {
      node = {
        enable = true;
        # extra collectors on top of node_exporter's defaults
        enabledCollectors = [ "systemd" "processes" ];
        port = 9100;
      };
    };
  };
}
That’s it. The only differences from Xe’s original are the additional "processes" collector and the explicit port.
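To actually wire the module in, configuration.nix just needs it in the imports list. A minimal sketch, assuming node_exporter.nix lives right next to configuration.nix (adjust the path to wherever you put it):

{
  imports = [
    # the node_exporter module from above
    ./node_exporter.nix
  ];
}

After a nixos-rebuild switch, the exporter should come up listening on port 9100.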
Now, we just need to add our machine to prometheus.yml and create the dashboard. Realistically, you just need to add the host:port to the targets, thusly:
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's our NixOS box's node_exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["my-nix-box.local:9100"]
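One gotcha that can bite here: NixOS ships with its firewall enabled by default, so if Prometheus can’t seem to reach the box, make sure port 9100 is actually open. A minimal sketch, dropped anywhere in the NixOS config (the exporter options also have an openFirewall toggle that does much the same thing):

{
  # let Prometheus scrape node_exporter from across the network
  networking.firewall.allowedTCPPorts = [ 9100 ];
}

Once Prometheus can reach it, the new target should show up as UP on the Targets page, and hitting http://my-nix-box.local:9100/metrics from another machine should dump a wall of raw metrics.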
From there, just import the pre-cooked dashboard into Grafana, and voilà:
Beautiful work. Pat yourself on the back!