When a Prometheus alerting rule fires, the Prometheus server sends a notification to the Alertmanager, which is then responsible for further processing, e.g. routing the alert to an appropriate notification channel (e-mail, Slack, ...). To test an Alertmanager configuration, it is useful to trigger alerts directly via Alertmanager's API. That API is not documented on the Prometheus website, but it is easy enough to figure out how it works.
To see what's going on, I created a simple Prometheus alerting rule that checks for a server that isn't there, triggering an alert:
```yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up{job="node"} == 0
        labels:
          severity: critical
        annotations:
          summary: Instance is down
```
Instead of an Alertmanager instance, we run netcat to see the API request:
```
$ nc -l 9093
POST /api/v1/alerts HTTP/1.1
Host: localhost:9093
User-Agent: Prometheus/2.21.0
Content-Length: 330
Content-Type: application/json

[{"labels":{"alertname":"InstanceDown","instance":"localhost:8080","job":"node","severity":"critical"},"annotations":{"summary":"Instance is down"},"startsAt":"2020-09-13T12:27:02.153716049Z","endsAt":"2020-09-13T12:31:02.153716049Z","generatorURL":"http://sia:9090/graph?g0.expr=up%7Bjob%3D%22node%22%7D+%3D%3D+0\u0026g0.tab=1"}]
$
```
That seems simple enough. Let's format it and put it into a shell script:
```sh
#! /usr/bin/env sh

URL="http://localhost:9093/api/v1/alerts"

curl -si -X POST -H "Content-Type: application/json" "$URL" -d '
[
  {
    "labels": {
      "alertname": "InstanceDown",
      "instance": "localhost:8080",
      "job": "node",
      "severity": "critical"
    },
    "annotations": {
      "summary": "Instance is down"
    },
    "generatorURL": "http://localhost:9090/graph"
  }
]
'
```
I have removed the startsAt and endsAt attributes, which are optional. The payload contains the annotations and labels from the alerting rule, plus some additional labels (instance, job) that Prometheus added for us.
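A side note on those timestamps: Alertmanager identifies an alert by its label set, and an alert whose endsAt lies in the past is treated as resolved. So you can "un-fire" the test alert by re-sending it with identical labels and a past endsAt. A sketch of that idea (the GNU `date -d` call and its portability fallback are my assumptions, not part of the original setup):

```shell
#! /usr/bin/env sh
# Resolve the test alert: re-send it with the same label set and an
# endsAt in the past. Labels must match the firing alert exactly.
URL="http://localhost:9093/api/v1/alerts"

# RFC 3339 UTC timestamp; `date -d` is GNU-specific, so fall back to "now"
# (which also resolves the alert) on other systems.
PAST=$(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
       || date -u +%Y-%m-%dT%H:%M:%SZ)

curl -si -X POST -H "Content-Type: application/json" "$URL" -d '
[
  {
    "labels": {
      "alertname": "InstanceDown",
      "instance": "localhost:8080",
      "job": "node",
      "severity": "critical"
    },
    "endsAt": "'"$PAST"'"
  }
]
' || echo "POST failed (is Alertmanager running on localhost:9093?)"
```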
Now you can use the script to send the alert to Alertmanager directly.
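Before pointing the script at a real Alertmanager, it can be handy to confirm that the payload is well-formed JSON. A small local check, assuming python3 is available (this helper is my addition, not part of the original script):

```shell
#! /usr/bin/env sh
# Sanity-check the alert payload locally before POSTing it anywhere.
PAYLOAD='
[
  {
    "labels": {
      "alertname": "InstanceDown",
      "instance": "localhost:8080",
      "job": "node",
      "severity": "critical"
    },
    "annotations": { "summary": "Instance is down" },
    "generatorURL": "http://localhost:9090/graph"
  }
]
'

# python3 -m json.tool exits non-zero on invalid JSON.
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload is valid JSON"
```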