Use Alert Routing to Make Sense of Your Alerts
Alert routes allow you to create conditional rules for inbound alerts to FireHydrant from both alerting providers like PagerDuty and Opsgenie and monitoring providers like Honeycomb and Datadog.
These rules allow you to automatically open incidents, send notifications to Slack channels for someone to manually triage and open an incident from there, log alerts in FireHydrant, or simply ignore an alert. You can use powerful conditional statements that take advantage of the incoming data from your alerting provider.
FireHydrant today provides alert routing capabilities for the following providers:
- Alerting (e.g. PagerDuty, Opsgenie)
- Monitoring (e.g. Honeycomb, Datadog)
Prerequisites
Aside from initially setting up the integration (see links above), you'll also want to configure a default alerting channel in Slack. Every alerting and monitoring integration comes with a default rule to notify the default Slack channel you've configured for alerts.
This Slack channel is configured in your Slack integration settings (Settings > Integrations), as seen below:

Overview
To get started with alert routes, navigate to the integration page in FireHydrant for your alerting/monitoring provider. Alongside the configuration of your alerting provider, you’ll find a tab for Alert Routes as well as Alert Logs.

In the Alert Routes tab, you’ll find a default route that sends all your alerts to the channel you set in your Slack integration settings. The default route is the fallback rule that executes if none of your other routes match.

Here, you can add routes and actions for your alerts. For example, you could implement the following scenario for PagerDuty (sketched below):
- Incoming P1 alerts automatically create an incident
- If an incoming alert's title contains "test", ignore it entirely
- Otherwise default to notifying the configured alert channel
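As a rough sketch, those routes might be laid out like this. The field names used here (Priority, Summary) are the common fields described under Route Conditions below; the exact values available to you depend on what your PagerDuty webhook actually sends.

    Route 1:  IF Priority is "P1"           THEN Automatically declare an Incident
    Route 2:  IF Summary contains "test"    THEN Ignore the alert
    Default:  no other route matched        THEN Send an alert to the configured Slack channel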


Route Conditions
Every Alert Route has a Condition and an Action.
A condition allows you to set the logic that controls whether or not an alert route is processed.
Note: Each route executes exclusively. Once the conditions on a route have matched, FireHydrant stops evaluating the other rules and performs the action associated with the matched route.
Conditions are configured from any of the data sent by the alerting provider. There are some common fields across all providers, such as Summary, Priority, and Impacted Infrastructure, but any additional fields from your provider can also be pulled in from the webhook’s request body. You can find the available parameters from the webhook body on each provider's individual documentation page.
Conditions can also be chained together with either “OR” or “AND” operators within each route.
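For example, a single route could require both of the following to be true before its action runs (a hypothetical condition built from the common Priority and Summary fields mentioned above):

    Priority is "P1" AND Summary contains "database"

With “OR” instead, either condition matching would be enough to trigger the route.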

Route Actions
For each route, you can choose one of four actions:
- Automatically declare an Incident in FireHydrant
- Send an alert to a specific Slack Channel
- Create a log for this integration
- Ignore the alert
Automatically Declare an Incident

When automatically opening an incident, you can specify the various fields on the incident using hard-coded content (and, in some cases, Markdown) or Liquid templating, which references the data available from the alert.
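For instance, a hypothetical incident template built on the sample PagerDuty webhook body shown in the Using Liquid Templating section below might look like this (the field paths are taken from that sample and will vary by provider):

    Incident name:
    [PagerDuty] {{ request.body.event.event_type }}

    Incident description:
    **Occurred at**: {{ request.body.event.occurred_at }}
    **Opened by**: {{ request.body.event.agent.summary }}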
Alerting a Slack Channel
When sending alerts to a Slack Channel, you can create a custom template for the Alert title that is sent, and we’ll include a link to the original alert in the message.
You can specify a channel by hard-coded name (e.g. #backend-team), with the Liquid variable referencing the Slack integration ({{ slack_connection.alert_channel }}), or with a parameter from the incoming alert if relevant (#{{ request.body.team_slack_channel }}). Here's an example alert being sent to a Slack channel. See the Using Liquid Templating section below.
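A hypothetical alert title template, again using field paths from the sample PagerDuty body in the Using Liquid Templating section, could look like:

    {{ request.body.event.event_type }} reported by {{ request.body.event.client.name }}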

From here, a user can manually review the notification in the Slack channel and decide to open an incident, which automatically associates the alert with the incident they create.
Logging an Alert
When logging an alert to FireHydrant, you can select the level of the log you’d like to record (Info, Warning, Error, etc.), and you can also specify the message that is included in the log.
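As an illustration, a hypothetical log message template using the sample PagerDuty fields from the Using Liquid Templating section below might be:

    {{ request.body.event.event_type }} from {{ request.body.event.client.name }} at {{ request.body.event.occurred_at }}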
To see the logs from any alerts that have been routed to a log, navigate to the Alert Logs tab in your integration page in FireHydrant.

Here, you can browse a list of alerts, initially filtered to “Warn” level and above. Select another log level to see all alerts at that level and above. To view further details about a log, such as error messages, click “View Context”.

Ignore the alert
Exactly as the name implies, this takes no action and does not even log that the alert came in.
We generally recommend defaulting to logging an alert, as described above, for debugging purposes. But if there are inbound alerts you know are noisy and useless (e.g. tests), you can configure this action to reduce logging volume.
Using Liquid Templating
Any parameter or data from the incoming webhook can be used in any of the text fields in Alert Routing via the request.body variable. For example, given the following incoming alert body from PagerDuty:
{
  "event": {
    "id": "5ac64822-4adc-4fda-ade0-410becf0de4f",
    "event_type": "incident.priority_updated",
    "resource_type": "incident",
    "occurred_at": "2020-10-02T18:45:22.169Z",
    "agent": {
      "html_url": "https://acme.pagerduty.com/users/PLH1HKV",
      "id": "PLH1HKV",
      "self": "https://api.pagerduty.com/users/PLH1HKV",
      "summary": "Tenex Engineer",
      "type": "user_reference"
    },
    "client": {
      "name": "PagerDuty"
    }
  }
}
You would be able to make use of individual parameters like so:
**Occurred at**: {{ request.body.event.occurred_at }}
**Opened by**: {{ request.body.event.agent.summary }}
This flexibility allows you to pass in virtually any data you need from the requesting source and make use of it in Alert Routing.
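If you need to reshape a value, standard Liquid filters should also work in these fields (this is an assumption about FireHydrant's Liquid support; check the rendered output against your own payloads). For example:

    **Occurred at**: {{ request.body.event.occurred_at | date: "%B %d, %Y %H:%M UTC" }}
    **Opened by**: {{ request.body.event.agent.summary | default: "Unknown user" }}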