
Elementary alert routing with filters

How to run multiple edr monitor commands with different filters to route alerts by tag, owner, status, or resource type to different channels and incident management tools.

Tags: dbt, elementary, data quality, automation

Channel routing via model metadata (setting channel: finance-data-alerts on a model or directory) handles the common case: route by team ownership. But sometimes you want routing based on failure characteristics — severity, tags, or resource type — rather than who owns the model. That’s where edr monitor’s --filters flag comes in.
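
For reference, the metadata-based routing mentioned above looks roughly like this in a model properties file. The model name and channel are illustrative, and the exact key placement may vary by Elementary version:

```yaml
models:
  - name: mrt__finance__revenue
    meta:
      # Elementary reads the destination channel from model meta;
      # this model's alerts go to #finance-data-alerts.
      channel: finance-data-alerts
```

This covers ownership-based routing; the filter patterns below layer failure-characteristic routing on top of it.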

The core pattern: run edr monitor multiple times in the same pipeline step with different filter arguments and different destination channels. Each invocation queries the same Elementary tables but sends a different subset of alerts to a different destination.

Filtering by tag

Tags on models are the most flexible routing dimension because you control what tags mean. A critical tag routes to a high-urgency channel; a finance tag routes to the finance team’s channel:

```sh
# Critical alerts to the urgent channel
edr monitor --filters tags:critical --slack-channel-name critical-alerts

# Finance team alerts
edr monitor --filters tags:finance --slack-channel-name finance-data
```

This works when you’ve applied tags consistently in your model YAML:

```yaml
models:
  - name: mrt__finance__revenue
    config:
      tags: ["critical", "finance"]
```

A model tagged with both critical and finance will appear in both runs — which is usually what you want for a model that’s both high-stakes and team-owned.

Filtering by owner

Route alerts to the team channel for whoever owns the failing model:

```sh
edr monitor --filters owners:@finance-team --slack-channel-name finance-data
```

The owners filter matches against the owner field in model metadata. This is complementary to tag-based routing — you might route by owner to the team channel for routine attention and by tag to a critical channel for immediate response.
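
A minimal sketch of the owner metadata this filter matches against — the model name and handle are illustrative, and the exact meta key Elementary reads may vary by version:

```yaml
models:
  - name: mrt__finance__revenue
    meta:
      # The owners filter matches this value.
      owner: "@finance-team"
```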

Filtering by status

Not every failure is equal. fail means the test crossed its failure threshold; warn means it exceeded the warning threshold but not the failure threshold. Different urgencies warrant different channels:

```sh
# Only hard failures -- requires immediate action
edr monitor --filters statuses:fail --slack-channel-name failures-only

# Warnings for review -- someone should look at these, not panic
edr monitor --filters statuses:warn --slack-channel-name warnings-review
```

This pairs well with how you configure test severity in dbt. Tests that are warn severity signal “investigate soon.” Tests that are error severity signal “something is broken right now.” Sending them to different channels makes that distinction visible without anyone having to read the alert carefully to know which it is.
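
Test severity is standard dbt config. A sketch with a hypothetical model and column — a warn-severity test lands in the warnings channel, an error-severity one in the failures channel:

```yaml
models:
  - name: mrt__finance__revenue
    columns:
      - name: revenue_usd
        tests:
          # Exceeding the threshold produces status warn -> #warnings-review
          - not_null:
              config:
                severity: warn
          # Failing this test produces status fail -> #failures-only
          - accepted_values:
              values: ["USD"]
              config:
                severity: error
```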

Combining filters

Multiple --filters flags combine as AND conditions, but comma-separated values within a filter are OR:

```sh
edr monitor \
  --filters resource_types:model \
  --filters tags:finance,marketing \
  --slack-channel-name business-critical
```

This sends alerts for models (not sources or tests) tagged with either finance OR marketing. Both conditions must be true (AND), but either tag value satisfies the tags filter (OR). Useful when you want business-critical coverage without listing every model explicitly.

The multi-step pipeline pattern

Putting this together in GitHub Actions:

```yaml
- name: Alert on critical failures
  run: edr monitor --filters tags:critical --slack-channel-name critical-alerts

- name: Alert finance team
  run: edr monitor --filters tags:finance --slack-channel-name finance-data

- name: Alert marketing team
  run: edr monitor --filters tags:marketing --slack-channel-name marketing-data
```

Each step is independent. A critical finance failure shows up in both #critical-alerts and #finance-data. A non-critical marketing failure only shows up in #marketing-data. The filter logic is in the pipeline, not in the warehouse tables.
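
One caveat on independence: by default, GitHub Actions skips remaining steps if an earlier step exits non-zero. If an edr invocation can fail, one way to keep each route firing regardless is `if: always()` — a sketch, with illustrative step names:

```yaml
- name: Alert finance team
  # Run this routing step even if an earlier step failed.
  if: always()
  run: edr monitor --filters tags:finance --slack-channel-name finance-data
```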

One thing to keep in mind: each edr monitor invocation queries your warehouse. If you have 10 separate routing rules, that’s 10 queries. On BigQuery on-demand pricing, this adds up at scale — not dramatically, but it’s worth consolidating similar filters where possible.
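
One way to keep the rules easy to audit and consolidate is to drive all invocations from a single list. A hypothetical bash sketch — the rule format and channel names are made up, and the edr command is echoed as a dry run (drop the `echo` to actually send alerts):

```shell
#!/usr/bin/env bash
# Routing rules in one place, format "<filter>|<channel>".
rules=(
  "tags:critical|critical-alerts"
  "tags:finance,marketing|business-data"
  "statuses:warn|warnings-review"
)

for rule in "${rules[@]}"; do
  filter="${rule%%|*}"     # text before the first "|"
  channel="${rule##*|}"    # text after the last "|"
  echo edr monitor --filters "$filter" --slack-channel-name "$channel"
done
```

Seeing all three rules side by side makes it obvious when two of them could share one invocation with comma-separated values.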

PagerDuty and incident management (Elementary Cloud)

The OSS edr monitor command routes to Slack and Teams. Elementary Cloud extends this to PagerDuty, Opsgenie, Jira, Linear, ServiceNow, email, and custom webhooks.

The Cloud approach is different from the multi-filter CLI pattern — instead of running multiple monitor commands, you configure alert rules in the Elementary Cloud UI that map test failures to incidents based on status, tags, and resource types.

An example rule: “if status = fail AND tag = critical, create a P1 incident in PagerDuty.” Cloud also handles automatic incident grouping (new failures related to an open incident get grouped rather than creating separate tickets) and automatic resolution when successful runs come through.

For OSS users who need PagerDuty integration without upgrading to Cloud, the bridge approach works: send Elementary alerts to a Slack channel that triggers PagerDuty via Slack’s native PagerDuty integration. Less elegant than native Cloud integration, but achieves the same on-call notification without the licensing cost.

The triage severity model that determines which alert goes where is covered in Data team on-call strategies.