Note

Dagster Asset Checks from dbt Tests

How Dagster (since version 1.7) automatically converts dbt tests into asset checks: severity mapping, health badges, and what this means for unified data quality monitoring.

Planted
dbt · data quality · testing · automation

Since Dagster 1.7, dbt tests are automatically pulled in as asset checks. Every not_null, unique, accepted_values, and custom data test appears in the Dagster UI as a quality check attached to the relevant asset. No extra configuration needed beyond the standard dagster-dbt mapping.

This is significant because it means your existing dbt test suite — the one you already maintain in your schema.yml files — becomes your Dagster quality monitoring layer automatically. You don’t need to define quality checks in two places.

How It Works in Practice

When you materialize a dbt model through Dagster, its tests run as part of the dbt build command. In the Dagster UI, each asset shows a health badge:

  • Green means the asset was materialized and all checks passed.
  • Red means a check failed. You can click through to see exactly which test failed and why.

The feedback loop is immediate: materialize, test, report — all in one operation, visible in one UI.

Schema Tests vs. Data Tests

Schema tests and data tests behave slightly differently in how they attach to assets.

Schema tests — like not_null on a column or unique on a primary key — attach to the specific model they test. This is straightforward: a not_null test on mrt__finance__orders.order_id becomes a check on the mrt__finance__orders asset.

Generic data tests that reference multiple models attach to the primary model. If you have a relationships test validating that mrt__finance__orders.customer_id references mrt__core__customers.customer_id, the check attaches to mrt__finance__orders (the model where the test is declared).
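For instance, the relationships test described above would be declared on the orders model's schema.yml, which is why the check lands on that asset (a sketch; model and column names follow the example above):

```yaml
models:
  - name: mrt__finance__orders
    columns:
      - name: customer_id
        data_tests:
          - relationships:
              to: ref('mrt__core__customers')
              field: customer_id
```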

For teams using custom generic tests or package-based tests from dbt-utils and dbt-expectations, the same mapping applies. Any test that dbt recognizes in manifest.json becomes a Dagster asset check.
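Conceptually, the mapping works off manifest.json: each test node records the model it is attached to, and the integration turns that into a check on the corresponding asset. A rough stdlib-only illustration (the manifest dict below is a hand-written miniature for demonstration, not real dagster-dbt code):

```python
# Miniature, hand-written slice of a dbt manifest.json -- real manifests
# are much larger; this only illustrates the test -> asset mapping.
manifest = {
    "nodes": {
        "model.proj.mrt__finance__orders": {"resource_type": "model"},
        "model.proj.mrt__core__customers": {"resource_type": "model"},
        "test.proj.not_null_orders_order_id": {
            "resource_type": "test",
            "attached_node": "model.proj.mrt__finance__orders",
        },
        "test.proj.relationships_orders_customer_id": {
            "resource_type": "test",
            "attached_node": "model.proj.mrt__finance__orders",
        },
    },
}

def checks_by_asset(manifest):
    """Group dbt test nodes by the model they are attached to."""
    out = {}
    for unique_id, node in manifest["nodes"].items():
        if node["resource_type"] == "test":
            out.setdefault(node["attached_node"], []).append(unique_id)
    return out

print(checks_by_asset(manifest))
```

Both tests end up as checks on the orders asset, mirroring what the Dagster UI displays.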

Severity Mapping

You can configure check severity through dbt’s existing severity config, and it maps directly to Dagster’s check behavior:

  • A test with severity: error maps to a blocking check in Dagster. If it fails, downstream materializations are blocked. This is the default behavior.
  • A test with severity: warn maps to a non-blocking check. The warning appears in the UI, but downstream assets can still materialize.
```yaml
models:
  - name: mrt__marketing__daily_spend
    columns:
      - name: spend_amount
        data_tests:
          - not_null:
              severity: error  # Blocks downstream if NULL spend found
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 1000000
              severity: warn  # Warns but doesn't block
```

This maps to Dagster’s blocking vs non-blocking check distinction, so existing dbt test configuration carries over without changes.
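The mapping itself is simple enough to sketch in plain Python (illustrative only; dagster-dbt performs this translation internally):

```python
BLOCKING = "blocking"          # failure halts downstream materializations
NON_BLOCKING = "non_blocking"  # failure is surfaced but does not halt

def check_behavior(dbt_severity):
    """Map a dbt test severity to the resulting Dagster check behavior.

    dbt defaults to 'error' when no severity is configured, so an
    unconfigured test becomes a blocking check.
    """
    severity = (dbt_severity or "error").lower()
    if severity == "error":
        return BLOCKING
    if severity == "warn":
        return NON_BLOCKING
    raise ValueError(f"unknown dbt severity: {dbt_severity!r}")

print(check_behavior("error"))  # blocking
print(check_behavior("warn"))   # non_blocking
print(check_behavior(None))     # blocking (dbt's default severity)
```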

What This Replaces

For teams already using dbt testing strategies, the Dagster integration means one UI for both execution and quality. No more checking dbt Cloud for test results and a separate tool for pipeline health. No more correlating timestamps across different systems to figure out whether a test failure happened before or after a particular materialization.

The asset check model also changes how you think about test failures. In standalone dbt, a test failure is logged in the CLI output and maybe reported to a Slack channel. In Dagster, a test failure is visible on the asset itself, in the context of its lineage. You can see at a glance which downstream assets are affected by a failing check, and whether the failure is blocking or just a warning.

The Layered Quality Stack

Asset checks from dbt tests form one layer of a broader quality monitoring approach:

| Layer | Mechanism | What It Catches |
| --- | --- | --- |
| Schema validation | dbt generic tests (unique, not_null, relationships) | Structural integrity violations |
| Business rules | dbt-expectations, singular tests | Domain-specific violations |
| Logic correctness | dbt unit tests | Transformation bugs |
| Anomaly detection | Elementary | Unknown unknowns |
| Freshness | Dagster freshness policies | Stale data |
| Orchestration health | Dagster asset checks (from all above) | Unified view of all the above |

Dagster sits at the top of this stack as the unified view. It doesn’t replace Elementary’s anomaly detection or dbt’s test framework — it surfaces them all in one place with one execution model.

Practical Considerations

Test Execution Overhead

Because tests run as part of dbt build, they add to total materialization time. For models with many tests (10+ per model), this can meaningfully extend the build duration. The standard approach: keep error-severity tests lean (primary keys, not-null on critical columns) and use warn-severity for expensive statistical tests that don’t need to block downstream processing.

Unit Tests Are Different

dbt unit tests (dbt 1.8+) validate transformation logic with mocked inputs. They don’t map to asset checks because they don’t run against real data — they run against mocked fixtures. Exclude unit tests from production runs (dbt build --exclude-resource-type unit_test). They belong in CI and local development, not in Dagster’s production execution.
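For contrast, a dbt unit test is declared against mocked rows rather than real data (a sketch using dbt 1.8+ unit test syntax; the model, input, and column names are illustrative):

```yaml
unit_tests:
  - name: test_daily_spend_aggregation
    model: mrt__marketing__daily_spend
    given:
      - input: ref('stg_ad_spend')
        rows:
          - {spend_date: '2024-01-01', spend_amount: 100}
          - {spend_date: '2024-01-01', spend_amount: 50}
    expect:
      rows:
        - {spend_date: '2024-01-01', spend_amount: 150}
```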

Custom Asset Checks Beyond dbt

Dagster also supports defining asset checks in Python, independent of dbt. This is useful for quality checks that span multiple dbt models or involve logic that’s awkward to express in SQL. For example, checking that the row count in a mart model is within an expected range of its upstream intermediate model, or validating that a Python-generated asset and a dbt model agree on aggregate values.

```python
from dagster import asset_check, AssetCheckResult

@asset_check(asset=my_dbt_assets)
def check_row_count_reasonable(context):
    # Custom validation logic in Python;
    # get_row_count is a project-specific helper, not a Dagster API
    row_count = get_row_count("mrt__finance__orders")
    return AssetCheckResult(
        passed=row_count > 0,
        metadata={"row_count": row_count},
    )
```

These Python-defined checks appear alongside dbt-originated checks in the same UI, giving you a single quality dashboard regardless of where the check logic lives.
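The cross-model row-count comparison mentioned above boils down to a ratio test; stripped of Dagster specifics, the core logic might look like this (the 0.9–1.1 bounds are arbitrary example values, not a recommendation):

```python
def row_count_within_ratio(mart_count, upstream_count,
                           min_ratio=0.9, max_ratio=1.1):
    """Pass if the mart's row count is within an expected ratio of its
    upstream intermediate model. Bounds are illustrative defaults."""
    if upstream_count == 0:
        # An empty upstream should mean an empty mart
        return mart_count == 0
    ratio = mart_count / upstream_count
    return min_ratio <= ratio <= max_ratio

print(row_count_within_ratio(10_500, 10_000))  # True: within 10% of upstream
print(row_count_within_ratio(5_000, 10_000))   # False: mart lost half its rows
```

Wrapped in an `@asset_check`, a function like this returns an `AssetCheckResult` with the ratio as metadata, so drift is visible in the UI before it becomes a hard failure.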

For teams migrating from dbt Cloud to Dagster, the existing test suite — unique and not_null on primary keys, relationships tests on foreign keys, dbt-expectations range checks — transfers without modification. Test results that previously lived in dbt logs, Elementary reports, and CI output appear as first-class metadata on the assets themselves. The only change is where the results are viewed.