Custom materializations fail in predictable ways. The errors look cryptic at first, but once you’ve seen them a few times, you can diagnose most issues from the error message alone. This note covers the common failure modes, their causes, and a systematic approach to testing new materializations.
Common Errors and Their Causes
"Relation does not exist"
You’re referencing a relation that isn’t in dbt’s metadata cache. This usually means you tried to query or manipulate a relation directly instead of using load_cached_relation().
```jinja
{# Wrong — this queries the database directly #}
{% set exists = adapter.get_relation(database, schema, identifier) %}
```

```jinja
{# Right — this checks the cache #}
{% set existing = load_cached_relation(this) %}
```

The other common cause: you modified a relation outside the cache’s knowledge (dropped or renamed it in a previous step), and now a later step expects it to still exist under its original name. Track your state carefully through the materialization. If you drop a relation, set the variable to none so downstream logic knows it’s gone.

```jinja
{% do adapter.drop_relation(existing_relation) %}
{% set existing_relation = none %} {# Don't forget this #}
```

"Transaction rolled back" or Changes Not Persisting
You forgot adapter.commit(). This one is subtle because the materialization might appear to succeed — dbt doesn’t always report the missing commit as an error. But the table never appears in the warehouse, or it disappears after the session ends.
Always call adapter.commit() before the return statement. It’s step 6 in the six-step structure for a reason.
```jinja
{% do adapter.commit() %}
{{ return({'relations': [target_relation]}) }}
```

"Relation type mismatch"
The existing relation is a view but you’re trying to work with it as a table (or vice versa). This happens when someone changes a model’s materialization type — say, from view to your custom zero_downtime_table — in a project that’s been running for months.
Handle it explicitly at the start of every materialization:
```jinja
{% if existing_relation is not none and existing_relation.type != 'table' %}
  {% do adapter.drop_relation(existing_relation) %}
  {% set existing_relation = none %}
{% endif %}
```

This defensive pattern treats the type mismatch as a first run. The old view gets dropped, and the materialization proceeds as if the table doesn’t exist yet.
Statement Results Not Available
You called statement() without fetch_result=True and then tried to access the result with load_result():
```jinja
{# Wrong — no fetch_result flag #}
{% call statement('validate') %}
  SELECT COUNT(*) FROM {{ temp_relation }}
{% endcall %}
{% set count = load_result('validate')['data'][0][0] %} {# Fails #}
```

```jinja
{# Right — explicit fetch_result=True #}
{% call statement('validate', fetch_result=True) %}
  SELECT COUNT(*) FROM {{ temp_relation }}
{% endcall %}
{% set count = load_result('validate')['data'][0][0] %} {# Works #}
```

The error message for this one is usually a NoneType error or a key error, which doesn’t obviously point to the missing flag.
Grants or Permissions Silently Missing
If you apply grants before run_hooks(post_hooks) or before the relation is in its final state, the grants might target the wrong relation or get overwritten. Follow the standard order: main SQL, post-hooks, then grants and docs.
Also check that you’re passing the right arguments to apply_grants(). The signature expects the relation and a grant config dict from config.get('grants').
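Following that order, the tail of a materialization typically looks something like this. This is a hedged sketch built from dbt's standard `run_hooks`, `should_revoke`, `apply_grants`, and `persist_docs` macros; exact hook arguments vary by adapter and dbt version:

```jinja
{# Sketch: post-build sequence after the main SQL has run and
   the relation is in its final state #}
{% do run_hooks(post_hooks, inside_transaction=True) %}

{# should_revoke() decides whether existing grants must be revoked
   first, e.g. when the relation was fully rebuilt #}
{% set grant_config = config.get('grants') %}
{% set should_revoke = should_revoke(existing_relation, full_refresh_mode=True) %}
{% do apply_grants(target_relation, grant_config, should_revoke=should_revoke) %}

{% do persist_docs(target_relation, model) %}
```

Applying grants after the final rename means they attach to the relation users will actually query, not to a temp relation that is about to disappear.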
Testing a New Materialization
Step 1: Create a Minimal Test Model
Start with the simplest possible model that uses your materialization:
```sql
-- models/test_zero_downtime.sql
{{ config(
    materialized='zero_downtime_table',
    min_row_count=1
) }}

SELECT 1 AS id, 'test' AS value
```

Step 2: Run It Twice
```shell
dbt run --select test_zero_downtime
dbt run --select test_zero_downtime
```

The first run tests the “no existing relation” path. The second run tests the “existing relation found” path — the swap logic, the validation against the old table, the backup and cleanup. Both paths need to work. Many materializations only get tested on the first-run path during development and then break in production on the second run.
Step 3: Check the Compiled SQL
Look at target/compiled/ to see the actual SQL your materialization generated. This is invaluable for debugging because you see exactly what statement dbt tried to execute, with all Jinja resolved.
```shell
# Find the compiled output for your test model
cat target/compiled/your_project/models/test_zero_downtime.sql
```

For materializations that generate multiple statements (the zero-downtime pattern has CREATE TABLE, COUNT, and multiple RENAMEs), check target/run/ as well — it contains the full execution log with each statement’s result.
Step 4: Test the Failure Paths
This is where most people stop too early. Test that your materialization fails correctly:
```sql
-- Test: validation should block empty tables
{{ config(
    materialized='zero_downtime_table',
    min_row_count=100
) }}

SELECT 1 AS id WHERE FALSE {# Returns zero rows #}
```

Run this and verify: the error message is clear, the old table (if one exists) is preserved, and no temp or backup tables are left behind. A materialization that fails gracefully is more valuable than one that succeeds — the success path runs once, but the failure path is what saves you at 2 AM.
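One way to confirm nothing was left behind is to query the information schema for your materialization's naming conventions. BigQuery syntax shown; `your_dataset` and the `__dbt_tmp` / `__dbt_backup` suffixes are assumptions here, so substitute whatever your materialization actually uses:

```sql
-- Look for temp or backup relations a failed run may have orphaned
SELECT table_name
FROM your_dataset.INFORMATION_SCHEMA.TABLES
WHERE table_name LIKE '%__dbt_tmp%'
   OR table_name LIKE '%__dbt_backup%'
```

Zero rows after a deliberately failed run means your cleanup logic works.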
Step 5: Verify Post-Build Operations
For materializations like the secured table, verify that row access policies and other DDL actually got applied:
```sql
-- Check row access policies on BigQuery
SELECT *
FROM `region-us`.INFORMATION_SCHEMA.OBJECT_PRIVILEGES
WHERE object_name = 'your_model_name'
```

Don’t trust that the materialization succeeded just because dbt reported success. Query the warehouse directly to confirm the expected state.
Debugging Techniques
Use Log Statements
The {{ log() }} function is your printf debugger inside materializations:
```jinja
{{ log("existing_relation: " ~ existing_relation, info=true) }}
{{ log("row_count: " ~ row_count, info=true) }}
{{ log("config min_row_count: " ~ min_row_count, info=true) }}
```

The info=true flag prints to stdout so you see it in the dbt output. Without it, log messages only appear in the debug log file.
Use exceptions.raise_compiler_error() Strategically
During development, use compiler errors as breakpoints to inspect state at specific points in the materialization:
```jinja
{{ exceptions.raise_compiler_error(
    "DEBUG: existing=" ~ existing_relation ~ " temp=" ~ temp_relation
) }}
```

This stops execution and prints whatever variables you need to inspect. Remove these before deploying — they’re the equivalent of debugger statements in JavaScript.
Check the dbt Debug Log
Run with dbt --debug run --select test_model for verbose output. The debug log shows every SQL statement sent to the warehouse, the response, and the timing. For materializations that issue multiple statements, this is the only way to see which specific statement failed.
Materialization Development Workflow
A practical workflow for building and iterating on materializations:
- Start with dbt’s built-in `table` materialization source code as a reference. Read it to understand the patterns dbt itself uses. You can find it in the dbt-core or adapter package source.
- Write the materialization with the simplest possible behavior first — just the swap, no validation.
- Test with a trivial model (the `SELECT 1` approach above).
- Add features one at a time: validation, relative checks, post-build operations.
- Test each addition against both the first-run and subsequent-run paths.
- Test the failure paths explicitly.
- Only then apply it to real models, starting with non-critical ones.
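A minimal first version of the workflow's step 2 might look like the sketch below. This is an assumption-laden outline, not dbt source: `zero_downtime_table` is this note's example name, and the drop-then-rename swap is the simplest baseline to iterate on, not yet an atomic zero-downtime swap:

```jinja
{% materialization zero_downtime_table, default %}
  {# 1. Gather state #}
  {% set target_relation = this.incorporate(type='table') %}
  {% set existing_relation = load_cached_relation(this) %}
  {% set temp_relation = make_temp_relation(target_relation) %}

  {# 2. Pre-hooks #}
  {% do run_hooks(pre_hooks) %}

  {# 3. Main SQL: build the new table alongside the old one #}
  {% call statement('main') %}
    CREATE TABLE {{ temp_relation }} AS ({{ sql }})
  {% endcall %}

  {# Simplest possible swap: drop old, rename new. No validation yet. #}
  {% if existing_relation is not none %}
    {% do adapter.drop_relation(existing_relation) %}
  {% endif %}
  {% do adapter.rename_relation(temp_relation, target_relation) %}

  {# 4. Post-hooks #}
  {% do run_hooks(post_hooks) %}

  {# 5. Grants and docs #}
  {% do apply_grants(target_relation, config.get('grants')) %}
  {% do persist_docs(target_relation, model) %}

  {# 6. Commit and return #}
  {% do adapter.commit() %}
  {{ return({'relations': [target_relation]}) }}
{% endmaterialization %}
```

Run the two-run test against this skeleton before adding validation; each later feature then slots into a step you have already verified.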
The temptation is to write the full materialization in one shot and test it against a real model. Resist this. Macro debugging is already harder than model debugging because you can’t see the intermediate state. Materializations add another layer of indirection. Incremental development and thorough testing are the only reliable approach.