A custom materialization is a macro with a specific structure. dbt’s built-in table, view, incremental, and ephemeral materializations all follow the same six-step skeleton.
The Six-Step Structure
Every materialization — built-in or custom — follows this template:
```sql
{%- materialization my_custom_mat, adapter='default' -%}

  {# 1. SETUP - Prepare relations #}
  {% set target_relation = this.incorporate(type='table') %}
  {% set existing_relation = load_cached_relation(this) %}

  {# 2. PRE-HOOKS #}
  {{ run_hooks(pre_hooks) }}

  {# 3. MAIN SQL - Build the relation #}
  {% call statement('main') %}
    {{ sql }}
  {% endcall %}

  {# 4. POST-HOOKS #}
  {{ run_hooks(post_hooks) }}

  {# 5. CLEANUP - Grants, docs, permissions #}
  {% do apply_grants(target_relation, grant_config, should_revoke) %}
  {% do persist_docs(target_relation, model) %}

  {# 6. COMMIT AND RETURN #}
  {% do adapter.commit() %}
  {{ return({'relations': [target_relation]}) }}

{%- endmaterialization -%}
```

The materialization block replaces the macro block you’d use for a regular macro. The second argument — adapter='default' or adapter='bigquery' — scopes the materialization to a specific adapter. Use default when your logic is warehouse-agnostic, or specify an adapter when it relies on warehouse-specific DDL.
The structure is rigid for a reason. dbt’s runtime expects certain things to happen at certain points. Pre-hooks fire before the main SQL. Post-hooks fire after. Grants and docs get applied at cleanup. If you skip steps or reorder them, you get surprising failures or silently missing permissions.
Key Context Objects
Four objects are available inside every materialization:
this is the target relation — the fully qualified database.schema.model_name path where the model will be materialized. Use this.incorporate(type='table') to explicitly set the relation type. This matters because the same model name might have previously been materialized as a view, and you need to handle that mismatch.
sql contains the compiled SELECT statement from the model file. This is the user’s SQL after all Jinja rendering is complete — refs resolved, macros expanded, is_incremental() evaluated. Your materialization wraps this SQL in whatever DDL makes sense (CREATE TABLE AS, INSERT INTO, MERGE, etc.).
config holds model configuration — anything the user passed in the config() block. Access values with config.get('my_setting', default_value). This is how you make materializations configurable: the user defines behavior in their model, and the materialization reads it.
adapter provides database-specific methods for manipulating relations. This is your primary interface for DDL operations.
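To make the config flow concrete, here is a sketch of the model side; the batch_size key and my_custom_mat name are illustrative, not built into dbt:

```sql
-- models/orders.sql: the user opts into the materialization and passes a setting
{{ config(materialized='my_custom_mat', batch_size=5000) }}
SELECT * FROM {{ ref('stg_orders') }}
```

Inside the materialization, {% set batch_size = config.get('batch_size', 10000) %} picks that value up, falling back to 10000 when the model doesn’t set it.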
Adapter Methods You’ll Use Constantly
The adapter object and a few helper functions handle most of what a materialization needs:
```sql
{# Check if the relation already exists in dbt's cache #}
{% set existing = load_cached_relation(this) %}

{# Get column info from a relation #}
{% set columns = adapter.get_columns_in_relation(target_relation) %}

{# Drop a relation #}
{% do adapter.drop_relation(old_relation) %}

{# Rename a relation #}
{% do adapter.rename_relation(temp_relation, target_relation) %}

{# Create a temporary relation name based on the target #}
{% set temp_relation = make_temp_relation(target_relation) %}

{# Create a backup relation name based on the target #}
{% set backup_relation = make_backup_relation(target_relation) %}

{# Commit the transaction #}
{% do adapter.commit() %}
```

load_cached_relation() returns the relation if it exists in dbt’s metadata cache, or none if it doesn’t. This is how you detect first-run versus subsequent runs — a critical distinction because your materialization needs to handle both cases. On first run, there’s nothing to drop, rename, or validate against. On subsequent runs, you need to decide what to do with the existing table.
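Putting load_cached_relation() to work, the first-run branch usually looks something like this sketch (the CREATE TABLE AS wrapping is one plausible choice, not the only one):

```sql
{% set existing_relation = load_cached_relation(this) %}

{% if existing_relation is none %}
  {# First run: nothing exists yet, so create the table directly #}
  {% call statement('main') %}
    CREATE TABLE {{ target_relation }} AS ({{ sql }})
  {% endcall %}
{% else %}
  {# Subsequent runs: build into a temp relation, then swap it in #}
  {% set temp_relation = make_temp_relation(target_relation) %}
  {% call statement('main') %}
    CREATE TABLE {{ temp_relation }} AS ({{ sql }})
  {% endcall %}
  {% do adapter.drop_relation(existing_relation) %}
  {% do adapter.rename_relation(temp_relation, target_relation) %}
{% endif %}
```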
The statement() block executes SQL and registers it with dbt’s logging system. Use named statements like statement('main') for the primary build and statement('validate', fetch_result=True) when you need to retrieve query results. The fetch_result=True flag stores the result so you can access it later with load_result('validate').
```sql
{% call statement('validate', fetch_result=True) %}
  SELECT COUNT(*) AS row_count FROM {{ temp_relation }}
{% endcall %}

{% set row_count = load_result('validate')['data'][0][0] %}
```

This pattern — execute a query, fetch the result, use it for conditional logic — is how materializations implement validation steps, row counts, schema comparisons, or any logic that depends on the state of the data.
Handling Type Mismatches
One pattern you’ll repeat across almost every custom materialization: checking whether the existing relation matches the type you expect. If a model was previously materialized as a view and you switch it to a custom table materialization, the existing relation is a view, not a table. Trying to rename or swap it as if it were a table will fail.
```sql
{% if existing_relation is not none and existing_relation.type != 'table' %}
  {% do adapter.drop_relation(existing_relation) %}
  {% set existing_relation = none %}
{% endif %}
```

Drop the mismatched relation and treat the situation as a first run. This is defensive code that prevents a class of errors that otherwise only appear when someone changes a model’s materialization type in a project that’s been running for months.
Where Materializations Live
Place materialization files in macros/materializations/ in your dbt project. The file name should match the materialization name — zero_downtime_table.sql for materialization zero_downtime_table. dbt discovers them automatically; no registration step is needed.
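Using the example name above, the layout is simply:

```
macros/
  materializations/
    zero_downtime_table.sql
```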
You can scope a materialization to a specific adapter by setting adapter='bigquery' (or any other adapter name). If you want a default implementation that works across adapters with specific overrides, use the dispatch pattern: create a default implementation and adapter-specific variants that dbt selects automatically at runtime.
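As a sketch, the default-plus-override pair can live in one file, and dbt picks whichever variant matches the active adapter at runtime (bodies trimmed to the main statement for brevity; the BigQuery variant uses that warehouse's CREATE OR REPLACE TABLE):

```sql
{# Fallback for any adapter without a specific variant #}
{%- materialization zero_downtime_table, adapter='default' -%}
  {% set target_relation = this.incorporate(type='table') %}
  {% call statement('main') %}
    CREATE TABLE {{ target_relation }} AS ({{ sql }})
  {% endcall %}
  {{ return({'relations': [target_relation]}) }}
{%- endmaterialization -%}

{# Selected automatically when the target is BigQuery #}
{%- materialization zero_downtime_table, adapter='bigquery' -%}
  {% set target_relation = this.incorporate(type='table') %}
  {% call statement('main') %}
    CREATE OR REPLACE TABLE {{ target_relation }} AS ({{ sql }})
  {% endcall %}
  {{ return({'relations': [target_relation]}) }}
{%- endmaterialization -%}
```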
How This Connects to Macros
Materializations are technically macros — they use Jinja, they live in .sql files, and they access the same context objects. The difference is the materialization block declaration and the implicit contract with dbt’s runtime. A macro generates SQL fragments that get stitched into a model. A materialization controls the entire lifecycle of how a model gets built, from checking existing state through creating the final relation.
The same macro best practices apply: keep logic readable, use {{ log() }} for debugging, and check target/compiled/ to see what SQL your materialization actually generated. The compiled output is your best debugging tool because it shows exactly what dbt sent to the warehouse.
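For instance, a log call dropped into a materialization (the message text here is illustrative):

```sql
{% do log('building ' ~ target_relation ~ ', existing relation: ' ~ existing_relation, info=True) %}
```

With info=True the message prints to the console during the run; without it, it only appears in dbt’s log file.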