The dispatch pattern routes macro calls to adapter-specific implementations automatically. But the search order — where dbt looks for those implementations and in what sequence — is configurable. This configuration lives in dbt_project.yml and becomes essential when you’re using packages, overriding package behavior, or adding warehouse support that a package doesn’t natively provide.
The Search Order
When you call adapter.dispatch('my_macro'), dbt looks for implementations in this default order:
1. {adapter}__my_macro (e.g., bigquery__my_macro)
2. {parent_adapter}__my_macro (for adapters that inherit from others)
3. default__my_macro
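To make that order concrete, here is a minimal sketch using a hypothetical current_ts macro. On BigQuery, dispatch resolves to bigquery__current_ts; every other adapter falls through to default__current_ts:

```sql
{# macros/current_ts.sql -- hypothetical macro, illustrative only #}
{% macro current_ts() %}
    {{ return(adapter.dispatch('current_ts')()) }}
{% endmacro %}

{# Found first when the target adapter is BigQuery #}
{% macro bigquery__current_ts() %}
    CURRENT_TIMESTAMP()
{% endmacro %}

{# Fallback for every other adapter #}
{% macro default__current_ts() %}
    current_timestamp
{% endmacro %}
```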
This happens within a single namespace — your project or a specific package. The dispatch config in dbt_project.yml controls which namespaces dbt searches and in what order:
```yaml
dispatch:
  - macro_namespace: dbt_utils
    search_order:
      - my_project   # Check your project first
      - spark_utils  # Compatibility shim for Spark/Databricks
      - dbt_utils    # Original package
```

The macro_namespace says "when resolving macros from dbt_utils" and search_order says "look in these projects, in this order." dbt checks each project in the list for an adapter-specific implementation before moving to the next project.
Overriding Package Behavior
The most common use case: a package macro doesn’t work quite right for your setup. Maybe dbt_utils.generate_surrogate_key uses a hashing algorithm you need to change, or a date macro needs a timezone adjustment for your warehouse configuration.
Put your project first in the search order:
```yaml
dispatch:
  - macro_namespace: dbt_utils
    search_order:
      - my_project
      - dbt_utils
```

Then write an adapter-specific override in your project's macros/ directory:
```sql
-- macros/bigquery__generate_surrogate_key.sql
{% macro bigquery__generate_surrogate_key(field_list) %}
    TO_HEX(SHA256(CONCAT(
        {% for field in field_list %}
        COALESCE(CAST({{ field }} AS STRING), '_null_')
        {{ ',' if not loop.last }}
        {% endfor %}
    )))
{% endmacro %}
```

Because my_project appears before dbt_utils in the search order, dbt uses your implementation on BigQuery while still falling back to the package's default for other adapters.
This is a powerful escape hatch, but use it deliberately. Every override is a maintenance burden: when the package updates, your override doesn't pick up the fix automatically. Document why you overrode, and on each package upgrade check whether an upstream fix lets you delete the override.
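One lightweight convention for that documentation (a sketch, not a dbt feature) is a comment header at the top of every override file:

```sql
{# OVERRIDE of dbt_utils.generate_surrogate_key.
   Reason: compliance requires SHA256 instead of the package's default hash.
   Re-check this file on every dbt_utils upgrade; delete it if upstream
   makes the hash configurable. #}
```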
Adding Databricks Support via spark_utils
Many dbt packages were written before Databricks was common. They have implementations for BigQuery, Snowflake, Redshift, and a default — but nothing for Spark/Databricks. The spark_utils package fills this gap by providing Spark-compatible implementations of common macros.
Add it to your packages.yml:
```yaml
packages:
  - package: dbt-labs/spark_utils
    version: [">=0.3.0", "<0.4.0"]
```

Then configure the search order so dbt checks spark_utils before falling back to the package's default:
```yaml
dispatch:
  - macro_namespace: dbt_utils
    search_order:
      - my_project
      - spark_utils
      - dbt_utils
```

Without this configuration, Databricks users hit the default__ implementation, which is usually written for PostgreSQL-style syntax and fails on Spark. With spark_utils in the search order, dbt finds the spark__ implementations that handle Databricks' SQL dialect correctly.
Namespace in adapter.dispatch()
When writing your own dispatched macros, the second argument to adapter.dispatch() controls which namespace the macro belongs to:
```sql
{% macro my_macro(args) %}
    {{ return(adapter.dispatch('my_macro', 'my_project')(args)) }}
{% endmacro %}
```

The 'my_project' string ties this macro to your project's namespace. This matters because:
- Other projects can override it. If someone installs your package, they can add a dispatch entry for your namespace to their own dbt_project.yml and provide their own adapter-specific implementations.
- The search order applies. Without the namespace argument, dbt only searches the current project for implementations. With it, dbt respects the full dispatch configuration from dbt_project.yml.
If you’re writing macros for a package, always include the namespace. If you’re writing project-internal macros that nobody else will override, the namespace is optional but still good practice for consistency.
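For example, a consumer of your package could claim the first slot in your namespace's search order (project names here are illustrative):

```yaml
# In the consuming project's dbt_project.yml
dispatch:
  - macro_namespace: my_project   # the namespace passed to adapter.dispatch()
    search_order:
      - consumer_project          # their overrides win
      - my_project                # your implementations as fallback
```

Any bigquery__my_macro defined in consumer_project now shadows yours on BigQuery, while other adapters still resolve to your implementations.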
Multiple Namespace Configurations
You can configure dispatch for multiple packages independently:
```yaml
dispatch:
  - macro_namespace: dbt_utils
    search_order:
      - my_project
      - spark_utils
      - dbt_utils
  - macro_namespace: dbt_expectations
    search_order:
      - my_project
      - dbt_expectations
```

Each entry controls a specific package's resolution. This lets you override dbt_utils macros without affecting how dbt_expectations resolves its macros.
When You Don’t Need Dispatch Configuration
If you’re not using packages with dispatch macros, or you’re only running on a single warehouse with no plans to change, you don’t need any of this. The dispatch configuration is specifically for:
- Overriding package macro behavior
- Adding warehouse support to packages that lack it
- Writing packages intended for multi-warehouse use
For project-internal macros on a single database, plain macros without dispatch are simpler and easier to maintain. Add dispatch when you have a concrete reason, not preemptively. See SQL Dialect Divergences Across Warehouses for when those reasons tend to appear.