dbt supports six constraint types: not_null, unique, primary_key, foreign_key, check, and custom. The complication is that enforcement depends entirely on your warehouse. Declaring a constraint in your YAML does not mean the warehouse will reject bad data. This mismatch trips up nearly every team adopting dbt model contracts for the first time.
Enforcement by Platform
Postgres enforces all five standard constraint types — not_null, unique, primary_key, foreign_key, and check (custom expressions depend on what you write). Every constraint you declare will actually reject invalid data at insert time. If you’re on Postgres, constraints behave exactly as you’d expect from a relational database perspective.
Snowflake only enforces not_null. Primary keys, unique constraints, and foreign keys are accepted in the DDL but treated as metadata only. They won’t reject bad data. Snowflake does use primary key and unique constraint metadata for query optimization (via the rely keyword), so declaring them isn’t pointless — but they’re not protective.
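As a sketch of what that looks like in DDL (table and constraint names here are illustrative, not from any real project), declaring a primary key with Snowflake’s RELY property lets the optimizer trust the constraint for join elimination, even though nothing is validated at write time:

```sql
-- Illustrative Snowflake DDL. The constraint is metadata only,
-- but RELY tells the optimizer it may assume the key holds.
ALTER TABLE analytics.orders
  ADD CONSTRAINT orders_pk PRIMARY KEY (order_id) RELY;

-- An insert with a duplicate order_id still succeeds:
-- Snowflake does not check primary keys on write.
```

Only declare RELY when your tests actually verify uniqueness; the optimizer will produce wrong results if the assumption is false.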
BigQuery follows a similar pattern to Snowflake: not_null is enforced, everything else is informational. Primary key and foreign key constraints exist as metadata for query optimization and documentation purposes.
Redshift mirrors the BigQuery behavior: not_null is enforced, other constraints are accepted in DDL but not validated at insert time.
Databricks enforces not_null and check constraints, but applies them via ALTER TABLE after table creation. If a constraint fails on Databricks, the table with the offending data still exists in the warehouse. This is a subtle but important difference from the fail-fast behavior of the dbt preflight check — the data lands before the constraint catches it.
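A rough sketch of the DDL sequence (table, column, and constraint names are illustrative) shows why the data lands before the constraint fires on Databricks:

```sql
-- Step 1: dbt creates and populates the table. Any rows that
-- violate the check expression are already written at this point.
CREATE OR REPLACE TABLE analytics.orders AS
SELECT * FROM staging.orders;

-- Step 2: the constraint is applied after the fact. If this
-- ALTER fails, the build errors out, but the table created above
-- still exists with the offending rows in it.
ALTER TABLE analytics.orders
  ADD CONSTRAINT valid_amount CHECK (amount >= 0);
```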
The Enforcement Matrix
| Constraint | Postgres | Snowflake | BigQuery | Redshift | Databricks |
|---|---|---|---|---|---|
| not_null | Enforced | Enforced | Enforced | Enforced | Enforced |
| unique | Enforced | Metadata only | Metadata only | Metadata only | Not supported |
| primary_key | Enforced | Metadata only | Metadata only | Metadata only | Not supported |
| foreign_key | Enforced | Metadata only | Metadata only | Metadata only | Not supported |
| check | Enforced | Not supported | Not supported | Not supported | Enforced (post-create) |
| custom | Varies | Varies | Varies | Varies | Varies |
The practical takeaway: not_null is the only constraint you can rely on across every major platform. Everything else requires verification through dbt tests.
The Defensive Pattern
In practice, you declare constraints for metadata and optimization, then pair them with dbt tests for actual validation:
```yaml
columns:
  - name: customer__id
    data_type: integer
    constraints:
      - type: not_null
      - type: primary_key
        warn_unenforced: false
    data_tests:
      - unique
      - not_null
```

Setting warn_unenforced: false tells dbt you understand the constraint isn’t enforced on your platform and you’re handling it with tests instead. Without this flag, dbt generates a warning on every build, which creates noise that obscures real issues.
The duplication is intentional. The constraint documents the intent (this column is a primary key) and enables query optimization. The test validates the reality (this column actually has unique, non-null values). On Postgres, both mechanisms enforce the same rule. On Snowflake or BigQuery, only the test provides protection.
Custom Constraints
Custom constraints let you attach platform-specific expressions. On Snowflake, for instance, you can apply a masking policy directly in the contract:
```yaml
constraints:
  - type: custom
    expression: "masking policy my_policy"
```

This is useful for embedding governance policies in your model definitions. The masking policy is applied through DDL, so it’s enforced by the warehouse regardless of the constraint enforcement limitations. Other common uses include row access policies, column-level encryption, and platform-specific data governance features.
Custom constraints are the escape hatch for anything dbt’s six standard constraint types don’t cover. They pass the expression directly to the DDL, so you’re responsible for ensuring the expression is valid for your target warehouse.
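To make the pass-through behavior concrete: the expression string is spliced verbatim into the column’s DDL after its type. A sketch of what the masking-policy contract above might compile to on Snowflake (table and column names are illustrative):

```sql
-- Sketch of the compiled DDL: the custom expression is appended
-- verbatim after the column type, so it must be valid Snowflake
-- syntax. A typo here surfaces as a DDL error at build time.
CREATE TABLE analytics.customers (
    customer_id integer NOT NULL,
    email varchar masking policy my_policy
);
```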
Why This Matters for Contract Strategy
Understanding enforcement behavior shapes how you approach contracts in practice:
- On Postgres: Constraints alone provide strong protection. Tests add defense-in-depth but aren’t strictly necessary for what constraints already cover.
- On Snowflake, BigQuery, Redshift: Constraints are documentation and optimization aids. Tests are your only enforcement mechanism for uniqueness, referential integrity, and value validation. Never rely on a constraint for data protection on these platforms.
- On Databricks: The post-creation enforcement model means you might briefly have bad data in tables before constraints catch it. Tests running in the same build will catch the same issues, but downstream models that execute between the table creation and constraint check could see the invalid data.
The three-layer validation model applies here with special force. Contracts handle structure (column existence and types). Constraints handle a subset of integrity checks (reliably only not_null). Tests handle everything else. Treating constraints as enforcement rather than metadata on cloud warehouses is one of the most common mistakes in dbt contract adoption.
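Put together, the three layers can live side by side in one model definition. A minimal sketch (model and column names are illustrative):

```yaml
models:
  - name: customers
    config:
      contract:
        enforced: true          # layer 1: structure (columns and types)
    columns:
      - name: customer_id
        data_type: integer
        constraints:
          - type: not_null      # layer 2: the one portable constraint
          - type: primary_key   # metadata/optimization on cloud warehouses
            warn_unenforced: false
        data_tests:
          - not_null            # layer 3: actual validation, everywhere
          - unique
```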