Pre-commit hooks catch problems during local development, but they can be bypassed (a --no-verify flag, a commit from a CI bot, a developer who hasn’t installed the hooks). CI enforcement provides a second layer that runs on every pull request regardless of local setup.
Two tools handle this differently: dbt-project-evaluator treats documentation gaps as queryable data, while dbt_meta_testing enforces requirements through dbt’s native config system.
## dbt-project-evaluator
dbt-project-evaluator is maintained by dbt Labs and works by materializing your DAG as models, then running tests against those models. You install it as a package and run:
```shell
dbt build --select package:dbt_project_evaluator
```

For documentation, three models matter:
- `fct_undocumented_models` lists every model without a description. Query it directly to get a prioritized remediation list.
- `fct_undocumented_sources` does the same for sources — often the most neglected documentation in a project.
- `fct_documentation_coverage` calculates coverage percentages by model type (view, table, incremental), giving you a breakdown of where gaps are concentrated.
By default, these models produce warnings when coverage is low. To make undocumented models actually fail your CI pipeline, set the coverage target:
```yaml
vars:
  dbt_project_evaluator:
    documentation_coverage_target: 100
```

Or control severity across all dbt-project-evaluator rules at once with the `DBT_PROJECT_EVALUATOR_SEVERITY` environment variable. Setting it to `error` means any rule violation blocks the build.
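In CI, that environment variable can be set on the step that runs the package. The following GitHub Actions fragment is an illustrative sketch — the step name and surrounding job setup are assumptions, not part of the package:

```yaml
# Illustrative CI step — step name and surrounding job setup are assumptions
- name: Enforce dbt-project-evaluator rules
  run: dbt build --select package:dbt_project_evaluator
  env:
    DBT_PROJECT_EVALUATOR_SEVERITY: error  # any rule violation fails this step
```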
## Why materializing coverage matters
The key difference from tools like dbt-coverage is that dbt-project-evaluator creates actual models in your warehouse. This means you can:
- Query `fct_undocumented_models` to build dashboards tracking documentation debt
- Join coverage data with other project metadata (who owns the model, when it was last modified)
- Use dbt tests on the evaluator models themselves for fine-grained alerting
- Track coverage trends by snapshotting the evaluator output over time
This is more powerful than a binary pass/fail CI check, especially for teams managing documentation debt incrementally. You can see exactly which models are undocumented, sort by business criticality, and assign remediation work.
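A remediation query along those lines might look like the sketch below. The `resource_name` column, the `analytics` schema, and the `model_owners` join table are all assumptions for illustration — check the actual column names the package materializes in your warehouse:

```sql
-- Sketch: list undocumented models with an owner to assign remediation work.
-- Column names, schema, and the model_owners table are assumptions.
select
    u.resource_name,
    o.owner_email
from analytics.fct_undocumented_models as u
left join analytics.model_owners as o
    on o.model_name = u.resource_name
order by u.resource_name
```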
## dbt Cloud built-in coverage
dbt Cloud users get a subset of this for free. The Project Recommendations page displays documentation and test coverage percentages using dbt-project-evaluator rules under the hood. No additional setup required — just check the dashboard. This won’t fail your CI pipeline, but it gives visibility into coverage trends without installing anything.
## dbt_meta_testing
dbt_meta_testing takes a different approach: instead of materializing models about your project, it enforces documentation requirements through dbt’s native configuration system. You declare what’s required directly in dbt_project.yml:
```yaml
models:
  your_project:
    marts:
      +required_docs: true
```

Then run the check:
```shell
dbt run-operation required_docs
```

This verifies that all models in `marts/` have:
- A model-level description
- Descriptions for every column listed in the YAML
- All warehouse columns present in the YAML (no missing columns)
That last point is critical. It catches the case where someone adds a column in SQL but forgets to add it to the schema file — the documentation looks complete until you compare it against what’s actually in the warehouse.
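For example, suppose a model's SQL selects an `order_total` column that the schema file never mentions. The file below is a hypothetical illustration of the failure case:

```yaml
# models/marts/schema.yml — hypothetical example
models:
  - name: orders
    description: One row per order
    columns:
      - name: order_id
        description: Primary key
      # The SQL also selects `order_total`, but it is missing here,
      # so `dbt run-operation required_docs` flags the model.
```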
## Folder-level granularity
The power of dbt_meta_testing is its alignment with dbt’s folder structure. You can set different requirements by layer:
```yaml
models:
  your_project:
    staging:
      +required_docs: false   # Don't require docs for base/staging
    intermediate:
      +required_docs: false   # Internal implementation details
    marts:
      +required_docs: true    # External-facing models must be documented
```

This graduated approach reflects reality: mart models that analysts query directly need thorough documentation, while intermediate models that only other models reference can be documented more lightly. Requiring 100% documentation everywhere creates friction that slows development without proportional benefit.
## Choosing between them
| Feature | dbt-project-evaluator | dbt_meta_testing |
|---|---|---|
| Maintained by | dbt Labs | Community |
| Enforcement mechanism | Models + tests | Config + run-operation |
| Coverage as queryable data | Yes | No |
| Folder-level granularity | Limited (via test selectors) | Native (via dbt config) |
| Checks warehouse columns | No | Yes |
| Works with dbt Cloud | Yes (built-in recommendations) | Yes |
| Scope | Documentation + testing + DAG structure | Documentation + testing only |
For most teams, dbt-project-evaluator is the better starting point because it’s officially maintained and covers more ground (DAG structure, testing coverage, naming conventions). Add dbt_meta_testing when you need strict folder-level documentation requirements or warehouse-column validation that dbt-project-evaluator doesn’t provide.
Both complement pre-commit hooks rather than replacing them. Pre-commit hooks give developers fast feedback during development. CI enforcement catches anything that slips through — including changes from team members who haven’t configured their local hooks.
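Running both checks in one CI job might look like the GitHub Actions sketch below. The workflow structure, step ordering, and the adapter placeholder are assumptions — adapt them to your warehouse and credentials setup:

```yaml
# Illustrative CI workflow — names, structure, and setup steps are assumptions
name: dbt-docs-enforcement
on: pull_request
jobs:
  docs-coverage:
    runs-on: ubuntu-latest
    env:
      DBT_PROJECT_EVALUATOR_SEVERITY: error  # evaluator violations block the build
    steps:
      - uses: actions/checkout@v4
      - run: pip install dbt-core dbt-<adapter>  # substitute your warehouse adapter
      - run: dbt deps
      - run: dbt build --select package:dbt_project_evaluator
      - run: dbt run-operation required_docs
```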