Notes on the human side of dbt documentation: audience mismatch, description writing patterns, and documentation quality as infrastructure for AI tools. For generating documentation (scaffolding, AI tools, RAG), see the AI-powered documentation hub. For keeping documentation accurate over time, see the documentation freshness hub.
Reading Order
- dbt Documentation Audience Mismatch — Why dbt docs go unread: teams write for engineers, not for analysts and business users. Covers the interface, scope, and maintenance problems.
- dbt Model Description Writing Patterns — A three-question framework for model descriptions (source system, grain, exclusions), column description patterns (units, timezones, relationships), source descriptions, and when to use `description` vs `meta`.
- Documentation Quality Determines AI Usefulness — Why documentation quality sets the ceiling on what AI tools can do. Covers the Roche chatbot failure, the bidirectional docs-to-AI feedback loop, and results from teams that invested in enforcement.
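As a sketch of the three-question framework (source system, grain, exclusions), here is what a model description might look like in a `schema.yml`. All model and column names are illustrative, not taken from the linked articles:

```yaml
# models/marts/schema.yml — hypothetical example; names are illustrative
version: 2

models:
  - name: orders
    description: >
      One row per completed order (grain), sourced from the Shopify
      orders API (source system). Excludes test orders and orders
      cancelled within 24 hours of placement (exclusions).
    columns:
      - name: order_total_usd
        description: "Order total in USD, converted at the order-date exchange rate."
      - name: ordered_at
        description: "Timestamp the order was placed, in UTC."
```

Note that each description answers a question an analyst would otherwise have to ask: where the data comes from, what one row means, and what is missing.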
Delivery Mechanisms
Once you have good descriptions, getting them to the right people matters as much as the content:
- dbt Persist Docs for Warehouse Comments — Push descriptions to where analysts already work (BigQuery schema panel, Snowflake column comments, BI tool schema browsers). Often the single highest-leverage fix for adoption.
- dbt Doc Block Syntax and Reuse Patterns — Write descriptions once, reference everywhere. Particularly valuable when paired with persist_docs for consistent descriptions across models and warehouse comments.
- dbt Docs Site Customization Options — The `__overview__` doc block, DAG node colors, and hiding intermediate models. Small changes that make the default docs site more navigable for non-technical users.
- Alternatives to Default dbt Docs — When the default frontend isn’t enough: Dagster’s Next.js replacement, data catalogs, dbt Cloud Catalog.
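To illustrate how the first two delivery mechanisms combine: `persist_docs` and `{% docs %}` blocks are standard dbt features, but the project name, doc block name, and file paths below are illustrative:

```yaml
# dbt_project.yml — push descriptions into warehouse comments
models:
  my_project:          # illustrative project name
    +persist_docs:
      relation: true   # persist model descriptions as table/view comments
      columns: true    # persist column descriptions as column comments
```

```jinja
{# models/docs.md — define the description once #}
{% docs order_status %}
Current order state: one of placed, shipped, completed, returned.
{% enddocs %}
```

```yaml
# schema.yml — reference it wherever the column appears
columns:
  - name: status
    description: '{{ doc("order_status") }}'
```

With both enabled, updating the doc block updates every referencing model's docs and, on the next run, the warehouse comments analysts see in their query tool.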