A dbt package is a dbt project — same files, same structure, same execution model, with `dbt_project.yml` as the only mandatory file. The difference is intent: a package is designed to be installed into someone else's project via `dbt deps`. This changes every design decision. In a regular project, hardcoding a schema name is fine. In a package, it breaks the moment another user installs it.
## Three Principles
Three principles separate a well-built package from a regular project:
- **Configurable.** No hardcoded database names, schema references, or table identifiers. Everything users might need to customize goes through `var()`. See dbt Packageable Model Patterns for implementation details.
- **Namespaced.** Model names include the package prefix to avoid collisions: `my_package__customers`, not `customers`. When a user has five packages installed, generic names like `customers` or `daily_summary` will collide.
- **Adapter-aware.** SQL that differs across warehouses uses `adapter.dispatch()` so the package works on Snowflake, BigQuery, Redshift, and others. The dispatch pattern handles this cleanly.
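As a sketch of the dispatch pattern (the macro name and per-adapter implementations here are illustrative, not from the source), a package-level macro delegates to adapter-specific variants via `adapter.dispatch()`:

```sql
{% macro date_week(column) %}
    {{ return(adapter.dispatch('date_week', 'my_package')(column)) }}
{% endmacro %}

{# Fallback implementation used by most adapters #}
{% macro default__date_week(column) %}
    date_trunc('week', {{ column }})
{% endmacro %}

{# BigQuery spells this function differently #}
{% macro bigquery__date_week(column) %}
    timestamp_trunc({{ column }}, week)
{% endmacro %}
```

At compile time dbt resolves `date_week()` to the variant matching the active adapter, falling back to `default__` when no adapter-specific version exists — which is what lets one package run unchanged across warehouses.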
These principles aren’t aspirational — they’re the minimum bar for a package that other people can actually use. Fivetran maintains over 100 packages that all follow this pattern. dbt Labs’ own packages do the same. The pattern is proven at scale.
## Standard Directory Structure
The layout used by dbt Labs and Fivetran for their own packages has become the community standard:
```
dbt-my_package/
├── dbt_project.yml        # Required: package configuration
├── packages.yml           # Upstream dependencies
├── macros/
│   ├── my_macro.sql
│   └── _macros.yml        # Macro documentation
├── models/
│   ├── base/
│   └── marts/
├── tests/generic/         # Custom generic tests
├── integration_tests/     # Sub-project for testing
│   ├── dbt_project.yml
│   ├── packages.yml       # References parent via local: ../
│   ├── seeds/             # Mock data
│   ├── models/
│   └── tests/
├── .github/workflows/     # CI configuration
├── README.md
├── CHANGELOG.md
└── LICENSE
```

The `integration_tests/` sub-directory is the most distinctive piece. Unlike regular projects that test in place, a package can't test itself in isolation — it's designed to be installed inside another project. The integration tests pattern solves this with a sub-project that installs the parent package as a local dependency.
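A minimal sketch of that sub-project's `packages.yml`, assuming the standard layout where the parent package sits one directory up:

```yaml
# integration_tests/packages.yml
packages:
  - local: ../   # install the parent package from the repo root
```

Running `dbt deps` inside `integration_tests/` then pulls in the package exactly as an end user would — just from the local checkout instead of the Hub.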
The model organization inside a package follows the same three-layer architecture as a regular project, but with a flatter structure. Most packages skip the intermediate layer entirely since they’re providing building blocks, not full-stack transformations.
## dbt_project.yml for Packages
The project file carries a few settings that matter specifically for packages:
```yaml
name: 'my_package'
version: '0.1.0'
require-dbt-version: [">=1.3.0", "<3.0.0"]
config-version: 2

models:
  my_package:
    +materialized: view

vars:
  my_package_schema: 'my_data'
  my_package_database: null
  my_package__some_model_enabled: true
```

### Version Bounds
The `require-dbt-version` range should include both dbt Core 1.x and Fusion 2.x (dbt 2.0). Setting the upper bound to `<3.0.0` covers both runtimes. Without this, users on incompatible dbt versions get cryptic compilation errors instead of a clear "incompatible version" message.
### Default Materialization
The default materialization should be `view`, not `table`. When someone runs `dbt deps && dbt run`, your package shouldn't create 30 physical tables in their warehouse. Users can always override to `table` for performance in their own `dbt_project.yml`:
```yaml
models:
  my_package:
    +materialized: table
```

This is the opposite of the guidance for regular projects, where tables are the recommended default. The difference is ownership: in your own project, you want debugging visibility. In someone else's project, you want a light footprint.
### Variable Defaults
Every configurable option should have a sensible default declared under `vars`. The naming convention is `{package_name}_{setting}` for schema/database settings and `{package_name}__{model}_enabled` for feature flags (double underscore to separate the package name from the setting name).
```yaml
vars:
  my_package_schema: 'my_data'                # Where source data lives
  my_package_database: null                   # null = use target.database
  my_package__daily_summary_enabled: true     # Toggle individual models
  my_package_events_identifier: 'events'      # Table name overrides
```

Declaring defaults here means users only need to override the settings that differ from their environment. A user whose events table is called `raw_events` sets one var; everyone else gets the default.
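To show how these defaults get consumed, here is a hypothetical base model (not from the source) that resolves its source location entirely from vars:

```sql
-- models/base/my_package__events.sql (hypothetical)
{% set src_database = var('my_package_database') or target.database %}
{% set src_schema = var('my_package_schema', 'my_data') %}
{% set src_table = var('my_package_events_identifier', 'events') %}

select *
from {{ src_database }}.{{ src_schema }}.{{ src_table }}
```

Note the `or target.database` fallback: because the var is declared as `null` in `dbt_project.yml`, `var()` returns `null` rather than falling through to a second argument, so the model falls back to the target's database explicitly.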
### What About the Package Name?
The `name` field in `dbt_project.yml` becomes the namespace for everything in your package. It prefixes model references, variable names, and source definitions. Choose it carefully because changing it after publication is a breaking change for every user.
Use snake_case, keep it short, and make it descriptive. `revenue_tools` is better than `my_company_revenue_analytics_package_v2`. The name should tell someone what the package does in two or three words.
## When a Regular Project Becomes a Package
Most packages start as code you’ve already written and battle-tested across your own projects. The packaging step is mostly about:
- Replacing hardcoded references with `var()` calls
- Prefixing model names with the package name
- Adding dispatch implementations for warehouse portability
- Building an integration test suite
- Writing documentation
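The first step above, extracting hardcoded references into vars, usually looks something like this (schema and table names are illustrative):

```sql
-- Before: hardcoded, breaks in any other warehouse
select * from analytics.raw_data.events

-- After: configurable, with defaults declared in dbt_project.yml
select *
from {{ var('my_package_schema', 'my_data') }}.{{ var('my_package_events_identifier', 'events') }}
```

Each hardcoded identifier becomes one var with a sensible default, so existing behavior is preserved while other environments gain an override point.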
You don’t need to build a package from scratch. Extract what already works, make it configurable, and verify it with tests. The Hub is a single PR away when you’re ready, but a Git package shared across your team’s projects is a perfectly good starting point.