dbt packages and dbt Mesh solve different problems, though they overlap at the edges. The confusion is understandable — both involve sharing dbt code or data between projects. The distinction is fundamental: packages share logic, Mesh shares data products.
Packages: Code Sharing
When someone installs your package, they get the full source code — macros, models, tests. The code runs inside their project and compiles against their warehouse. The user’s dbt runtime processes everything as if it were part of their own project.
```yaml
# packages.yml — the code lives in YOUR project after dbt deps
packages:
  - package: fivetran/stripe
    version: [">=0.12.0", "<1.0.0"]
```

After `dbt deps`, the Fivetran Stripe package’s models and macros are physically present in the user’s dbt_packages/ directory. When dbt runs, it compiles those models against the user’s warehouse credentials, using the user’s source data. The package author has no visibility into how the package is used, what data it processes, or whether it succeeds.
Good fits for the package model:
- Open-source source packages (Fivetran, Airbyte) that transform raw connector output into usable models
- Internal utility libraries — shared macros, custom generic tests, schema generation overrides
- Reusable model templates that work across different organizations’ data
- Generic testing packages (dbt-utils, dbt-expectations) that extend dbt’s built-in testing
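As a concrete sketch of the utility-library case: a macro shipped in an internal package compiles inline wherever consumers call it. The macro below is illustrative (it mirrors a common example from dbt’s own docs), not part of any specific package:

```sql
-- macros/cents_to_dollars.sql — shipped in a shared internal utils package
{% macro cents_to_dollars(column_name, precision=2) %}
    round({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}

-- A consuming project's model calls it like any local macro:
-- SELECT {{ cents_to_dollars('amount_cents') }} AS amount_usd
-- FROM {{ ref('stg_payments') }}
```

Because the macro compiles inside each consumer’s project, every team’s warehouse runs its own copy of the logic — exactly the code-sharing model described above.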
Mesh: Data Product Sharing
dbt Mesh (via dependencies.yml and cross-project refs) is data product sharing. Teams reference each other’s published models without installing source code. A marketing team can query ref('finance', 'mrt__finance__monthly_revenue') without knowing how that model is built, what sources it uses, or what intermediate steps exist.
```yaml
# dependencies.yml — you reference their DATA, not their code
projects:
  - name: finance
    dbt_cloud:
      project_id: 12345
```

```sql
-- A model in the marketing project
SELECT *
FROM {{ ref('finance', 'mrt__finance__monthly_revenue') }}
WHERE region = 'EMEA'
```

The finance team’s model runs in the finance project. The marketing team gets a table reference, not source code. The finance team controls access via model access levels (public, protected, private) and can change their internal implementation without affecting consumers — as long as the public model’s schema stays stable.
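A sketch of how the finance team might declare that public, contracted model on their side — the column names and data types here are assumptions for illustration:

```yaml
# models/marts/_finance__models.yml — illustrative access + contract config
models:
  - name: mrt__finance__monthly_revenue
    access: public          # consumable via cross-project ref
    config:
      contract:
        enforced: true      # the build fails if the declared schema drifts
    columns:
      - name: month
        data_type: date
      - name: region
        data_type: varchar
      - name: revenue_usd
        data_type: numeric
```

With the contract enforced, a change that alters the declared columns fails at build time in the finance project — before any consumer sees a broken interface.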
Mesh requires dbt Cloud Enterprise: cross-project refs and model access enforcement aren’t available in dbt Core or on dbt Cloud Team plans.
Good fits for the Mesh model:
- Cross-team data products where one team curates models that other teams consume
- Domain boundaries — finance, marketing, product, and platform teams each own their models
- Data contracts — published models with enforced schemas that consumers depend on
- Large organizations where a monolithic dbt project has become unmanageable
The Decision Framework
| Question | If yes → |
|---|---|
| Are you sharing reusable logic (macros, generic tests, model templates)? | Package |
| Are you sharing data products (curated models with defined contracts)? | Mesh |
| Does the consumer need the source code? | Package |
| Should the producer control how the model is built and accessed? | Mesh |
| Is this open-source, shared with the community? | Package |
| Is this between teams within one organization? | Could be either — Mesh if on dbt Cloud Enterprise, package otherwise |
| Do consumers need to customize the models? | Package |
| Should consumers get a stable interface without implementation details? | Mesh |
The clearest distinction: packages are libraries, Mesh is an API. Installing a package is like adding a Python library to your project — you get the code, you run it, you can read and modify it. Using Mesh is like calling an API — you get data back through a defined interface, and the implementation is someone else’s concern.
Where They Overlap
The gray area is internal model sharing between teams. A platform team that builds curated base models used by every other team could distribute them as:
- A package — every team installs the models, runs them in their own project, and gets the code. The platform team releases versions and teams upgrade at their own pace.
- A Mesh project — the platform team runs the models centrally and publishes them as data products. Other teams reference the output without running the transformation logic.
The package approach gives teams more autonomy (they can pin versions, override configurations, even fork if needed). The Mesh approach gives the platform team more control (they know exactly what’s running, can enforce contracts, and can change internals without a version bump).
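To make the autonomy point concrete, here is how a consuming team might pin and reconfigure a package entirely from its own project files. The variable names follow Fivetran’s usual `<source>_database`/`<source>_schema` convention but should be treated as assumptions — check the package’s README:

```yaml
# packages.yml — pin an exact version so upgrades are a deliberate choice
packages:
  - package: fivetran/stripe
    version: 0.12.1

# dbt_project.yml — override the package's defaults from the consuming project
vars:
  stripe_database: raw        # assumption: var names per the package README
  stripe_schema: stripe
models:
  stripe:                     # the package's project name
    +schema: stripe_staging   # route its models into our own schema
```

None of this requires coordination with the package author — which is precisely the autonomy a Mesh producer gives up on behalf of a single, centrally controlled deployment.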
The dbt-meshify Tool
For teams exploring Mesh, the dbt-meshify CLI tool helps split a monolithic project into interconnected projects. It analyzes your existing DAG, identifies natural domain boundaries, and generates the dependencies.yml and access configurations needed for Mesh.
This is useful because the hardest part of adopting Mesh isn’t the technical configuration — it’s deciding where to draw the project boundaries. dbt-meshify uses your model relationships and metadata to suggest boundaries based on actual usage patterns rather than organizational assumptions.
They’re Not Mutually Exclusive
A mature dbt setup often uses both:
- Packages for shared utilities — macros, custom tests, schema generation overrides. Every project in the organization installs the same internal utils package.
- Mesh for data product sharing — the finance team publishes revenue models, the marketing team consumes them without running finance transformations.
The package pattern handles code reuse. Mesh handles data product boundaries. They solve different coordination problems and complement each other in organizations large enough to need both.