The dbt package ecosystem: how it works and what's available

The dbt package system is one of those things that feels simple until it isn’t. You add a few lines to packages.yml, run dbt deps, and suddenly you have access to hundreds of macros and tests you didn’t have to write yourself. But when two packages depend on different versions of dbt-utils, or a macro silently breaks on your warehouse, the simplicity disappears fast.

Over 400 packages now live in the dbt ecosystem, maintained by dbt Labs, Fivetran, and a growing community of independent contributors. This guide covers how the package system actually works, what’s worth installing, and what to watch out for.

How packages actually work

Two configuration files

The traditional packages.yml is where most teams declare their dependencies. It supports Jinja rendering, which means you can use env_var() to inject tokens for private repositories.
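A minimal sketch of that pattern, assuming a hypothetical private repository and a GIT_TOKEN environment variable:

```yaml
packages:
  # Token injected at parse time via Jinja; never commit the token itself
  - git: "https://{{ env_var('GIT_TOKEN') }}@github.com/acme/dbt-internal-utils.git"
    revision: v1.0.0
```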

The newer dependencies.yml was introduced for dbt Mesh. It consolidates package dependencies with cross-project refs, but it doesn’t support Jinja rendering. If you need private packages, stick with packages.yml. If you’re working with cross-project references in dbt Mesh, use dependencies.yml. They serve different purposes.
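For contrast, a dependencies.yml can declare both Hub packages and upstream Mesh projects side by side (the project name here is hypothetical), with no Jinja anywhere:

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]

projects:
  - name: finance_platform  # upstream dbt Mesh project for cross-project ref()
```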

What happens when you run dbt deps

Running dbt deps resolves all declared packages and installs them into a dbt_packages/ directory. Since dbt 1.7, a package-lock.yml file is automatically generated, recording exact resolved versions (including Git commit SHAs). Commit this file to version control for reproducible builds.

A few useful flags:

  • dbt deps --upgrade forces a fresh resolution, ignoring the lock file
  • dbt deps --lock updates the lock file without installing
  • dbt deps --add-package adds a package from the CLI with a --source flag for hub, git, or local
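For example, adding a pinned Hub package from the CLI might look like this sketch (the package and version are illustrative):

```shell
# Appends the entry to packages.yml and re-resolves dependencies
dbt deps --add-package dbt-labs/dbt_utils@1.3.0 --source hub
```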

Three installation types

Hub packages are the recommended default. They’re installed from hub.getdbt.com with semantic versioning:

packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]

Hub packages have one key advantage over the alternatives: automatic duplicate dependency resolution. If two packages both depend on dbt-utils, dbt reconciles the versions automatically.

Git packages install from any Git repository via HTTPS or SSH, pinned to a tag, branch, or commit SHA:

packages:
  - git: "https://github.com/org/my-package.git"
    revision: v0.2.0

A newer private: key supports native authentication through GitHub, GitLab, or Azure DevOps integration, so you don’t need to embed tokens. Git packages can’t deduplicate transitive dependencies, which makes version conflicts more likely.
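Under that scheme, a private package declaration might look like this sketch (repository name hypothetical), with credentials handled by the configured Git integration rather than the YAML:

```yaml
packages:
  # No token in the file; auth comes from the GitHub/GitLab/Azure DevOps integration
  - private: acme/dbt-internal-utils
    revision: v0.2.0
```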

Local packages install from filesystem paths via symlinks. They’re useful for monorepos and for testing packages before publishing:

packages:
  - local: ../my-local-package

The dbt Hub

The Hub at hub.getdbt.com hosts over 400 packages organized by publisher namespace (dbt-labs/, fivetran/, elementary-data/, calogica/). It’s operated by dbt Labs as a community courtesy, but it explicitly does not certify the integrity or security of listed packages. Think of it as a registry, not a seal of approval.

Behind the scenes, a script called hubcap runs hourly against GitHub to detect new package versions from releases. There’s no formal category taxonomy: discovery relies on search and browsing by publisher.

The Fusion compatibility badge is a recent addition, appearing on packages verified to work with dbt’s new Rust-based engine (v2.0.0+). If you’re planning a Fusion migration, these badges save you time.

What’s available: a practical tour

Utility packages

dbt-utils is the foundation. With 50+ macros for SQL generation, generic tests, and cross-database compatibility, nearly every dbt project depends on it. Current version is 1.3.3, compatible with dbt Core 1.x and Fusion 2.x. If you only install one package, make it this one. I cover its macros in detail in my guide to essential dbt macros.
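As a taste of what it provides, a staging model might lean on two of its macros (model and column names here are illustrative):

```sql
select
    -- hash key built from one or more columns
    {{ dbt_utils.generate_surrogate_key(['order_id', 'order_date']) }} as order_key,
    -- every column from the upstream model except the excluded ones
    {{ dbt_utils.star(from=ref('stg_orders'), except=['_loaded_at']) }}
from {{ ref('stg_orders') }}
```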

dbt-codegen generates boilerplate: source YAML from database metadata, base model SQL, and model YAML documentation. It’s essential for bootstrapping new projects and saves hours of manual work.
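A typical bootstrap session, assuming a hypothetical raw schema and an orders table:

```shell
# Draft source YAML for everything in the raw schema, including columns
dbt run-operation generate_source --args '{"schema_name": "raw", "generate_columns": true}'

# Draft a base model for one table; paste the output into models/staging/
dbt run-operation generate_base_model --args '{"source_name": "raw", "table_name": "orders"}'
```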

dbt-date provides date dimension generation, fiscal calendars, and timezone utilities. You’ll often get it as a transitive dependency of dbt-expectations.

Data quality and testing

dbt-expectations ports the Great Expectations framework to dbt with 40+ test macros covering value ranges, statistical tests, date completeness, regex patterns, and more. I wrote a dedicated guide to dbt-expectations if you want the full picture. The Hub listing is now under metaplane/dbt_expectations (v0.10.10).
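A representative test, using the dbt 1.8+ data_tests: key (model and column names are illustrative):

```yaml
models:
  - name: orders
    columns:
      - name: order_total
        data_tests:
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 100000
```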

Elementary takes a different approach: data observability as a dbt package. It handles anomaly detection, schema change tracking, and volume monitoring by storing artifacts and test results directly in your warehouse. Recent versions include AI-powered data validation that lets you define tests in natural language. See my Elementary setup guide for a walkthrough.

dbt-audit-helper provides macros for comparing relations, queries, and column values. It’s the go-to tool for migration and refactoring validation: run compare_relations between your legacy table and new dbt model, and it tells you exactly where they differ. I go deeper in my audit-helper validation guide.
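A sketch of that workflow in an analysis file, with hypothetical database, schema, and column names:

```sql
-- Legacy table fetched by name; new table is the dbt model being validated
{% set old_relation = adapter.get_relation(
    database="legacy_db", schema="analytics", identifier="orders") %}

{{ audit_helper.compare_relations(
    a_relation=old_relation,
    b_relation=ref('orders'),
    primary_key="order_id"
) }}
```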

dbt-project-evaluator lints your DAG against best practices: naming conventions, model layering, test coverage, documentation coverage. It reached v1.0.0 with JSON/CSV output, making it easy to integrate into CI pipelines.

Source-specific packages

Fivetran dominates this category with 60+ packages spanning ad platforms, CRM, marketing automation, finance, product analytics, social media, HR, and dev tools. In 2024-2025, they unified previously separate _source and _transform packages into single packages per connector, so you don’t need to install both anymore. All are marked Fusion-compatible.

The architecture follows a consistent pattern: raw data from Fivetran connectors flows into the unified package (which handles both base and transform models), then optionally into cross-platform bundles, and finally into your custom dbt models. When using a cross-platform bundle like ad_reporting, don’t install individual platform packages separately; the bundle installs them as dependencies.

Some highlights:

  • Ad platforms: Facebook Ads, Google Ads, LinkedIn, Microsoft Ads, TikTok Ads, Pinterest, Snapchat Ads, Twitter Ads, Amazon Ads, Apple Search Ads, Reddit Ads
  • CRM & sales: Salesforce, HubSpot, Dynamics 365
  • Finance: QuickBooks, NetSuite, Xero, Sage Intacct, Zuora, Stripe
  • Marketing automation: Marketo, Klaviyo, Mailchimp, Pardot, Iterable

Fivetran also publishes cross-platform bundles like ad_reporting (all 11 ad platforms unified into one schema) and social_media_reporting (Facebook Pages, Instagram, LinkedIn Pages, Twitter, YouTube).

If you’re comparing ingestion tools, I covered the Fivetran vs Airbyte vs dlt comparison in a separate article. On the dbt package front specifically, Airbyte’s ecosystem is significantly less mature. The community-maintained airbyte-dbt-models monorepo exists but lacks unified cross-platform reporting. Most models require customization.

Documentation and operations

dbt-osmosis propagates column descriptions downstream through the DAG and auto-generates YAML docs via CLI. dbt-profiler profiles relations and generates doc blocks with statistics like min, max, distinct count, and null rate. dbt-coverage checks documentation and test coverage percentages and can enforce thresholds in CI.
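For instance, a CI step with dbt-coverage might look like this sketch (thresholds are illustrative; check the tool’s current CLI flags before relying on them):

```shell
# Fail the build if documentation or test coverage drops below the threshold
dbt-coverage compute doc --cov-fail-under 0.9
dbt-coverage compute test --cov-fail-under 0.8
```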

For performance monitoring, dbt-artifacts captures run artifacts into warehouse tables, letting you track build times and identify slow models over time.

Fusion and version compatibility

The dbt version landscape matters for package selection:

  • dbt Core 1.7: package-lock.yml introduced
  • dbt Core 1.8: native unit_tests: (partially supersedes community packages); tests: renamed to data_tests:
  • dbt Core 1.9-1.11: latest stable releases, with snapshot YAML config and source freshness improvements
  • dbt Fusion 2.0.0: Rust rewrite, 30x faster, incompatible manifest format (v20 vs v12)

Fusion affects packages more than any other recent change. To work with Fusion, packages must set require-dbt-version: ">=1.10.0,<3.0.0" to include the 2.0.0 range. The dbt-autofix tool can help update deprecated configurations automatically. Most major packages (dbt-utils, all Fivetran packages, audit-helper) are already Fusion-compatible, but check the Hub badge before assuming.
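In a package’s own dbt_project.yml, that constraint looks like this:

```yaml
# Accept dbt Core 1.10+ as well as the Fusion 2.x range
require-dbt-version: ">=1.10.0,<3.0.0"
```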

For your own packages.yml, follow these practices:

  • Pin to minor version ranges: [">=1.0.0", "<2.0.0"]
  • Commit package-lock.yml to version control
  • Prefer Hub packages over Git when possible (for automatic deduplication)
  • Remove transitive dependencies from your root packages.yml to avoid conflicts
  • Check require-dbt-version when evaluating new packages

Common pitfalls

Version conflicts are the number one issue. Two packages depending on different versions of dbt-utils will block installation. A typical mistake looks like this:

# packages.yml - this will cause problems
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.1.0", "<1.2.0"]  # pinned too tightly
  - package: fivetran/ad_reporting
    version: [">=2.0.0", "<3.0.0"]  # depends on dbt-utils >=1.0.0, <2.0.0

The fix: remove dbt_utils from your root packages.yml entirely and let ad_reporting pull it in as a transitive dependency. Fivetran explicitly warns users about this. Only list packages you directly use in your models, not their dependencies.

Dispatch macro errors happen when the dispatch search_order references a package that has no macros for your adapter. You’ll see an error along the lines of "Compilation Error in dispatch: could not find package". Non-core adapters like SQL Server, Spark, and Databricks frequently need explicit dispatch config in your dbt_project.yml:

# dbt_project.yml - required for Spark/Databricks
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']

This tells dbt to look for Spark-specific implementations in spark_utils before falling back to the default dbt_utils macros.

Adapter compatibility gaps vary by package. dbt-expectations doesn’t natively support Spark without the spark_utils shim. Some packages have warehouse-specific bugs between releases that only surface when you upgrade. Always check the package’s README for supported warehouses before installing.

Fusion migration surprises include packages with require-dbt-version: "<2.0.0" that simply won’t install with Fusion, and manifest incompatibility between Fusion’s v20 format and Core’s v12. If you’re testing Fusion, run dbt-autofix first and verify each package’s compatibility.

Packages vs dbt Mesh

Packages and dbt Mesh are often confused, but they solve different problems.

Packages are code sharing: you install the full source code as a library into your project. Think of them like npm packages or Python libraries.

Mesh (via dependencies.yml) is data product sharing: you reference public models across projects without installing their source code. This requires dbt Cloud Enterprise and uses model access modifiers (private, protected, public) and contracts to treat models as stable data APIs. The dbt-meshify CLI tool can help split a monolithic project into a Mesh architecture.
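A sketch of the producer side, with hypothetical model and project names: the producer marks a model public and enforces its contract so downstream projects can treat it as a stable API.

```yaml
# Producer project: models/marts/fct_orders.yml
models:
  - name: fct_orders
    access: public        # visible to other Mesh projects
    config:
      contract:
        enforced: true    # column names and types become a stable interface
```

A consumer project that lists the producer in its dependencies.yml can then select from it with a two-argument ref, e.g. {{ ref('finance_platform', 'fct_orders') }}.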

In practice: packages.yml installs code, dependencies.yml references data products. They coexist but serve different audiences and use cases.

Who maintains what

Understanding who’s behind a package tells you a lot about its reliability and longevity.

dbt Labs maintains the core utility packages (dbt-utils, dbt-codegen, dbt-audit-helper, dbt-project-evaluator), operates the Hub, and defines the package specification. These are the safest bets for long-term support.

Fivetran is the largest non-dbt-Labs contributor with five full-time staff dedicated to dbt package development, maintaining 100+ packages under Apache 2.0 licenses. The October 2025 merger between dbt Labs and Fivetran signals even deeper integration ahead, though community concerns about long-term openness of dbt Core persist.

Community contributors include Calogica/Metaplane (dbt-expectations, dbt-date), Elementary Data (observability), and individual contributors behind dbt-osmosis, dbt-coverage, and dbt-artifacts. Quality varies with no standardized review process. The Fusion compatibility badge provides some signal, but for community packages, check GitHub activity, issue response times, and how recently the package was updated before committing to it in production.

There’s no formal governance body for the ecosystem. The Hub is a registry, not a curated marketplace. Evaluate each package the way you’d evaluate any open-source dependency: look at the maintainers, the license, the test coverage, and whether anyone will still be merging PRs six months from now. If you want to contribute your own, I wrote a guide to building and publishing dbt packages.