CI/CD for dbt packages runs the integration test suite across every supported combination of warehouse and dbt version, catching adapter-specific failures and breaking changes from dbt Core releases. GitHub Actions with a matrix strategy is the standard approach.
Matrix Testing with GitHub Actions
The matrix strategy runs the same test suite across multiple dimensions — warehouses and dbt versions — in parallel:
```yaml
name: CI

on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        warehouse: [snowflake, bigquery, postgres]
        dbt-version: ['1.9.0', '1.11.0']
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install dbt-${{ matrix.warehouse }}==${{ matrix.dbt-version }}
      - run: |
          cd integration_tests/
          dbt deps
          dbt seed --target ${{ matrix.warehouse }}
          dbt run --target ${{ matrix.warehouse }}
          dbt test --target ${{ matrix.warehouse }}
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
          BIGQUERY_KEYFILE: ${{ secrets.BIGQUERY_KEYFILE }}
```

This configuration creates 6 parallel jobs (3 warehouses × 2 dbt versions). Each job installs the appropriate dbt adapter, runs the full integration test suite, and reports pass/fail independently.
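One detail worth noting: by default GitHub Actions cancels the remaining matrix jobs as soon as one fails (`fail-fast` is true). To let every warehouse/version combination report its own result, disable it under `strategy`:

```yaml
    strategy:
      fail-fast: false   # let every warehouse/version combination run to completion
      matrix:
        warehouse: [snowflake, bigquery, postgres]
        dbt-version: ['1.9.0', '1.11.0']
```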
What the Matrix Catches
Each dimension catches different problems:
Warehouse dimension catches:
- SQL syntax differences (BigQuery's `SAFE_DIVIDE` vs `CASE WHEN ... = 0`)
- Missing dispatch implementations for specific adapters
- Type system differences (BigQuery's `INT64` vs Snowflake's `NUMBER`)
- Function name differences (`DATEADD` argument ordering across dialects)
dbt version dimension catches:
- Deprecated features you’re still using
- Behavior changes in built-in macros
- New dbt Core requirements or configuration changes
- Compatibility with both dbt Core 1.x and Fusion 2.x
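The versions you test should also line up with what the package itself declares it supports. A minimal sketch of that declaration in the package's dbt_project.yml (the name and version range here are illustrative, not something the CI setup requires):

```yaml
# dbt_project.yml of the package (name and version range are illustrative)
name: my_package
version: 1.0.0
config-version: 2
require-dbt-version: [">=1.9.0", "<3.0.0"]
```

If a user installs the package with a dbt version outside that range, dbt raises an error up front instead of failing somewhere inside a macro.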
Without the matrix, you’re shipping a package that’s only tested against one combination. Users on other combinations become your QA team.
Profile Configuration
The integration test project needs profiles that connect to each warehouse. These live in integration_tests/profiles.yml or are configured via environment variables:
```yaml
integration_tests:
  target: postgres
  outputs:
    postgres:
      type: postgres
      host: "{{ env_var('POSTGRES_HOST', 'localhost') }}"
      user: "{{ env_var('POSTGRES_USER', 'dbt_test') }}"
      password: "{{ env_var('POSTGRES_PASSWORD') }}"
      port: 5432
      dbname: dbt_test
      schema: dbt_test

    snowflake:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      user: "{{ env_var('SNOWFLAKE_USER') }}"
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      role: TRANSFORMER
      database: DBT_TEST
      warehouse: COMPUTE_WH
      schema: dbt_test

    bigquery:
      type: bigquery
      method: service-account
      project: "{{ env_var('BIGQUERY_PROJECT', 'my-test-project') }}"
      dataset: dbt_test
      keyfile: "{{ env_var('BIGQUERY_KEYFILE') }}"
```

Each output matches a value in the matrix's warehouse list. The --target flag in the CI step selects which profile to use.
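dbt also needs to be told where this profiles.yml lives, since it is not in the default ~/.dbt/ location. One option, assuming the file is committed in integration_tests/, is to set DBT_PROFILES_DIR (or pass --profiles-dir) in the CI step:

```yaml
      - run: |
          cd integration_tests/
          dbt deps
          dbt seed --target ${{ matrix.warehouse }}
          dbt run --target ${{ matrix.warehouse }}
          dbt test --target ${{ matrix.warehouse }}
        env:
          # Read profiles.yml from the current directory instead of ~/.dbt/.
          # "." works because the run block cd's into integration_tests/ first.
          DBT_PROFILES_DIR: .
```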
Credential Management
Store warehouse credentials as GitHub Secrets. Never commit credentials, connection strings, or service account keys to the repository.
For BigQuery, the service account keyfile needs special handling since it’s a JSON file, not a simple string:
```yaml
steps:
  - name: Write BigQuery keyfile
    if: matrix.warehouse == 'bigquery'
    run: echo '${{ secrets.BIGQUERY_KEYFILE_JSON }}' > /tmp/bigquery-keyfile.json
    env:
      BIGQUERY_KEYFILE: /tmp/bigquery-keyfile.json
```

Store the entire JSON key as a secret (BIGQUERY_KEYFILE_JSON) and write it to a temporary file in the CI step. Set BIGQUERY_KEYFILE to the file path so dbt can read it.
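One caveat: an env: value set on a step applies only to that step. If the dbt commands run in a later step, one way to hand them the keyfile path is to append it to $GITHUB_ENV instead (same secret and path as above):

```yaml
      - name: Write BigQuery keyfile
        if: matrix.warehouse == 'bigquery'
        run: |
          echo '${{ secrets.BIGQUERY_KEYFILE_JSON }}' > /tmp/bigquery-keyfile.json
          # Export the path for every later step in this job
          echo "BIGQUERY_KEYFILE=/tmp/bigquery-keyfile.json" >> "$GITHUB_ENV"
```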
For Snowflake, key-pair authentication is more secure than password-based authentication in CI:
```yaml
snowflake:
  type: snowflake
  account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
  user: "{{ env_var('SNOWFLAKE_USER') }}"
  private_key_path: "{{ env_var('SNOWFLAKE_KEY_PATH') }}"
  role: TRANSFORMER
  database: DBT_TEST
  warehouse: COMPUTE_WH
  schema: dbt_test
```
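The private key itself can be handled like the BigQuery keyfile: store the PEM-encoded key as a secret, write it to a temporary file at the start of the job, and point SNOWFLAKE_KEY_PATH at it. The secret name SNOWFLAKE_PRIVATE_KEY below is an assumption:

```yaml
      - name: Write Snowflake private key
        if: matrix.warehouse == 'snowflake'
        run: |
          # SNOWFLAKE_PRIVATE_KEY is an assumed secret holding the PEM-encoded key
          echo '${{ secrets.SNOWFLAKE_PRIVATE_KEY }}' > /tmp/snowflake_key.p8
          echo "SNOWFLAKE_KEY_PATH=/tmp/snowflake_key.p8" >> "$GITHUB_ENV"
```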
Schema Isolation in CI

Multiple CI runs hitting the same warehouse can collide if they write to the same schema. Use dynamic schema names based on the run ID:
```yaml
- run: |
    cd integration_tests/
    dbt deps
    dbt seed --target ${{ matrix.warehouse }} --vars "{'my_package_schema': 'ci_${{ github.run_id }}'}"
    dbt run --target ${{ matrix.warehouse }}
    dbt test --target ${{ matrix.warehouse }}
```

Or configure the profile to include the run ID in the schema:
schema: "dbt_ci_{{ env_var('GITHUB_RUN_ID', 'local') }}"This prevents parallel runs from stepping on each other’s data.
Cost Control
Running integration tests across three warehouses on every PR can accumulate costs. A few strategies to keep bills manageable:
- Use the smallest warehouse/slot configuration. Integration test seeds are tiny — you don’t need compute power.
- Only run the full matrix on PRs to main. Feature branch pushes can run a single adapter (e.g., Postgres) for fast feedback, with the full matrix as a merge gate (see the sketch after this list).
- Clean up after runs. Drop the CI schema at the end of the workflow to avoid storage costs.
```yaml
- name: Cleanup
  if: always()
  run: |
    cd integration_tests/
    dbt run-operation drop_schema --args "{'schema': 'ci_${{ github.run_id }}'}" --target ${{ matrix.warehouse }}
```
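A sketch of the split described in the list above: a single-adapter job on feature-branch pushes and the full matrix as a merge gate on pull requests into main. The job names and the Postgres-only default are assumptions:

```yaml
on:
  pull_request:
    branches: [main]
  push:
    branches-ignore: [main]

jobs:
  # Fast feedback on feature-branch pushes: Postgres only
  quick-test:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install dbt-postgres
      # ...dbt deps / seed / run / test against the postgres target...

  # Full matrix only for pull requests targeting main
  full-matrix:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        warehouse: [snowflake, bigquery, postgres]
        dbt-version: ['1.9.0', '1.11.0']
    steps:
      - uses: actions/checkout@v4
      # ...same install and test steps as the matrix workflow above...
```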
Beyond Pull Requests

CI on pull requests is the baseline. For mature packages, add:
- Nightly runs against latest dbt pre-releases (sketched below). Catches compatibility issues before a dbt release goes GA, giving you time to fix things before users report problems.
- Release automation. When you tag a new version, automatically run the full matrix and only create the GitHub release if all tests pass. This prevents publishing a broken version.
- Changelog generation. Tools like `git-cliff` can auto-generate changelogs from conventional commit messages, reducing the manual effort of maintaining `CHANGELOG.md`.
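The nightly pre-release run can be a separate scheduled workflow. A minimal sketch, assuming a Postgres-only smoke test (the cron time and single adapter are arbitrary choices):

```yaml
name: Nightly pre-release check

on:
  schedule:
    - cron: '0 3 * * *'   # every night at 03:00 UTC

jobs:
  prerelease:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      # --pre allows release candidates and betas of the adapter
      # and its dependencies, including dbt-core
      - run: pip install --pre dbt-postgres
      - run: |
          cd integration_tests/
          dbt deps
          dbt seed --target postgres
          dbt run --target postgres
          dbt test --target postgres
```

Release automation is a tag-triggered workflow that creates the GitHub release only after the tests pass: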
```yaml
on:
  push:
    tags:
      - 'v*'

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      # Run full test matrix first
      # Then create GitHub release only on success
      - uses: softprops/action-gh-release@v1
        with:
          generate_release_notes: true
```

The Hub's hubcap script picks up new GitHub releases within an hour, so a passing release workflow means your update is live for users with minimal delay.
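One way to fill in the "run full test matrix first" comment above is to repeat the matrix job in the release workflow and gate the release job on it with needs; the job names here are assumptions:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        warehouse: [snowflake, bigquery, postgres]
        dbt-version: ['1.9.0', '1.11.0']
    steps:
      - uses: actions/checkout@v4
      # ...same install / seed / run / test steps as the CI workflow...

  release:
    needs: test   # waits for every matrix combination to succeed
    runs-on: ubuntu-latest
    steps:
      - uses: softprops/action-gh-release@v1
        with:
          generate_release_notes: true
```

If any combination fails, `needs` blocks the release job and no release is published.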