Your dbt tests pass, but something’s still wrong. Row counts dropped 40% overnight. A column that was never null suddenly has thousands of missing values. The data arrived three hours late. Native dbt tests catch what you tell them to catch. They don’t notice the unexpected.
Elementary fills this gap. It’s a dbt-native observability tool that adds anomaly detection, historical tracking, and alerting to your existing workflow. No separate platform to manage, no complex integrations. Your observability metadata lives in your warehouse, queryable like any other table.
This guide walks through installation and configuration, with a focus on BigQuery (and brief coverage of Snowflake and Databricks).
How Elementary Works
Elementary consists of two components that work together.
The dbt package installs like any other dbt package. It creates metadata tables in your warehouse and uses on-run-end hooks to capture artifacts after every dbt execution. Model run times, test results, and schema information all flow into tables you own.
The Elementary CLI (edr) is a Python tool that reads from those warehouse tables. It generates HTML observability reports, sends alerts to Slack or Teams, and runs anomaly detection tests. The CLI connects directly to your warehouse using its own profile.
Data flows like this:
```
dbt run/test → on-run-end hooks → INSERT into Elementary tables → CLI reads tables → reports/alerts
```
This architecture means your observability data is portable. You can query it directly, build custom dashboards, or migrate to a different tool without losing history.
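Because the results live in ordinary warehouse tables, "query it directly" means exactly that. As a sketch, a BigQuery-flavored query for recent failures might look like the following (table and column names here follow Elementary's documented schema, but verify them against the tables in your own warehouse):

```sql
-- Failed or errored tests from the last day, read straight from the warehouse
-- (assumes an elementary_test_results table with status/detected_at columns --
-- inspect your schema if names differ across Elementary versions)
SELECT test_name, table_name, status, detected_at
FROM your_project.elementary.elementary_test_results
WHERE status != 'pass'
  AND detected_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY detected_at DESC;
```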
Installing the dbt Package
Add Elementary to your packages.yml:
```yaml
packages:
  - package: elementary-data/elementary
    version: 0.21.0
```
Configure the schema in dbt_project.yml:
```yaml
models:
  elementary:
    +schema: "elementary"
```
If you’re running dbt 1.8 or later, you need two additional flags:
```yaml
flags:
  require_explicit_package_overrides_for_builtin_materializations: False
  source_freshness_run_project_hooks: True
```
The Materialization Override (dbt 1.8+)
This is the step most setup guides gloss over, and it’s where many installations fail silently.
dbt 1.8 changed how package materializations work. Elementary needs to override the test materialization to capture results properly. Without this override, your tests run but Elementary’s tables stay empty.
Create macros/elementary_materialization.sql:
```sql
-- For BigQuery (and most other adapters)
{% materialization test, default %}
  {{ return(elementary.materialization_test_default()) }}
{% endmaterialization %}
```
If you’re on Snowflake, use the Snowflake-specific version instead:
```sql
{% materialization test, adapter='snowflake' %}
  {{ return(elementary.materialization_test_snowflake()) }}
{% endmaterialization %}
```
Running the Installation
```shell
dbt deps                     # Install the package
dbt run --select elementary  # Create Elementary tables
dbt test                     # Run tests to populate results
```
After this, check your warehouse. You should see an elementary schema (or whatever you configured) with tables like elementary_test_results, dbt_run_results, and dbt_models.
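One way to confirm those tables actually landed is a metadata query. The BigQuery version might look like this (substitute your project ID and configured schema name; adapt to your warehouse's information schema elsewhere):

```sql
-- Confirm the core Elementary tables exist after the initial dbt run
SELECT table_name
FROM `your-project-id.elementary.INFORMATION_SCHEMA.TABLES`
WHERE table_name IN ('elementary_test_results', 'dbt_run_results', 'dbt_models');
```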
BigQuery Configuration
The CLI needs its own connection profile to read from your warehouse. This is separate from your dbt profile.
Generate a template:
```shell
dbt run-operation elementary.generate_elementary_cli_profile
```
This outputs YAML you’ll add to ~/.edr/profiles.yml. Here’s a complete BigQuery example:
```yaml
elementary:
  outputs:
    default:
      type: bigquery
      method: oauth  # or service-account
      project: your-project-id
      dataset: your_schema_elementary
      location: US  # Required for Elementary
      threads: 4
```
The Location Parameter
Elementary requires the location parameter for BigQuery connections. This trips up people who copy their dbt profile directly: dbt treats location as optional and infers it, but Elementary’s CLI needs it stated explicitly.
If you see errors about location or region, this is usually why.
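A small pre-flight check can surface this before the CLI fails. The helper below is hypothetical (not part of Elementary); it assumes you have parsed the relevant output block of ~/.edr/profiles.yml into a dict and simply reports required BigQuery keys that are missing:

```python
# Hypothetical pre-flight check for an Elementary BigQuery profile.
# The dict mirrors one output block of ~/.edr/profiles.yml after YAML parsing.

REQUIRED_BIGQUERY_KEYS = {"type", "project", "dataset", "location"}

def missing_profile_keys(profile: dict) -> set:
    """Return required keys absent from a BigQuery output profile."""
    if profile.get("type") != "bigquery":
        return set()  # this check only covers BigQuery profiles
    return REQUIRED_BIGQUERY_KEYS - profile.keys()

profile = {
    "type": "bigquery",
    "method": "oauth",
    "project": "your-project-id",
    "dataset": "your_schema_elementary",
    # "location" intentionally omitted -- the common copy-paste mistake
}
print(sorted(missing_profile_keys(profile)))  # ['location']
```

Running this against a profile copied verbatim from dbt makes the missing key obvious before edr ever connects.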
Required Permissions
The service account or user running the CLI needs:
- BigQuery Data Viewer on the Elementary dataset
- BigQuery Metadata Viewer on your dbt datasets
- BigQuery Resource Viewer on your dbt datasets
- BigQuery Job User on the project
For a service account setup, the profile looks like:
```yaml
elementary:
  outputs:
    default:
      type: bigquery
      method: service-account
      project: your-project-id
      dataset: your_schema_elementary
      keyfile: /path/to/service-account.json
      location: US
      threads: 4
```
Snowflake and Databricks Setup
Snowflake
Snowflake supports password or keypair authentication:
```yaml
elementary:
  outputs:
    default:
      type: snowflake
      account: your_account_id
      user: elementary_user
      role: elementary_role
      private_key_path: /path/to/private.key
      database: analytics
      warehouse: transforming
      schema: elementary
      threads: 4
```
Grant the necessary permissions:
```sql
CREATE ROLE elementary_role;
GRANT USAGE ON WAREHOUSE transforming TO ROLE elementary_role;
GRANT USAGE ON DATABASE analytics TO ROLE elementary_role;
GRANT USAGE ON SCHEMA analytics.elementary TO ROLE elementary_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.elementary TO ROLE elementary_role;
GRANT SELECT ON FUTURE TABLES IN SCHEMA analytics.elementary TO ROLE elementary_role;
```
Databricks
For Unity Catalog, you must specify the catalog parameter for the three-level namespace:
```yaml
elementary:
  outputs:
    default:
      type: databricks
      host: your-workspace.cloud.databricks.com
      http_path: /sql/1.0/warehouses/your-warehouse-id
      token: your-personal-access-token
      catalog: your_catalog
      schema: elementary
      threads: 4
```
Use service principals for production deployments. Shared clusters can cause permission errors because the CLI attempts to write to package directories. Use single-user clusters or add the --update-dbt-package false flag.
CLI Installation and First Report
Install the CLI with your adapter:
```shell
pip install 'elementary-data[bigquery]'
# or
pip install 'elementary-data[snowflake]'
# or
pip install 'elementary-data[databricks]'
```
Generate your first report:
```shell
edr report
```
This creates an HTML file in ./edr_target/ by default. Open it in a browser to see:
- Test results with pass/fail/warn counts
- Model run history and durations
- Data lineage from your manifest
- Anomaly detection results (once you add those tests)
Useful flags for report generation:
| Flag | Purpose |
|---|---|
| `--days-back 7` | Limit to the last 7 days of data |
| `--select last_invocation` | Show only the most recent run |
| `--disable-samples` | Skip data sampling (for PII concerns) |
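These flags combine as you'd expect. For example, a privacy-conscious report limited to the past week (using only the flags above):

```shell
edr report --days-back 7 --disable-samples
```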
Troubleshooting Common Issues
Empty Report or No Test Results
The elementary_test_results table is empty even after running tests.
Cause: The package isn’t capturing results, usually because the materialization override is missing (dbt 1.8+) or the package wasn’t installed correctly.
Fix:
- Verify the materialization macro exists in your project
- Run `dbt deps` to reinstall packages
- Run `dbt run --select elementary --full-refresh` to recreate tables
- Run `dbt test` again
"command not found: edr"
The CLI isn’t in your PATH.
Fix: Install in a virtual environment and activate it:
```shell
python3 -m venv venv_elementary
source venv_elementary/bin/activate
pip install 'elementary-data[bigquery]'
```
Or use python -m edr report to run without PATH modification.
BigQuery Location Errors
Errors mentioning location, region, or multi-region.
Fix: Add the location parameter to your Elementary profile. Use US, EU, or your specific region.
Tables Created as Views
Elementary tables show up as views instead of incremental tables, causing poor performance.
Cause: A conflicting materialization config in your project is overriding Elementary’s settings.
Fix: Check for +materialized: view configs that might apply to the Elementary schema. Run dbt run --select elementary --full-refresh after fixing.
Databricks Permission Errors on Shared Clusters
The CLI fails with permission errors when run against a shared cluster.
Fix: Use a single-user cluster, or add --update-dbt-package false to CLI commands.
What’s Next
With Elementary installed, you have the foundation for dbt-native observability. The metadata tables are populating, and you can generate reports showing test results and model performance.
The real value comes from what you build on top of this:
- Anomaly detection tests that catch volume drops, freshness issues, and distribution shifts without manual thresholds
- Alerting to Slack or Teams when tests fail
- Custom dashboards built by querying Elementary tables directly from your BI tool
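As a sketch of that last idea, a daily failure-rate metric is a natural first dashboard tile. The query below uses BigQuery syntax and assumes the status and detected_at columns shown in the tables section; verify both against your Elementary version before wiring it into a BI tool:

```sql
-- Daily test failure rate -- a starting point for a BI dashboard tile
-- (assumes elementary_test_results with status and detected_at columns)
SELECT
  DATE(detected_at) AS run_date,
  COUNTIF(status != 'pass') / COUNT(*) AS failure_rate
FROM your_project.elementary.elementary_test_results
GROUP BY run_date
ORDER BY run_date;
```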
Elementary’s approach (storing everything in your warehouse as dbt models) means you’re not locked in. The data is yours to query, visualize, and extend however you need.
Check the Elementary documentation for anomaly detection configuration and alerting setup. If you’re evaluating observability tools more broadly, the decision often comes down to whether you want a dbt-native solution you configure in YAML or a standalone platform with more automated ML-based detection.