Building Data Quality Dashboards with Elementary

Running dbt tests catches problems. But passing tests today tells you nothing about whether your data quality is improving, degrading, or stable over time. Without historical visibility, you’re flying blind between failures.

Elementary solves this by storing every test result in your warehouse. The raw data is there; this guide shows how to turn it into dashboards your team will actually use: self-contained HTML reports for quick reviews, hosted versions for team access, and custom BI dashboards for the KPIs that matter to your organization.

Prerequisites

This guide assumes you’ve already installed Elementary. If you haven’t, check the complete setup guide first.

Quick verification that everything is working:

```bash
# Confirm the dbt package is installed
dbt deps
dbt run --select elementary

# Confirm the CLI is available
edr --version
```

You should see Elementary tables in your warehouse (like elementary_test_results, dbt_run_results, and dbt_models) and the edr command should return a version number.

Generating your first report

The edr report command creates a self-contained HTML file with everything you need to assess your data quality at a glance:

```bash
edr report
```

By default, this creates elementary_report.html in the ./edr_target directory. Open it in a browser, and you’re looking at your data quality dashboard.

Key flags you’ll use regularly:

| Flag | What it does |
| --- | --- |
| `--file-path report.html` | Custom output location |
| `--target-path ./reports` | Change the output directory |
| `--days-back 7` | Limit to last 7 days of data |
| `--select last_invocation` | Show only the most recent run |
| `--project-name "Analytics"` | Custom name in the report header |
| `--disable-samples` | Hide data samples (useful for PII) |

For a focused view of your latest run:

```bash
edr report --select last_invocation --file-path ./reports/latest.html
```

For a weekly summary:

```bash
edr report --days-back 7 --file-path ./reports/weekly.html
```

Understanding the report sections

The HTML report packs a lot of information into a single file. Here’s what you’ll find.

Data health review

The homepage shows your overall data health status: total test count, pass/fail/warn breakdown, and test failures plotted over time. This is your starting point for understanding whether things are getting better or worse.

The anomaly detection summary highlights statistical outliers Elementary caught automatically. Model run status gives you a quick view of execution health across your project.

Test results

Each test gets its own detailed view with execution history, the test description, and configuration. When tests fail, you see:

  • The compiled SQL so you can debug directly
  • Failed row samples (unless disabled for PII)
  • Historical pass/fail pattern for that specific test

The time-series view shows whether failures are one-off events or recurring patterns.
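When you pull the same history out of the warehouse yourself, the one-off vs. recurring distinction is easy to automate. A minimal sketch, assuming a hypothetical helper fed with a test's chronological daily statuses (e.g. queried from elementary_test_results); the "more than one failure in the window" threshold is an illustrative heuristic, not part of Elementary:

```python
from typing import List


def classify_failures(statuses: List[str]) -> str:
    """Classify a test's recent history as healthy, one-off, or recurring.

    `statuses` is a chronological list of daily results ('pass'/'fail').
    Illustrative heuristic: more than one failure in the window counts
    as a recurring pattern worth deeper investigation.
    """
    failures = sum(1 for s in statuses if s == "fail")
    if failures == 0:
        return "healthy"
    return "one-off" if failures == 1 else "recurring"


print(classify_failures(["pass", "pass", "fail", "pass"]))          # one-off
print(classify_failures(["pass", "fail", "pass", "fail", "fail"]))  # recurring
```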

Model runtime tracking

Execution duration charts reveal performance trends over time. You’ll spot models that are gradually getting slower, identify bottlenecks, and see metrics like bytes billed and rows affected. This section is particularly useful for cost optimization work.

Anomaly detection

For Elementary anomaly tests, you get time-series charts showing actual metric values plotted against expected ranges. The visualization distinguishes between the training period (where baselines are established) and the detection period (where anomalies are flagged). Column-level tracking displays each monitored metric.

Lineage

The lineage view shows end-to-end data flow from your manifest, with test coverage overlaid. When a test fails, you can trace impact downstream and understand what’s at risk.

Hosting your dashboard

An HTML file on your laptop doesn’t help your team. Hosting the report makes it accessible to everyone who needs visibility into data quality.

AWS S3

```bash
edr send-report \
  --aws-access-key-id $AWS_ACCESS_KEY_ID \
  --aws-secret-access-key $AWS_SECRET_ACCESS_KEY \
  --s3-bucket-name your-reports-bucket \
  --bucket-file-path reports/elementary.html \
  --update-bucket-website true
```

Access at: http://your-reports-bucket.s3-website-us-east-1.amazonaws.com/index.html

Google Cloud Storage

```bash
edr send-report \
  --google-service-account-path /path/to/service-account.json \
  --gcs-bucket-name your-reports-bucket \
  --update-bucket-website true
```

Access at: https://storage.googleapis.com/your-reports-bucket/index.html

Azure Blob Storage

```bash
edr send-report \
  --azure-container-name your-container \
  --azure-connection-string $AZURE_CONNECTION_STRING \
  --update-bucket-website true
```

Access at: https://your-account.blob.core.windows.net/your-container/index.html

Automating report generation

Run edr send-report after each dbt build in your orchestration tool. In Airflow, add it as a downstream task. In GitHub Actions:

```yaml
- name: Generate Elementary report
  run: |
    edr send-report \
      --gcs-bucket-name ${{ secrets.REPORTS_BUCKET }} \
      --google-service-account-path ./sa.json
```

The report updates automatically, and your team always sees the latest data quality status. For real-time notifications when tests fail, see setting up Elementary alerts.
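In a Python-based orchestrator, the publish step is just a subprocess call. A minimal sketch, assuming the GCS flags shown above; the bucket name and service-account path are placeholders, and the command is built separately so it can be inspected or dry-run before executing:

```python
import subprocess
from typing import List


def build_send_report_command(bucket: str, service_account_path: str) -> List[str]:
    """Build the edr send-report invocation used after a dbt build.

    Flags mirror the GCS example above; both arguments are placeholders.
    """
    return [
        "edr", "send-report",
        "--gcs-bucket-name", bucket,
        "--google-service-account-path", service_account_path,
    ]


def publish_report(bucket: str, service_account_path: str) -> None:
    # check=True propagates a non-zero exit code as an exception,
    # so a failed upload fails the surrounding pipeline task.
    subprocess.run(build_send_report_command(bucket, service_account_path), check=True)


print(" ".join(build_send_report_command("my-reports-bucket", "./sa.json")))
```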

Building custom dashboards in BI tools

The HTML report covers most needs, but at some point you’ll want more control. Maybe you need executive-level summaries with your own branding, or a unified view across multiple dbt projects. Maybe your team already has BI dashboards and data quality should live alongside operational metrics rather than in a separate tool.

Since Elementary stores everything in queryable tables, you can connect any BI tool that reads from your warehouse: Looker, Tableau, Metabase, Power BI, whatever your team already uses.

Key tables

| Table | Contents |
| --- | --- |
| `elementary_test_results` | All test execution results with status, timing, and metadata |
| `dbt_run_results` | Model run history and execution timing |
| `dbt_models` | Model metadata from your manifest |

Example queries

Daily pass/fail trend

```sql
SELECT
  DATE(detected_at) AS date,
  COUNT(CASE WHEN status = 'pass' THEN 1 END) AS passed,
  COUNT(CASE WHEN status = 'fail' THEN 1 END) AS failed,
  ROUND(
    COUNT(CASE WHEN status = 'pass' THEN 1 END) * 100.0 / COUNT(*),
    2
  ) AS pass_rate
FROM elementary_test_results
WHERE detected_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY 1
ORDER BY 1;
```

Slowest models this week

```sql
SELECT
  model_name,
  AVG(execution_time) AS avg_seconds,
  MAX(execution_time) AS max_seconds,
  COUNT(*) AS runs
FROM model_run_results
WHERE created_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY 1
ORDER BY avg_seconds DESC
LIMIT 20;
```

Tests by status and tag

```sql
SELECT
  tags,
  status,
  COUNT(*) AS count
FROM elementary_test_results
WHERE detected_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY 1, 2
ORDER BY 1, 2;
```

Designing effective data quality KPIs

Raw test results need interpretation. These KPIs translate test data into metrics stakeholders understand.

Test pass rate

The most fundamental metric: what percentage of tests are passing?

pass_rate = passed_tests / total_tests * 100

Track this daily. A declining trend signals growing problems, even if nothing is actively failing. A 100% pass rate for weeks might mean your tests aren’t strict enough.
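The formula and the trend check above can be sketched in a few lines. This is an illustrative heuristic, not an Elementary feature: `is_declining` compares the mean pass rate of the most recent window against the window before it, with the window size as a tunable assumption:

```python
from typing import List


def pass_rate(passed: int, total: int) -> float:
    """Daily pass rate as a percentage, mirroring the formula above."""
    return round(passed / total * 100, 2) if total else 0.0


def is_declining(rates: List[float], window: int = 7) -> bool:
    """Flag a declining trend by comparing two adjacent windows of daily rates."""
    if len(rates) < 2 * window:
        return False  # not enough history to compare two full windows
    recent = sum(rates[-window:]) / window
    previous = sum(rates[-2 * window:-window]) / window
    return recent < previous


daily = [99.0] * 7 + [97.0, 96.5, 96.0, 95.0, 94.0, 93.5, 92.0]
print(pass_rate(97, 100))   # 97.0
print(is_declining(daily))  # True
```

Note that every rate here can be above any alerting threshold while the trend is still clearly downward, which is exactly the signal a point-in-time view misses.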

Data freshness SLA

What percentage of your data sources meet their freshness requirements?

This requires defining SLAs first. Tag tables with their expected update frequency, then measure:

```sql
SELECT
  COUNT(CASE WHEN status = 'pass' THEN 1 END) * 100.0 / COUNT(*) AS sla_compliance
FROM elementary_test_results
WHERE test_type = 'freshness_anomalies'
  AND detected_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY);
```

Model success rate

What percentage of model runs complete successfully?

success_rate = successful_runs / total_runs * 100

Track separately from test pass rate. A model can run successfully but fail tests, or fail to run at all.

Anomaly detection rate

How many anomalies is Elementary catching per day or per week? This metric is more useful as a trend than as an absolute number. A sudden spike often points to upstream data issues rather than a problem with your models. On the other hand, a flat zero over weeks might mean your anomaly sensitivity is too low, or you’re not monitoring the right columns. Either extreme is worth investigating.

Test coverage

What percentage of your models have at least one test?

```sql
SELECT
  COUNT(DISTINCT CASE WHEN has_tests THEN model_name END) * 100.0 /
    COUNT(DISTINCT model_name) AS coverage
FROM (
  SELECT
    m.name AS model_name,
    EXISTS (
      SELECT 1 FROM elementary_test_results t
      WHERE t.model_unique_id = m.unique_id
    ) AS has_tests
  FROM dbt_models m
  WHERE m.resource_type = 'model'
);
```

Coverage tells you where you have blind spots. A model with zero tests is a model where problems go undetected.

Mapping to data quality dimensions

These KPIs map to standard data quality dimensions, which helps when you need to report on quality in terms stakeholders recognize:

| Dimension | Elementary Tests | KPI |
| --- | --- | --- |
| Completeness | `not_null`, null percentage anomalies | Null rate trends |
| Consistency | Referential integrity, `relationships` | Cross-table validation pass rate |
| Timeliness | `freshness_anomalies`, `event_freshness_anomalies` | SLA compliance |
| Uniqueness | `unique`, duplicate detection | Duplicate rate |
| Volume | `volume_anomalies` | Row count variance from baseline |
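A dimension rollup like this is straightforward to compute from query results. A minimal sketch: the test-name-to-dimension mapping is a hypothetical example following the pairings above, and the input is assumed to be (test name, status) pairs pulled from elementary_test_results:

```python
from typing import Dict, List, Tuple

# Hypothetical mapping from test names to quality dimensions,
# following the dimension table above.
DIMENSION_BY_TEST = {
    "not_null": "completeness",
    "relationships": "consistency",
    "freshness_anomalies": "timeliness",
    "event_freshness_anomalies": "timeliness",
    "unique": "uniqueness",
    "volume_anomalies": "volume",
}


def rollup_by_dimension(results: List[Tuple[str, str]]) -> Dict[str, float]:
    """Aggregate (test_name, status) pairs into per-dimension pass rates."""
    totals: Dict[str, int] = {}
    passed: Dict[str, int] = {}
    for test_name, status in results:
        dim = DIMENSION_BY_TEST.get(test_name, "other")
        totals[dim] = totals.get(dim, 0) + 1
        if status == "pass":
            passed[dim] = passed.get(dim, 0) + 1
    return {d: round(passed.get(d, 0) / t * 100, 1) for d, t in totals.items()}


sample = [("not_null", "pass"), ("not_null", "fail"), ("unique", "pass")]
print(rollup_by_dimension(sample))  # {'completeness': 50.0, 'uniqueness': 100.0}
```

The same rollup works as a SQL GROUP BY with a CASE expression mapping test names to dimensions; the Python version is just easier to show in full.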

Best practices

Organize by domain

Structure dashboards around how your team thinks about data, not how Elementary organizes it internally. Group tests by domain (marketing, finance, product), by criticality (tier-1 for revenue-impacting, tier-2 for operational, tier-3 for exploratory), or by SLA (hourly, daily, weekly). The right grouping depends on who’s looking at the dashboard. Engineers care about criticality tiers. Business stakeholders care about their domain.

Tag consistently

Tags are what make all of this filtering possible, so it’s worth being deliberate about them from the start. Apply them in your test definitions:

```yaml
tests:
  - elementary.volume_anomalies:
      tags: ['critical', 'finance', 'daily-check']
      meta:
        owner: 'analytics-team'
```

Then filter reports:

```bash
edr report --select tag:critical
```

Or build BI dashboards filtered by tag.

Set refresh frequency

Match report cadence to how often your data actually updates. Production reports should generate after each dbt run so failures surface immediately. Executive dashboards work better as daily rollups since hourly noise distracts more than it helps. Development reports are on-demand: generate them when you need them, skip them when you don’t.

Manage historical data

Elementary’s incremental models retain history indefinitely by default, which is what you want for long-term trend analysis. But reports that try to render months of data get slow and hard to read. Use --days-back to limit report scope without losing the underlying history:

```bash
# Quick daily report (7 days)
edr report --days-back 7 --file-path ./reports/daily.html

# Monthly trend report (30 days)
edr report --days-back 30 --file-path ./reports/monthly.html
```

The full history stays in your warehouse tables, available for BI dashboards and ad hoc queries whenever you need it.


The gap between “we run dbt tests” and “we have data quality visibility” is mostly about making existing information accessible. Elementary already collects the data. The HTML report gets you from zero to a working dashboard in a single command. Hosting it gives your team access. Building custom KPIs in your BI tool connects data quality to the metrics your organization already tracks.

Start with the generated report. If that’s enough for your team, stop there. When you need more, the underlying tables are waiting in your warehouse, ready for whatever custom dashboards make sense for your situation.

If you’re still setting up Elementary, the installation and configuration guide covers everything from package installation to anomaly detection configuration.