Elementary stores every test result in the warehouse as queryable tables — the same tables the HTML report reads from. Any BI tool that can connect to the warehouse can query them directly.
There are three common reasons to build custom dashboards rather than relying on the HTML report:

- Executive-level views are easier to embed in existing BI tools alongside operational metrics.
- Teams with multiple dbt projects may need a unified view.
- Organizations may want custom thresholds or domain-specific definitions of acceptable data quality.
## The three tables
Everything you need lives in three tables:
| Table | Contents |
|---|---|
| elementary_test_results | All test execution results with status, timing, and metadata |
| dbt_run_results | Model run history and execution timing |
| dbt_models | Model metadata from your manifest |
Elementary creates these in the schema you configured during setup (typically elementary or your_project_elementary). They’re incremental models, so they grow over time and retain full history unless you truncate them manually.
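If you do want to cap that growth, a periodic delete keyed on detected_at is enough. Here is a minimal sketch of the idea, run against an in-memory SQLite mock of elementary_test_results rather than a real warehouse; the 90-day window is an arbitrary choice for illustration:

```python
import sqlite3
from datetime import date, timedelta

# Mock stand-in for elementary_test_results; dates stored as ISO strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elementary_test_results (detected_at TEXT, status TEXT)")

today = date(2024, 6, 1)
rows = [
    ((today - timedelta(days=120)).isoformat(), "pass"),  # outside the window
    ((today - timedelta(days=10)).isoformat(), "fail"),
    (today.isoformat(), "pass"),
]
conn.executemany("INSERT INTO elementary_test_results VALUES (?, ?)", rows)

# Prune history older than a 90-day retention window.
cutoff = (today - timedelta(days=90)).isoformat()
conn.execute("DELETE FROM elementary_test_results WHERE detected_at < ?", (cutoff,))

remaining = conn.execute("SELECT COUNT(*) FROM elementary_test_results").fetchone()[0]
```

In a real warehouse this would be a scheduled DELETE (or a dbt operation) using your warehouse's date functions instead of a Python-side cutoff.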
## Useful queries
### Daily pass/fail trend
The most fundamental data quality metric: what percentage of tests are passing, tracked daily.
```sql
SELECT
  DATE(detected_at) AS date,
  COUNT(CASE WHEN status = 'pass' THEN 1 END) AS passed,
  COUNT(CASE WHEN status = 'fail' THEN 1 END) AS failed,
  ROUND(COUNT(CASE WHEN status = 'pass' THEN 1 END) * 100.0 / COUNT(*), 2) AS pass_rate
FROM elementary_test_results
WHERE detected_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY 1
ORDER BY 1;
```

The date arithmetic here uses BigQuery syntax; swap in your warehouse's equivalent (for example, DATEADD on Snowflake or CURRENT_DATE - INTERVAL '30 days' on Postgres-style warehouses).

Plot this as a line chart in your BI tool. A declining pass rate is worth investigating even if no individual test has triggered a critical alert.
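To make the pivot logic concrete, here is the same query shape run against an in-memory SQLite mock with two days of fabricated results (the dialect-specific date filter is dropped, since the mock data is already in range):

```python
import sqlite3

# Mock elementary_test_results with two days of results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elementary_test_results (detected_at TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO elementary_test_results VALUES (?, ?)",
    [
        ("2024-06-01", "pass"), ("2024-06-01", "pass"), ("2024-06-01", "fail"),
        ("2024-06-02", "pass"), ("2024-06-02", "fail"),
        ("2024-06-02", "fail"), ("2024-06-02", "fail"),
    ],
)

# Conditional COUNTs pivot statuses into columns; dividing by COUNT(*)
# turns them into a daily pass rate.
trend = conn.execute("""
    SELECT DATE(detected_at) AS date,
           COUNT(CASE WHEN status = 'pass' THEN 1 END) AS passed,
           COUNT(CASE WHEN status = 'fail' THEN 1 END) AS failed,
           ROUND(COUNT(CASE WHEN status = 'pass' THEN 1 END) * 100.0 / COUNT(*), 2) AS pass_rate
    FROM elementary_test_results
    GROUP BY 1
    ORDER BY 1
""").fetchall()
```

Each row comes back as (date, passed, failed, pass_rate), which is exactly the shape a BI line chart wants.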
### Slowest models this week
```sql
SELECT
  model_name,
  AVG(execution_time) AS avg_seconds,
  MAX(execution_time) AS max_seconds,
  COUNT(*) AS runs
FROM dbt_run_results
WHERE created_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY 1
ORDER BY avg_seconds DESC
LIMIT 20;
```

This query is most useful as a bar chart sorted by avg_seconds. Models that appear in the top 10 week-over-week are candidates for optimization work.
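The aggregation itself is easy to sanity-check against mock data. A sketch using an in-memory SQLite table with fabricated run times (the model and column names follow the article, not a verified Elementary schema):

```python
import sqlite3

# Mock run history: one row per model execution.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbt_run_results (model_name TEXT, execution_time REAL)")
conn.executemany(
    "INSERT INTO dbt_run_results VALUES (?, ?)",
    [("orders", 120.0), ("orders", 180.0), ("customers", 30.0), ("customers", 50.0)],
)

# AVG surfaces chronically slow models; MAX catches one-off spikes.
slowest = conn.execute("""
    SELECT model_name,
           AVG(execution_time) AS avg_seconds,
           MAX(execution_time) AS max_seconds,
           COUNT(*) AS runs
    FROM dbt_run_results
    GROUP BY 1
    ORDER BY avg_seconds DESC
""").fetchall()
```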
### Tests by status and tag
```sql
SELECT
  tags,
  status,
  COUNT(*) AS count
FROM elementary_test_results
WHERE detected_at >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY 1, 2
ORDER BY 1, 2;
```

If you’ve tagged your tests with domain tags (finance, marketing, product) or criticality tiers (critical, tier-2), this query becomes a domain-by-domain breakdown of test health. Without consistent tagging, it’s less useful.
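One wrinkle worth knowing: in some Elementary versions the tags column holds a serialized list (something like '["finance", "critical"]') rather than one tag per row, so a plain GROUP BY gives one row per tag *combination*. A LIKE filter is a crude but workable way to scope to a single tag. A sketch against mock data (the serialized format shown is an assumption; check your own tags column first):

```python
import sqlite3

# Mock results where tags is a serialized list, not a single value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elementary_test_results (tags TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO elementary_test_results VALUES (?, ?)",
    [
        ('["finance"]', "pass"),
        ('["finance", "critical"]', "fail"),
        ('["marketing"]', "pass"),
    ],
)

# Substring match scopes the breakdown to one tag regardless of
# what other tags appear alongside it.
finance = conn.execute("""
    SELECT status, COUNT(*) AS count
    FROM elementary_test_results
    WHERE tags LIKE '%finance%'
    GROUP BY 1
    ORDER BY 1
""").fetchall()
```

Most warehouses offer a proper JSON or array function for this; the LIKE approach is just the lowest common denominator.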
## Building on top of these tables
The tables are richer than the examples above suggest. elementary_test_results includes the test type, model name, column name, test description, and the compiled SQL for each result. dbt_run_results includes bytes billed and rows affected in addition to execution time. dbt_models includes node metadata from your manifest, which means owner information, tags, and any meta fields you’ve set propagate through to queryable columns.
This means you can build dashboards filtered by owner, sorted by cost impact, or scoped to a specific domain — all through standard SQL queries against tables you control. The data is already there from Elementary’s normal operation; building the dashboard is just a matter of writing the queries.
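As one example of that, a pass-rate-by-owner rollup is a join away. The sketch below uses in-memory SQLite mocks of dbt_models and elementary_test_results; the name, owner, and model_name columns are illustrative assumptions, so verify them against your actual Elementary schema:

```python
import sqlite3

# Mock dbt_models (owner pulled from dbt meta) and test results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbt_models (name TEXT, owner TEXT)")
conn.execute("CREATE TABLE elementary_test_results (model_name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO dbt_models VALUES (?, ?)",
    [("orders", "finance-team"), ("sessions", "product-team")],
)
conn.executemany(
    "INSERT INTO elementary_test_results VALUES (?, ?)",
    [("orders", "pass"), ("orders", "fail"), ("sessions", "pass"), ("sessions", "pass")],
)

# Join test results to model metadata, then roll up pass rate per owner.
by_owner = conn.execute("""
    SELECT m.owner,
           ROUND(COUNT(CASE WHEN r.status = 'pass' THEN 1 END) * 100.0 / COUNT(*), 2) AS pass_rate
    FROM elementary_test_results r
    JOIN dbt_models m ON r.model_name = m.name
    GROUP BY 1
    ORDER BY 1
""").fetchall()
```

The same join pattern extends to any meta field you propagate through dbt_models: cost center, domain, SLA tier.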
One practical note: the table and column names can vary slightly depending on your Elementary version and warehouse adapter. Check the actual schema in your warehouse before building dashboards, especially for newer fields like bytes billed that may not exist in older versions.