The Dagster web UI is launched via `dagster dev` on `localhost:3000` for local development, or hosted on Dagster+ for production. It translates the asset-centric model — asset health, freshness, lineage — into visual, clickable surfaces. This note covers the core UI sections and what they mean for dbt + BigQuery workflows.
Asset Catalog
A searchable, filterable list of every asset in your project. This is the entry point for most day-to-day operations.
What you see for each asset:
- Materialization history (when it last ran, how long it took)
- Metadata (description, owners, tags, group)
- Health status (materialized, stale, failed, fresh)
- Asset check results (passed, failed, warning)
- Upstream and downstream dependencies
How analytics engineers use it:
- Filter by `group_name` to see all finance models, all marketing models, etc.
- Filter by `owners` to see assets owned by your team — useful for on-call scenarios
- Filter by tags to see all `daily`-tagged models or all `critical`-priority assets
- Check whether a specific model’s data is current before sharing a dashboard
For teams with dbt models mapped to Dagster assets, every model from your `manifest.json` appears here with its dbt metadata (tags, owners from `meta.dagster.owners`, descriptions from `schema.yml`). Your existing dbt documentation doubles as Dagster catalog metadata.
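How that metadata travels can be sketched in a few lines of stdlib Python. The node shape below (`description`, `tags`, `meta.dagster.owners`) mirrors dbt's real manifest layout, but the sample node and the `catalog_metadata` helper are illustrative, not Dagster's actual ingestion code:

```python
import json

# A trimmed manifest.json node, shaped like dbt's manifest output.
# (Illustrative values; a real manifest has many nodes.)
MANIFEST = json.loads("""
{
  "nodes": {
    "model.analytics.mrt__finance__revenue": {
      "name": "mrt__finance__revenue",
      "description": "Daily revenue by account, from schema.yml.",
      "tags": ["daily", "finance"],
      "meta": {"dagster": {"owners": ["team:finance"]}}
    }
  }
}
""")

def catalog_metadata(manifest: dict) -> dict:
    """Hypothetical helper: pull out the fields the Asset Catalog
    surfaces per model (description, tags, owners)."""
    out = {}
    for node in manifest["nodes"].values():
        out[node["name"]] = {
            "description": node.get("description", ""),
            "tags": node.get("tags", []),
            "owners": node.get("meta", {}).get("dagster", {}).get("owners", []),
        }
    return out

meta = catalog_metadata(MANIFEST)
print(meta["mrt__finance__revenue"]["owners"])  # ['team:finance']
```

The point is that nothing extra needs to be authored: the same `schema.yml` fields your dbt docs already use are what the catalog displays.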
Global Asset Lineage
The full dependency graph across all assets — not just dbt models, but Python assets, external sources, and any other Dagster-managed data. This is the graph view that makes cross-system dependencies visible.
Key capabilities:
- Overlay facets. Switch between views: owners, health status, automation conditions, computation type. “Which assets owned by the finance team are currently stale?” is one filter away.
- Cross-system visibility. The graph shows Fivetran-synced tables alongside dbt models alongside Python assets. A full-stack pipeline is visible end to end.
- Group collapse. Large projects can collapse asset groups to keep the graph manageable. Expand the `finance` group to see its internal dependencies; collapse `marketing` to treat it as a single node.
- Selective materialization. Click on an asset, select “Materialize,” and optionally include its downstreams. This is the “fix one model and cascade the refresh” pattern that dbt Cloud supports but simpler orchestrators don’t.
The lineage view is particularly powerful when debugging data quality issues. If a downstream dashboard shows unexpected numbers, you trace the lineage from the mart model back through intermediates to base models to raw sources, checking health badges at each level. The issue becomes visible as a red or stale badge on the specific asset where the problem originated.
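That trace can be expressed as a small upstream graph walk. The asset names, the `UPSTREAMS`/`HEALTH` dictionaries, and the helper below are an illustrative model of what the lineage view does visually, not a Dagster API:

```python
from collections import deque

# Hypothetical slice of an asset graph: asset -> its direct upstreams.
UPSTREAMS = {
    "dashboard_revenue": ["mrt__finance__revenue"],
    "mrt__finance__revenue": ["int__orders_enriched"],
    "int__orders_enriched": ["stg_orders", "stg_payments"],
    "stg_orders": ["raw_orders"],
    "stg_payments": ["raw_payments"],
}
HEALTH = {  # badge colors as you'd read them off the lineage view
    "mrt__finance__revenue": "stale",
    "int__orders_enriched": "stale",
    "stg_orders": "materialized",
    "stg_payments": "failed",   # the actual origin of the problem
    "raw_orders": "materialized",
    "raw_payments": "materialized",
}

def trace_origin(asset: str) -> list[str]:
    """Walk the lineage upstream breadth-first and collect every failed
    ancestor -- the red badges you'd spot tracing back from a dashboard."""
    seen, failed = set(), []
    queue = deque(UPSTREAMS.get(asset, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if HEALTH.get(node) == "failed":
            failed.append(node)
        queue.extend(UPSTREAMS.get(node, []))
    return failed

print(trace_origin("dashboard_revenue"))  # ['stg_payments']
```

In the UI you do this by eye, badge by badge; the value is that the stale marts point you to the single failed staging model rather than the other way around.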
Run Details
When you click into a specific run (a materialization, a scheduled job, or a manual execution), the Run Details page shows:
- Gantt chart. Execution timing for every asset in the run. You can see parallelism and identify bottlenecks. If `int__google_ads__daily_spend` takes 12 minutes while everything else takes 30 seconds, the Gantt chart makes that obvious.
- Structured event logs. Every event — materialization start, completion, check result, metadata emission — in chronological order with structured data. Richer than plain text logs because events are typed and filterable.
- Compute logs. The raw stdout/stderr from each asset’s execution. For dbt assets, this is the `dbt build` output with model-level compilation and execution details.
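The Gantt chart’s bar lengths are just start/finish deltas per asset. A minimal sketch, using a hypothetical event log shaped loosely like Dagster’s typed events:

```python
from datetime import datetime

# Hypothetical structured events: (event type, asset key, timestamp).
EVENTS = [
    ("STEP_START",   "stg_orders",                   "2024-05-01T06:00:00"),
    ("STEP_SUCCESS", "stg_orders",                   "2024-05-01T06:00:30"),
    ("STEP_START",   "int__google_ads__daily_spend", "2024-05-01T06:00:00"),
    ("STEP_SUCCESS", "int__google_ads__daily_spend", "2024-05-01T06:12:00"),
]

def step_durations(events):
    """Pair START/SUCCESS events per asset and compute wall-clock
    seconds -- the numbers the Gantt chart draws as bar lengths."""
    starts, durations = {}, {}
    for kind, asset, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "STEP_START":
            starts[asset] = t
        elif kind == "STEP_SUCCESS":
            durations[asset] = (t - starts[asset]).total_seconds()
    return durations

durations = step_durations(EVENTS)
bottleneck = max(durations, key=durations.get)
print(bottleneck)  # int__google_ads__daily_spend
```

Because the events are typed rather than free text, this kind of aggregation is a filter, not a log-parsing exercise.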
Failed run handling is where the UI differentiates from simpler tools. A failed run shows exactly which asset failed and why. One-click re-execution of just the failed assets and their downstreams avoids re-running the entire pipeline. Compare this to Cloud Run Jobs, where a failed `dbt build` means either re-running everything or manually constructing a `dbt run --select` command.
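The “failed assets plus their downstreams” selection is a graph closure. The `DOWNSTREAMS` map and helpers below are illustrative, though the dbt selector they emit uses real `model+` graph-operator syntax (model and everything downstream of it):

```python
# Hypothetical asset graph: asset -> its direct downstreams.
DOWNSTREAMS = {
    "stg_payments": ["int__orders_enriched"],
    "int__orders_enriched": ["mrt__finance__revenue"],
    "mrt__finance__revenue": [],
}

def retry_selection(failed):
    """Failed assets plus everything downstream of them -- the set
    Dagster's one-click re-execution targets instead of the whole run."""
    todo, selected = list(failed), set()
    while todo:
        node = todo.pop()
        if node in selected:
            continue
        selected.add(node)
        todo.extend(DOWNSTREAMS.get(node, []))
    return selected

def dbt_select_command(failed):
    """The equivalent selector you'd hand-build on Cloud Run Jobs."""
    return "dbt run --select " + " ".join(f"{m}+" for m in sorted(failed))

print(retry_selection({"stg_payments"}))
print(dbt_select_command(["stg_payments"]))  # dbt run --select stg_payments+
```

The UI computes this set for you; on a plain job runner, you are the one doing the traversal.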
Health Indicators
Color-coded badges on every asset provide at-a-glance health status:
| Badge | Meaning |
|---|---|
| Green (Materialized) | Asset has been materialized and all checks passed |
| Yellow (Stale) | Asset’s upstream dependencies have changed since last materialization |
| Red (Failed) | Last materialization or check failed |
| Blue (Fresh) | Asset meets its [[Dagster Freshness Policies and Scheduling\|freshness policy]] |
| Gray (Never materialized) | Asset exists in code but has never been materialized |
The health system is the practical manifestation of asset-centric orchestration. Instead of checking logs to determine whether your data is current, you look at badge colors. A dashboard with all-green badges means your data is current and passes quality checks. A yellow badge on a mart model means upstream data changed and the mart needs refreshing.
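The badge rules in the table reduce to a few timestamp comparisons. A simplified, illustrative sketch (the real staleness model also accounts for code versions and partitions):

```python
def badge(last_materialized, last_failed, upstream_changed_at, checks_passed):
    """Illustrative badge logic mirroring the table above.
    Timestamps are any comparable values; None means 'never'."""
    if last_materialized is None:
        return "gray"    # never materialized
    if last_failed is not None and last_failed > last_materialized:
        return "red"     # last materialization failed
    if not checks_passed:
        return "red"     # a check failed
    if upstream_changed_at is not None and upstream_changed_at > last_materialized:
        return "yellow"  # upstream changed since last materialization
    return "green"

# Using integers as stand-in timestamps:
print(badge(None, None, None, True))  # gray
print(badge(10, None, 12, True))      # yellow: upstream changed at t=12
print(badge(10, None, 5, True))       # green
```

Mechanical as it is, this is the shift from execution-centric to state-centric monitoring: the question “did the job run?” becomes “is this asset current?”.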
For analytics engineers used to dbt Cloud’s interface, the health badges are the closest equivalent to dbt Cloud’s “last run status” — but richer, because they track freshness and data state, not just execution success.
Dagster+ Pro Features
The managed Dagster+ offering (Pro tier) adds features relevant to analytics engineering teams on BigQuery:
BigQuery cost tracking per asset. Answers “which models cost the most to run?” by tracking per-asset BigQuery slot usage and bytes processed. For teams doing cost-aware scheduling on BigQuery, this is the data you need to identify expensive models and optimize them.
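The underlying arithmetic is simple: sum bytes processed per model and multiply by a rate. A sketch assuming each BigQuery job is labeled with its dbt model (the job records and the rate shown are illustrative; check current BigQuery pricing):

```python
# Hypothetical per-job records, as you might pull them from BigQuery's
# INFORMATION_SCHEMA.JOBS with a dbt-model label on each job.
JOBS = [
    {"model": "mrt__finance__revenue", "bytes_processed": 2 * 10**12},
    {"model": "mrt__finance__revenue", "bytes_processed": 1 * 10**12},
    {"model": "stg_orders",            "bytes_processed": 5 * 10**9},
]
ON_DEMAND_USD_PER_TIB = 6.25  # illustrative on-demand rate

def cost_per_model(jobs):
    """Aggregate bytes processed per model and convert to dollars."""
    totals = {}
    for job in jobs:
        totals[job["model"]] = totals.get(job["model"], 0) + job["bytes_processed"]
    tib = 2**40
    return {m: round(b / tib * ON_DEMAND_USD_PER_TIB, 2) for m, b in totals.items()}

costs = cost_per_model(JOBS)
```

What Dagster+ adds is doing this attribution continuously and surfacing it on the asset itself, so the expensive model is visible in the same place as its health and lineage.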
Column-level lineage. Goes beyond table-level dependencies to show which columns flow through which transformations. If `mrt__finance__revenue.total_amount` seems wrong, column-level lineage shows exactly which upstream columns contribute to it and through which transformations.
Catalog mode. A simplified UI designed for less-technical stakeholders who need visibility without the full engineering interface. Product managers, analysts, and data consumers can browse the asset catalog, check freshness, and see data descriptions without encountering Python code or orchestration details. This addresses the “observability for non-engineers” gap that many data platforms struggle with.
Alerts. Configure notifications to Slack, PagerDuty, or email when assets fail, go stale, or violate freshness policies. The alerting is asset-level, not pipeline-level — you get notified about the specific asset that needs attention, not just “the 6 AM job failed.”
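Asset-level alerting is essentially a fan-out over asset statuses rather than one job status. An illustrative sketch with hypothetical owners and statuses:

```python
# Hypothetical owner mapping, e.g. sourced from meta.dagster.owners.
OWNERS = {
    "mrt__finance__revenue": "team:finance",
    "stg_payments": "team:data-platform",
}

def build_alerts(statuses):
    """One alert per unhealthy asset, addressed to that asset's owner --
    rather than a single 'the 6 AM job failed' message."""
    alerts = []
    for asset, status in statuses.items():
        if status in ("failed", "stale", "freshness_violated"):
            alerts.append({
                "to": OWNERS.get(asset, "team:unknown"),
                "message": f"{asset} is {status}",
            })
    return alerts

alerts = build_alerts({
    "mrt__finance__revenue": "materialized",
    "stg_payments": "failed",
})
```

The practical difference: the page goes to the team that owns the broken staging model, not to everyone subscribed to the pipeline.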
Local Development vs. Production
During local development, `dagster dev` launches the full UI on `localhost:3000` with your project’s assets. This is useful for:
- Verifying that new dbt models appear correctly in the asset graph
- Testing materializations before deploying to production
- Debugging failed runs with access to compute logs
- Checking that asset checks from dbt tests are configured correctly
The local UI is functionally identical to the production UI. What you see locally is what you’ll see in Dagster+ after deployment, minus the Pro-tier features. This makes the development feedback loop tight: write code, run `dagster dev`, check the UI, iterate.