
BI Tool Self-Service Models

Three different approaches to self-service BI: governed exploration (Lightdash), visual query builder (Metabase), and LookML-powered Explore (Looker). How to match the model to your users.

Tags: dbt, analytics, data modeling

BI tools differ fundamentally in who defines what users can explore and how much latitude users have to create their own calculations. Three models predominate: governed exploration, visual query builder, and LookML-powered Explore. Each reflects a different trade-off between metric consistency and user flexibility.

Three Models

Governed Exploration (Lightdash)

Lightdash’s self-service model puts analytics engineers in control of what’s explorable. Business users open the Explore view and pick from a menu of pre-defined metrics and dimensions — but everything in that menu was curated by the analytics team in dbt YAML. You can’t create a calculation that isn’t already defined in the schema:

# What business users can explore in Lightdash
# is exactly what analytics engineers define here
models:
  - name: orders
    meta:
      label: "Orders"
    columns:
      - name: order_id
        meta:
          dimension:
            type: string
            label: "Order ID"
      - name: amount
        meta:
          metrics:
            total_revenue:
              type: sum
              label: "Total Revenue"
              description: "Sum of completed order amounts, excluding refunds"

The constraint is the feature. A business user exploring revenue by region by week can’t accidentally define “revenue” in a way that conflicts with the finance team’s definition, because the only “revenue” available is the one analytics engineers defined and reviewed in a pull request.

The Spotlight feature (shipped 2025) extends this with a metrics-first view: period-over-period comparisons, goal tracking, and trend analysis — all without any SQL knowledge required. The analytics team defines the metrics; Spotlight gives business users a way to interact with them that requires no technical vocabulary at all.

Who this works for: Teams where analytics engineers actively maintain the dbt project and where metric consistency is a genuine priority. The analytics team takes on curation responsibility — adding new dimensions, creating new metrics, updating YAML when business logic changes. In exchange, business users get guaranteed-consistent self-service.

Where it breaks down: If the analytics team is slow to add new metrics or dimensions, business users hit walls constantly. “I can’t explore X because it’s not in Lightdash yet” is a common complaint in teams where the curation cadence doesn’t match the business’s need to explore. The governed model requires an active, responsive analytics engineering function to work well.

Visual Query Builder (Metabase)

Metabase’s model is the opposite: minimize gatekeeping, maximize accessibility. The visual query builder lets any user create questions by selecting a table, filtering rows, and picking columns to show — no SQL, no concept of dimensions or measures, no upfront curation required.

The result is genuine democratization. A marketing manager can build their own campaign performance dashboard in 20 minutes. A product manager can slice signup data by cohort without filing a request. Many of Metabase’s 60,000+ company deployments exist precisely because this non-technical self-service happens in practice, not just in demos.

The trade-off is metric consistency. Each question contains its own calculation logic. Two analysts can create “total revenue” questions that produce different numbers on different dashboards, and nothing in the tool prevents it. There’s no centrally defined revenue that everyone references — there are individual query fragments that each analyst wrote.
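
To see the failure mode concretely, here is a minimal sketch in plain Python with invented data: two reasonable-looking “total revenue” definitions that disagree only because they embed different filters.

# Hypothetical example: two ungoverned "total revenue" questions over the
# same orders data. Neither is wrong on its own terms; they simply embed
# different filters, which is exactly what an ungoverned builder permits.
orders = [
    {"id": 1, "status": "completed", "amount": 100.0, "refunded": False},
    {"id": 2, "status": "completed", "amount": 40.0, "refunded": True},
    {"id": 3, "status": "pending", "amount": 25.0, "refunded": False},
]

# Analyst A: revenue = every completed order
revenue_a = sum(o["amount"] for o in orders if o["status"] == "completed")

# Analyst B: revenue = completed orders, excluding refunds
revenue_b = sum(
    o["amount"]
    for o in orders
    if o["status"] == "completed" and not o["refunded"]
)

print(revenue_a, revenue_b)  # 140.0 vs 100.0; both end up on dashboards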

Metabase’s community dbt-metabase plugin mitigates this partially. It syncs column descriptions and semantic types from dbt into Metabase, so users see business-friendly labels and understand which columns are IDs versus amounts versus text. But it doesn’t provide metric governance. Descriptions are documentation; they’re not enforcement.
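
For illustration, here is a rough conceptual sketch of what such a sync does. This is not the plugin’s actual code; it only reads column descriptions out of dbt’s manifest.json build artifact, which is the documentation the real plugin then writes into Metabase’s data reference via the Metabase API.

# Conceptual sketch only, not the dbt-metabase plugin's code.
# Collects column descriptions from dbt's manifest.json; the real plugin
# resolves each (model, column) pair to a Metabase field and updates its
# description through Metabase's admin API.
import json

with open("target/manifest.json") as f:  # written by dbt compile / build
    manifest = json.load(f)

descriptions = {}  # (model, column) -> description
for node in manifest["nodes"].values():
    if node["resource_type"] != "model":
        continue
    for column_name, column in node.get("columns", {}).items():
        if column.get("description"):
            descriptions[(node["name"], column_name)] = column["description"]

for (model, column), description in sorted(descriptions.items()):
    print(f"{model}.{column}: {description}")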

Who this works for: Teams where the primary users are non-technical — marketing, operations, product — and where the volume of ad-hoc questions would overwhelm a curation-based model. Also fits well when there’s no dbt project (Metabase works standalone with any data source), or when speed to value matters more than metric precision.

Where it breaks down: At scale, the lack of governed metrics creates a trust problem. The CFO presents a revenue number; the VP of Sales challenges it with a different number from a different dashboard. Both came from Metabase. Both look authoritative. Neither is definitively wrong — they just use different filters — and nobody can easily tell which one matches the finance team’s definition. Organizations that hit this problem either buy into proper metric governance (moving toward a tool with a semantic layer) or accept metric inconsistency as the cost of democratization.

LookML-Powered Explore (Looker)

Looker’s Explore is the most powerful of the three models — once LookML is set up correctly. LookML defines the full semantic graph: dimensions, measures, fan-out-safe aggregations, derived tables, access grants, and dimension hierarchies. A properly modeled LookML environment can handle multi-join analysis that breaks simpler tools, and the Explore interface lets users slice and pivot across that model with no SQL.

The catch is the upfront investment. LookML has a 6+ week learning curve for new developers, and a production-grade LookML project for a mid-sized data warehouse is a serious engineering effort. Gartner estimates organizations spend 40-60% of total Looker investment on LookML development and maintenance. That cost is front-loaded: high initial investment, but lower ongoing cost per new user once the modeling is done well.

Looker’s Explore is uniquely strong at symmetric aggregates — calculations that remain correct across fan-out joins where a naive SUM would double-count. If your data model has one-to-many relationships between tables (orders to line items, users to sessions), Looker’s symmetric aggregates handle this correctly where simpler tools silently produce wrong answers. For complex multi-table analysis at enterprise scale, this matters.
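
A minimal sketch of the underlying problem, in plain Python with invented data: one order joined to two line items, where a naive sum counts the order total twice.

# Fan-out illustration (invented data): one order, two line items.
orders = [{"order_id": 1, "order_total": 100.0}]
line_items = [
    {"order_id": 1, "item": "widget", "qty": 2},
    {"order_id": 1, "item": "gadget", "qty": 1},
]

# Joining orders to line_items produces one row per line item (the fan-out),
# so the order-level total appears on every joined row.
joined = [
    {**order, **item}
    for order in orders
    for item in line_items
    if item["order_id"] == order["order_id"]
]

naive_total = sum(row["order_total"] for row in joined)
print(naive_total)  # 200.0: the order's total was counted twice

# A symmetric aggregate counts each order's total exactly once, no matter
# how many line-item rows the join produced (conceptually a sum keyed on
# the order's primary key).
deduped = {row["order_id"]: row["order_total"] for row in joined}
print(sum(deduped.values()))  # 100.0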

Who this works for: Organizations with dedicated LookML developers, complex data models with multi-table joins, and enterprise governance requirements. Looker’s REST API 4.0 with SDKs in Python, Ruby, TypeScript, and Go also makes it the preferred choice for teams building custom integrations or embedded analytics products where the data model needs to be queryable programmatically.
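
As a rough sketch of that programmatic access (assuming the looker-sdk Python package, credentials supplied through the standard LOOKERSDK_* environment variables or a looker.ini file, and hypothetical model and explore names):

# Minimal sketch: run a governed Explore query through Looker's API 4.0.
# The model, view, and field names below are hypothetical.
import looker_sdk
from looker_sdk import models40 as models

sdk = looker_sdk.init40()  # authenticates using environment or looker.ini

results = sdk.run_inline_query(
    result_format="json",
    body=models.WriteQuery(
        model="ecommerce",  # hypothetical LookML model
        view="orders",  # hypothetical explore
        fields=["orders.created_month", "orders.total_revenue"],
        limit="500",
    ),
)
print(results)  # JSON rows computed with the LookML-defined measure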

Where it breaks down: High upfront cost and a slow path to value. Small teams without dedicated LookML developers find themselves with a powerful tool they can’t fully utilize. The self-service that Looker promises only materializes after the LookML investment is made and maintained — which means self-service depends on the analytics team staying current with LookML development.

Matching Model to Users

The right self-service model follows from who your users are, not which feature list is longest.

User type | Best model | Tool
Non-technical (marketing, ops, product) | Visual query builder | Metabase
Analytics-adjacent (analysts, PMs with SQL) | Governed exploration | Lightdash
Power users + complex joins | LookML Explore | Looker
Mixed audiences at enterprise scale | LookML Explore + self-service layer | Looker or Omni

Metabase is often criticized for metric inconsistency by teams that needed governed exploration. Lightdash is criticized for inflexibility by teams that needed an unstructured visual builder. Looker is criticized for complexity by teams that couldn’t fund LookML development. Each tool is designed for a specific user type; the criticism usually signals a mismatch between that type and the team’s actual users.

The Hybrid Approach

Many organizations end up running two tools in parallel: one for governed, consistent metrics (where the semantic layer enforces correctness), and one for unstructured exploration (where speed and flexibility matter more than precision).

A common pattern: Lightdash or Looker for the metrics that appear in board decks and financial reporting, Metabase for the long tail of ad-hoc questions that analysts field daily. The governed tool gets curated slowly and carefully. The free-exploration tool gets used constantly without governance overhead.

This isn’t necessarily a failure of consolidation — it’s an acknowledgment that different workflows need different tools. The BI Tool Selection Framework covers how to decide which tool serves which use case in your specific context.

Identifying which self-service model fits the user base is a prerequisite to tool selection.