Effective prompting for dbt work isn’t about magic phrases or elaborate system prompts. It’s about a single principle: eliminate assumptions. Every decision Claude has to make without guidance is a decision that might not match your intent. Good prompts don’t give Claude room to guess.
The flip side: learning to write effective prompts takes several weeks of real use. You need to hit the failures — the generic model that doesn’t match your conventions, the refactor that missed half the references, the test that checks the wrong thing — before you fully internalize what information Claude actually needs.
The Four Properties of Effective Prompts
1. Specificity About Outcomes
Don’t describe a process. Describe a result.
Weak: “Create an intermediate model that joins orders to customers.”
Strong: “Create int_orders__enriched that joins base__shopify__orders to base__shopify__customers on customer__id. Output grain: one row per order. Include all order columns plus customer__email, customer__country, and customer__first_order_date. Materialize as table. Primary key test on order__id.”
The second prompt specifies: the exact model name, the source models, the join key, the output grain, which columns to include, the materialization, and the test. Nothing is left to assumption.
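A prompt this specific translates almost directly into a model. Here is a sketch of what it would plausibly produce — the CTE names and config block are illustrative assumptions, not actual Claude output:

```sql
-- int_orders__enriched.sql (illustrative sketch)
{{ config(materialized='table') }}

with orders as (
    select * from {{ ref('base__shopify__orders') }}
),

customers as (
    select
        customer__id,
        customer__email,
        customer__country,
        customer__first_order_date
    from {{ ref('base__shopify__customers') }}
),

final as (
    -- Grain: one row per order
    select
        orders.*,
        customers.customer__email,
        customers.customer__country,
        customers.customer__first_order_date
    from orders
    left join customers
        on orders.customer__id = customers.customer__id
)

select * from final
```

Every line in the sketch traces back to a clause in the prompt, which is exactly the point: nothing here was Claude's decision.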
2. Explicit Constraints and Requirements
State what you don’t want as clearly as what you do want. Constraints are often more valuable than positive instructions.
Create a base model for source_shopify.orders.
Constraints:
- Deduplicate on order__id using ROW_NUMBER() QUALIFY, not a WHERE subquery
- Include _loaded_at and _source_table metadata columns
- Do NOT add any business logic (no order categorization, no revenue tiers)
- Do NOT use SELECT *
- Cast financial amounts as FLOAT64

The prohibition list prevents Claude from doing things that seem reasonable in the abstract but don’t match your project conventions. “Do NOT add any business logic” matters for base models because Claude sometimes adds CASE WHEN categorization to be helpful. You don’t want that in base.
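For concreteness, here is a sketch of a base model satisfying those constraints, assuming a BigQuery-style warehouse (hence QUALIFY and FLOAT64) and hypothetical source column names:

```sql
-- base_shopify__orders.sql (illustrative; column names are assumptions)
with source as (
    select
        order__id,
        order_status,
        cast(total_amount as float64) as total_amount,
        _loaded_at,
        _source_table
    from {{ source('shopify', 'orders') }}
),

deduplicated as (
    -- Keep the latest record per order__id via QUALIFY, not a WHERE subquery
    select *
    from source
    qualify row_number() over (
        partition by order__id
        order by _loaded_at desc
    ) = 1
)

select * from deduplicated
```

Note what is absent: no CASE WHEN categorization, no revenue tiers, no SELECT * against the raw source.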
3. References to Existing Codebase Patterns
“Follow the pattern in models/base/base_shopify__orders.sql” is more powerful than any amount of explaining what a good base model looks like. Claude reads the actual file and extracts the pattern — CTE structure, naming conventions, config block, test style — directly from your code.
This is why pattern replication is one of Claude Code’s strongest capabilities with dbt. The “one good example” principle: get one model right, reference it in subsequent prompts, let Claude replicate the pattern.
Create a singular test for mrt__product_performance following the pattern
in tests/assert_revenue_never_negative.sql

Create a macro for generating surrogate keys following the style in
macros/utils/generate_surrogate_key.sql

The referenced file does more instructional work than any description could.
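As an illustration, a singular test shaped like the referenced file might look like this (model and column names are hypothetical). In dbt, a singular test passes when the query returns zero rows:

```sql
-- tests/assert_product_revenue_never_negative.sql (illustrative)
-- Any row returned is a failure
select
    product__id,
    total_revenue_usd
from {{ ref('mrt__product_performance') }}
where total_revenue_usd < 0
```

Claude picks up this shape — the filename convention, the "select the bad rows" structure — from the example file, not from prose.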
4. Clear Definition of “Good” Results
Tell Claude what the output should be verifiable against. Not just “create a model” but “create a model that passes dbt build --select model_name+.”
Create an intermediate model int_customers__aggregated that:
- Takes base__shopify__orders as input
- Aggregates to one row per customer
- Includes: customer__id, total_orders, total_revenue_usd, first_order_date, last_order_date
- Runs successfully with dbt build --select int_customers__aggregated
Run dbt build to verify before finishing.

When Claude knows the success criterion is a green build, it closes the loop itself — running the command, seeing the error if there is one, fixing it, and running again. You get working code, not just syntactically plausible code.
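Given that spec, the resulting model should look roughly like the following sketch (source column names such as ordered_at and order_total_usd are assumptions):

```sql
-- int_customers__aggregated.sql (illustrative sketch)
with orders as (
    select
        customer__id,
        order__id,
        order_total_usd,
        ordered_at
    from {{ ref('base__shopify__orders') }}
),

aggregated as (
    -- Grain: one row per customer
    select
        customer__id,
        count(order__id) as total_orders,
        sum(order_total_usd) as total_revenue_usd,
        min(ordered_at) as first_order_date,
        max(ordered_at) as last_order_date
    from orders
    group by customer__id
)

select * from aggregated
```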
The Session Memory Problem
Claude Code has no memory between sessions. This one catches people who’ve had a good first session and assume the context persists.
This doesn’t work:
Like we discussed yesterday, update the revenue model to use the new
attribution logic

Claude has no idea what you discussed yesterday. It doesn’t know which revenue model, what attribution logic, or what “new” means. This prompt produces either a confused response or a confident but wrong one.
Every prompt has to be self-contained. Specify the model name, state the required changes, provide the context:
Update mrt__marketing__attributed_revenue to use last-touch attribution
instead of first-touch. Currently the model attributes all revenue to the
first session source. Change it to attribute to the last session source
before conversion. The relevant column is customer__attribution_source in
int__sessions__attributed.

Same request, completely different outcome.
CLAUDE.md as Project Memory solves some of this for persistent conventions — Claude reads the file at the start of every session and carries the project context. But CLAUDE.md handles conventions, not conversation history. The specific “what changed yesterday” context always has to be re-stated.
Anatomy of a Strong Prompt
For dbt model generation, a complete prompt has these components:
[Action] [target] [from sources] [following pattern].
[Grain]. [Column list]. [Constraints].
[Test requirements].
[Verification step].

Applied:
Create int_orders__pivoted from base__shopify__orders. Follow the patternin int_customers__aggregated.
Grain: one row per order.
Pivot order_status into boolean columns: is_pending, is_completed, is_cancelled.

Constraints:
- Materialize as table
- Use consistent CTE naming: source, pivoted, final
- No SELECT *
Tests: unique and not_null on order__id, not_null on all boolean columns.
Run dbt build --select int_orders__pivoted to verify.

This prompt is longer than “make me an orders model,” but the output requires no corrections. The time saved by fewer revision rounds more than offsets the extra prompt-writing time.
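The test requirements in that prompt map to standard schema tests in a dbt properties file. A plausible rendering (file path is an assumption):

```yaml
# models/intermediate/_int__models.yml (illustrative path)
models:
  - name: int_orders__pivoted
    columns:
      - name: order__id
        tests:
          - unique
          - not_null
      - name: is_pending
        tests:
          - not_null
      - name: is_completed
        tests:
          - not_null
      - name: is_cancelled
        tests:
          - not_null
```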
When to Write a Slash Command Instead
If you find yourself writing the same detailed prompt more than 2-3 times, that’s a signal it should be a slash command. The constraints, verification steps, and standards get encoded once in .claude/commands/ and become repeatable.
/generate-base-model source_stripe.charges

…replaces a 200-word prompt. The team inherits the same quality standards without learning what to include in every prompt. Slash commands are how good prompting practices scale beyond individual usage.
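A custom slash command is just a markdown file in .claude/commands/. A minimal sketch, assuming Claude Code’s $ARGUMENTS placeholder carries the source name passed on the command line:

```markdown
<!-- .claude/commands/generate-base-model.md (illustrative) -->
Create a base model for $ARGUMENTS.

Constraints:
- Deduplicate on the primary key using ROW_NUMBER() QUALIFY
- Include _loaded_at and _source_table metadata columns
- Do NOT add any business logic
- Do NOT use SELECT *

Run dbt build on the new model to verify before finishing.
```

The file encodes the same constraints and verification step as the long-form prompt, so every invocation inherits them.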
What Doesn’t Work
Generic intent:
“Make a model that aggregates orders”
Claude will produce something, but the grain, the column selection, the aggregation logic, and the output format will all be assumptions. Reviewing and correcting generic output takes longer than writing a specific prompt.
Reference to implicit context:
“Update the model we were fixing last week”
No memory, no context, no usable output.
Undefined “good”:
“Improve the documentation in models/marts/”
Improve how? More detail? Different style? Add missing columns? Cover which models? This produces arbitrary changes that may not align with your standards.
Leaving decisions undefined:
“Add tests to the revenue model”
Which tests? Schema tests or singular tests? On which columns? At what severity? A prompt like this gets you tests, but not necessarily the tests that matter for that model’s business logic.
The Underlying Principle
Claude Code handles implementation, not conceptualization. The thinking about what a model should do, what grain it should be at, what business logic it needs — that stays human. Claude converts your specification into files.
When a prompt fails, the root cause is almost always that the specification wasn’t complete. The fix is more specificity, not a different phrasing. Add the missing constraint, name the pattern to follow, define what “done” means. The output improves proportionally to how well the prompt eliminates assumptions.