Claude Code defaults to Sonnet, which handles most analytics engineering work well. But you can switch models mid-session, and knowing when to escalate to Opus makes a real difference in output quality for certain tasks.
Switching Models
The `/model` command shows available models and lets you switch:

```
/model
```

You can also launch Claude Code with a specific model:

```
claude --model opus
```

The switch takes effect immediately. Your conversation history carries over — you don’t lose context when changing models.
Sonnet: The Daily Driver
Sonnet is faster, cheaper (on API billing), and uses less of your subscription quota. For the majority of analytics engineering tasks, it produces output that’s indistinguishable from Opus:
- Base model generation — Pattern replication from existing models is well within Sonnet’s capabilities. It reads your conventions, applies them consistently, generates the YAML. See Base Model Generation with Claude Code.
- Test writing — Schema tests, uniqueness constraints, not-null checks. The pattern density is high and the conceptual complexity is low.
- Documentation — Column descriptions, model descriptions, docs blocks. Sonnet generates solid drafts that need minor editing.
- Simple debugging — Build errors with clear error messages, missing column references, broken `ref()` calls.
- Codebase exploration — “Explain this project structure” or “trace the lineage from source to mart” are reading-and-summarizing tasks that Sonnet handles cleanly.
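As a point of reference, the test-writing bullet above typically produces YAML along these lines (the model and column names here are illustrative, not from any particular project):

```yaml
# Hypothetical schema.yml — the kind of output a Sonnet
# test-writing prompt usually generates for a dimension model.
version: 2

models:
  - name: dim_customers        # illustrative model name
    description: "One row per customer."
    columns:
      - name: customer_id
        description: "Primary key for the customer dimension."
        tests:
          - unique
          - not_null
      - name: email
        tests:
          - not_null
```

Pattern density is high here: every column follows the same shape, which is exactly why Sonnet handles it well.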
If you’re getting started with Claude Code, stay on Sonnet. Don’t switch to Opus until you’ve seen a case where Sonnet’s output falls short.
Opus: The Reasoning Upgrade
Opus provides substantially more reasoning depth. The difference shows up in tasks that require holding multiple constraints simultaneously, tracing complex logic across many files, or producing output that requires genuine planning rather than pattern application.
Switch to Opus for:
- Complex incremental model debugging — When an incremental model produces wrong results and the issue involves the interaction between the incremental predicate, the unique key, and the merge strategy, Opus traces through the logic more reliably. Sonnet sometimes misses edge cases in multi-step reasoning about how `is_incremental()` interacts with `unique_key` and `merge_update_columns`.
- Refactoring nested macro logic — Jinja macros that call other macros, with conditional logic and `varargs`, require the model to hold the full call chain in memory while reasoning about the refactor. Opus maintains this context more accurately.
- Multi-file architectural decisions — When you ask Claude to restructure a group of models (splitting a complex mart into intermediate layers, redesigning a join strategy across a DAG section), Opus produces more coherent plans because it reasons about the dependencies more thoroughly.
- Subtle data quality investigations — “5% of sessions have NULL attribution and we don’t know why” requires the model to form hypotheses, trace data through multiple transformations, and identify which step introduces the problem. Opus’s additional reasoning depth matters here.
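To make the first bullet concrete, here is a minimal sketch of an incremental model where those three constraints interact (table and column names are hypothetical):

```sql
-- Hypothetical dbt incremental model. Debugging wrong results here
-- means reasoning about three things at once: the is_incremental()
-- predicate (which rows get scanned), unique_key (which rows the
-- merge matches), and merge_update_columns (which columns the merge
-- is allowed to overwrite).
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='session_id',
    merge_update_columns=['status', 'updated_at']
) }}

select
    session_id,
    status,
    updated_at
from {{ ref('stg_sessions') }}

{% if is_incremental() %}
  -- Only scan rows newer than what already exists in the target.
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}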
The Cost-Speed Tradeoff
On API billing, Opus costs roughly 5x more per token than Sonnet and is noticeably slower. A base model generation that takes 30 seconds on Sonnet might take 90 seconds on Opus, with identical output quality.
On a subscription plan, the tradeoff is quota usage rather than direct cost. Opus consumes more of your usage allowance per interaction.
The practical rule: default to Sonnet, escalate to Opus when Sonnet’s output isn’t good enough. Don’t preemptively use Opus because a task “seems hard.” Try it on Sonnet first. If the output misses constraints, loses track of dependencies, or doesn’t hold the full problem in context, switch to Opus and re-prompt.
This is different from choosing between models in the API for a one-shot task. In Claude Code, you can try Sonnet, evaluate the result in seconds, and escalate to Opus in the same session. The cost of trying the cheaper model first is nearly zero.
Model Selection for Specific Workflows
| Task | Recommended Model | Why |
|---|---|---|
| Base model generation | Sonnet | Pure pattern replication |
| Schema test writing | Sonnet | High pattern density |
| Column documentation | Sonnet | Reading + summarizing |
| Simple build error debugging | Sonnet | Clear error → clear fix |
| Complex data issue investigation | Opus | Multi-step reasoning required |
| Incremental model design | Opus | Multiple interacting constraints |
| Macro refactoring | Opus | Nested logic, call chain tracking |
| Cross-project refactoring plan | Opus | Architectural reasoning |
| Prompt iteration | Sonnet first | Escalate if output quality insufficient |
A Practical Pattern
Some analytics engineers develop a two-pass workflow for complex tasks:
- Sonnet pass: Generate the initial implementation. Fast, cheap, gets the structure right.
- Opus review: Switch to Opus and ask it to review the Sonnet-generated code. “Review this incremental model for edge cases in the merge strategy.” Opus catches issues that Sonnet introduced.
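In a session, the two passes might look like this (the prompt wording is illustrative, not required):

```
/model sonnet
> Generate an incremental model for sessions, following the existing conventions.
  [inspect the generated SQL]
/model opus
> Review this incremental model for edge cases in the merge strategy.
```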
This combines Sonnet’s speed for generation with Opus’s depth for review. The total cost is lower than doing everything on Opus, and the output quality is often higher because the review step is explicitly framed as finding problems rather than generating from scratch.