Slash commands are reusable, team-shareable prompts that turn multi-step Claude Code workflows into single invocations. Instead of writing a detailed prompt every time you want to generate tests or document a model, you type `/generate-tests mrt__sales__orders` and Claude executes a predefined workflow.

The commands live as Markdown files in `.claude/commands/` at your project root. Since they’re plain files in your repo, committing them to git means everyone on your team gets the same workflows, the same guardrails, the same quality standards.
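With the three commands covered below committed, the layout looks like this:

```
.claude/
└── commands/
    ├── generate-tests.md
    ├── document-model.md
    └── test-prompt.md
```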
## Anatomy of a Slash Command
A slash command has three parts: YAML front matter, an optional argument placeholder, and the prompt body.
```markdown
---
description: Generate comprehensive dbt tests for a model
allowed-tools: Read, Write, Bash(dbt:*)
argument-hint: [model_name]
---

Analyze $ARGUMENTS and generate appropriate tests:

1. Read the model SQL and existing schema.yml
2. Identify primary key columns → add unique + not_null
3. Find foreign key relationships → add relationships tests
4. Detect enum-like columns → add accepted_values
5. Find numeric ranges → add appropriate range tests

Update the schema.yml with new tests.
```

The description shows up when you type `/` in Claude Code, helping you find the right command.
The allowed-tools field restricts which tools Claude can use during execution. Read and Write for file operations, Bash(dbt:*) for any dbt CLI command. This is a security boundary — you can create commands that read but never write, or commands that execute dbt but never touch the filesystem directly.
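As a sketch, a hypothetical review-only command (the name and body are made up) would simply omit Write, so Claude can inspect models and run dbt but never edit a file:

```markdown
---
description: Review a model for SQL anti-patterns (read-only)
allowed-tools: Read, Bash(dbt:*)
argument-hint: [model_name]
---

Review $ARGUMENTS for anti-patterns and report findings. Do not modify any files.
```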
The argument-hint tells users what to pass. When you type `/generate-tests`, the hint `[model_name]` appears alongside the command, making it clear you should provide a model name.
$ARGUMENTS in the body gets replaced with whatever the user passes after the command name.
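For example, invoking the command with a model name substitutes it into the body before Claude sees the prompt:

```
/generate-tests mrt__sales__orders
        ↓
Analyze mrt__sales__orders and generate appropriate tests:
```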
## /generate-tests: Test Scaffolding
The test generation command analyzes an existing model and produces appropriate dbt tests:
```markdown
---
description: Generate comprehensive dbt tests for a model
allowed-tools: Read, Write, Bash(dbt:*)
argument-hint: [model_name]
---

Analyze $ARGUMENTS and generate appropriate tests:

1. Read the model SQL and existing schema.yml
2. Identify primary key columns → add unique + not_null
3. Find foreign key relationships → add relationships tests
4. Detect enum-like columns → add accepted_values
5. Find numeric ranges → add appropriate range tests

Update the schema.yml with new tests.
```

Save it as `.claude/commands/generate-tests.md`. Usage: `/generate-tests mrt__sales__orders`.
Claude reads the model’s SQL, identifies columns by their transformations, infers what tests make sense, and updates the schema.yml. It catches the obvious stuff — primary key tests, referential integrity, enum validations — which is exactly the kind of tedious work that’s easy to skip manually. The output is a solid starting point that requires review before committing.
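For illustration, the updated schema.yml for a hypothetical orders model might look something like this (column names and accepted values are made up; the actual output depends on the model’s SQL):

```yaml
version: 2

models:
  - name: mrt__sales__orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - relationships:
              to: ref('mrt__sales__customers')
              field: customer_id
      - name: order_status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'completed', 'returned']
```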
This command pairs naturally with TDD workflows: use `/generate-tests` on an existing model to retroactively add test coverage, then use the TDD pattern for new models.
## /document-model: Model Documentation
Documentation is always behind. The `/document-model` command tackles the tedious part — reading SQL and writing descriptions — so you can focus on reviewing and refining:
```markdown
---
description: Generate comprehensive dbt documentation for a model
allowed-tools: Bash(dbt:*), Read, Write
argument-hint: [model_name]
---

# Document Model: $ARGUMENTS

1. Read the model SQL file at models/**/$ARGUMENTS.sql
2. Identify all columns and their transformations
3. Check if schema.yml exists for this model
4. Generate or update the schema.yml with:
   - Model description explaining business purpose
   - Column descriptions explaining meaning and source
   - Appropriate tests for each column

## Documentation Standards

- Model descriptions: Start with "This model..." and explain the grain
- Column descriptions: Include data type, business meaning, and source
- Use consistent terminology from our data dictionary

Create/update the schema.yml file and show a summary of changes.
```

Save as `.claude/commands/document-model.md`. Usage: `/document-model int__ga4_sessions_sessionized`.
The documentation standards section is where team conventions get encoded. “Start with ‘This model…’” ensures consistency across models. “Include data type, business meaning, and source” ensures descriptions are actually useful, not just column name restatements.
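Applied to the sessions model above, those standards yield descriptions like the following sketch (the column and its lineage are hypothetical):

```yaml
version: 2

models:
  - name: int__ga4_sessions_sessionized
    description: >
      This model sessionizes GA4 events into one row per session,
      keyed on session_id.
    columns:
      - name: session_id
        description: "STRING. Unique session identifier, derived from user_pseudo_id and ga_session_id in the GA4 events source."
```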
For a deeper look at the documentation workflow — including the two-step codegen pattern and docs blocks for consistency — see dbt Documentation with Claude Code.
## /test-prompt: Prompt Validation
Before running a complex prompt on production models, validate it on a sandbox model:
```markdown
---
description: Test a prompt on a sandbox model before production use
allowed-tools: Read, Bash(dbt:*)
argument-hint: [prompt to test]
---

Run the following prompt on the sandbox model `test__sandbox__example`:

$ARGUMENTS

After execution, validate:

1. Output uses correct BigQuery syntax
2. All ref() and source() calls resolve
3. No SELECT * patterns
4. Document any edge cases or unexpected behavior

Report results. Do not modify any production models.
```

This is particularly useful when creating new slash commands. Test the prompt logic on a disposable model before deploying it across your project.
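Usage passes the entire prompt as the argument, e.g. (a made-up prompt):

```
/test-prompt "Deduplicate rows by order_id, keeping the latest updated_at"
```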
## Subagents: Specialized Commands
For complex, focused tasks, you can create subagent definitions in `.claude/agents/`. A subagent gets fresh context — no baggage from the current conversation — and focuses entirely on its task.
```markdown
---
name: sql-debugger
description: Specialized agent for debugging SQL and dbt issues
---

You are an expert SQL debugger specializing in BigQuery and dbt.

## Your Approach

1. **Gather Evidence First**
   - Read error logs completely
   - Check compiled SQL
   - Review recent git changes to the model

2. **Form Hypotheses**
   - List possible causes ranked by likelihood
   - Consider: data issues, schema changes, logic errors

3. **Test Systematically**
   - Isolate the problem with a minimal reproducing case
   - Use WHERE clauses to test on small data
   - Add CTEs to inspect intermediate results

4. **Fix and Verify**
   - Implement the minimum change to fix the issue
   - Add tests to prevent regression
   - Document the root cause
```

Invoke it with a prompt like: “Use the sql-debugger agent to investigate why `int__ga4__sessions_attributed` is returning NULL for `channel_grouping` on 5% of sessions.”
The subagent pattern is most useful when your main conversation is cluttered with other context and you need Claude to focus entirely on one problem. See Debugging dbt with Claude Code for more on when subagents help with debugging.
## Team Distribution
The real power of slash commands is team standardization. Commit `.claude/commands/` to git and every team member gets:
- The same test generation workflow
- The same documentation standards
- The same quality checks
- The same project context
New team members don’t need to learn the right prompts. They type / and see what’s available. The institutional knowledge lives in the commands, not in individual heads.
## Designing Good Commands
A few principles that make commands more reliable:
**Be specific about output format.** “Update the schema.yml with new tests” is better than “generate some tests.” Claude knows exactly what artifact to produce.

**Constrain with `allowed-tools`.** A documentation command probably doesn’t need Bash access. A test command probably doesn’t need Write access to non-YAML files. Tighter scopes mean fewer surprises.

**Include standards, not just steps.** “Model descriptions: Start with ‘This model…’” encodes a convention. Without it, Claude generates descriptions in whatever style feels natural for that conversation.

**Use $ARGUMENTS for the variable part only.** The workflow steps, quality standards, and constraints should be fixed. Only the model name or specific target should come from the user.

**Test new commands on sandbox models first.** Use the `/test-prompt` pattern — or just run the command manually on a non-critical model — before trusting it with production code.