dbt Labs published a set of official Agent Skills — Markdown files designed to be loaded into AI tools like Claude Code, Codex, Cursor, and Kilo Code before a coding session. They encode dbt best practices so the agent understands conventions before doing the work. The concept is the same as CLAUDE.md, but standardized and maintained by dbt Labs. Where a CLAUDE.md captures project-specific conventions, Agent Skills capture general dbt best practices.
What the Skills Cover
The published skills address the full analytics engineering workflow:
- Analytics engineering with dbt — the core loop: how to structure models, which layer to use, how to name things
- Unit testing — when to write unit tests, how to structure them, what the test grammar looks like
- Semantic layer construction — building MetricFlow semantic models and dimension definitions
- Natural language queries — using the dbt Semantic Layer to answer questions in plain language
- Error troubleshooting — diagnosing and resolving common dbt build failures
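To make the unit-testing item concrete: dbt's unit-test grammar (available since dbt 1.8) is declared in YAML, with fixture rows for each input and an expected result. A minimal sketch, with illustrative model and column names:

```yaml
unit_tests:
  - name: test_order_total_is_summed   # hypothetical test name
    model: int_orders                  # hypothetical intermediate model
    given:
      - input: ref('stg_order_items')  # fixture rows replace the real input
        rows:
          - {order_id: 1, amount: 10}
          - {order_id: 1, amount: 15}
    expect:
      rows:
        - {order_id: 1, order_total: 25}
```

The skill's job is to teach the agent when a test like this is worth writing and how to structure the fixtures, not to generate this exact file.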
Each skill is a Markdown file. You drop it into your project (typically alongside your CLAUDE.md or in a .claude/commands/ directory) and reference it in your prompt or include it in your Claude Code session context. The agent reads it as instruction before doing the work.
The Benchmark Numbers
dbt Labs ran their Skills against ADE-bench (the Analytics Data Engineering benchmark), a standardized evaluation of AI agent performance on realistic dbt tasks.
Without Skills: 56% accuracy on the benchmark. With Skills: 58.5% accuracy.
ADE-bench is designed to plateau — most tools cluster around 55–60% — and gains above that level are difficult to achieve. On a benchmark where the ceiling is stubborn, a consistent 2.5-percentage-point improvement is a meaningful signal.
Altimate AI published separate benchmarks for their open-source dbt and Snowflake skills: 19% improvement on real-world data tasks, and 22% faster SQL execution on TPC-H benchmarks. These numbers come from a different methodology (real-world tasks vs. standardized benchmark), so they’re not directly comparable, but they point in the same direction: skills files with explicit dbt context measurably improve agent output.
How Skills Differ from Project-Level Context
The distinction between Agent Skills and project-specific context (CLAUDE.md, slash commands) is worth understanding clearly.
Agent Skills encode what good dbt practice looks like in general. “When building an intermediate model, prefer CTEs over subqueries.” “Always add not_null and unique tests to primary key columns.” “Use the base__ prefix for models that directly select from sources.” This knowledge applies to any dbt project.
Project context (CLAUDE.md) encodes what your specific project does. “Our date dimension is at marts/common/dim_date. Don’t recreate it.” “We use double-underscore naming: model__column_name.” “Our BigQuery project is production-analytics.” This knowledge only applies to your project.
Both matter, and they’re additive. Skills prevent generic mistakes (using the wrong layer, adding tests in the wrong place). Project context prevents project-specific mistakes (recreating an existing asset, using the wrong naming convention).
The CLAUDE.md configuration pattern is how you add project-level context. Agent Skills are the tool-level complement — what dbt Labs thinks the agent should know about dbt before it touches your project.
Cross-Tool Compatibility
One underappreciated aspect of the Agent Skills approach is that the same Markdown files work across AI tools. Claude Code, Codex, Cursor, and Kilo Code can all consume them because they’re plain text — not tool-specific configuration formats.
This matters for teams using multiple tools. If you use Claude Code for deep development sessions and Cursor for in-the-flow edits, the same Skills files provide consistent dbt knowledge to both. The agent behaves consistently because it’s reading the same instructions, regardless of which tool is running.
The broader dbt-as-knowledge-base pattern extends this: the same project structure that feeds Skills can also feed the dbt MCP server, which exposes project metadata and the Semantic Layer to any MCP-compatible AI tool. Skills and MCP complement each other — Skills teach general practices, MCP provides project-specific data.
Limitations
Skills files are context, not guardrails. An agent that loads a Skills file can still produce wrong output — it has read the instructions, but it has no mechanism to verify that its output follows them. The failure modes from missing business context (wrong join types, incorrect assumptions about grain) aren't prevented by Skills files that cover structural conventions.
Skills files raise the floor, not the ceiling. An agent without dbt context makes more obvious structural mistakes. An agent with Skills files makes fewer, but still makes judgment calls that require knowledge of specific business logic and data that no generic Skills file can provide. Running dbt build, checking test results, and reviewing model logic remain necessary regardless of Skills configuration.
Getting Started
dbt Labs maintains the official Skills in the dbt documentation. Altimate’s open-source skills for dbt and Snowflake are available through their GitHub repository.
The practical setup: add the relevant Skills files to your project and reference them in your CLAUDE.md or initial prompt. For Claude Code specifically, the Claude Code Skills Activation note covers how the skills-loading mechanism works and how to write effective skill descriptions.
If you’re building on top of a well-documented dbt project — good model descriptions, documented columns, up-to-date schema YAML — the Skills files compound with that documentation. The agent reads your project’s documentation as context and the Skills files as instruction. Both together produce better output than either alone.