Snowflake charges by credit consumption, not by bytes processed. This makes cost monitoring conceptually different from BigQuery: you’re tracking how long warehouses run and how many credits they consume, not how much data each query scanned. The upside is predictability: a warehouse of a given size burns credits at a fixed hourly rate while it runs, so spend tracks warehouse size and uptime rather than data volume. The challenge is translation: most stakeholders don’t know what a Snowflake credit costs, so the monitoring needs to do the math.
Two system views cover most cost monitoring needs: SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY for per-query breakdowns and SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY for per-warehouse totals.
Per-Warehouse Query Breakdown
Start with QUERY_HISTORY for a detailed view of which warehouses are doing what:
```sql
SELECT
    warehouse_name,
    COUNT(*) AS query_count,
    ROUND(SUM(credits_used_cloud_services), 2) AS cloud_credits,
    ROUND(SUM(total_elapsed_time) / 1000 / 60, 1) AS total_minutes
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
  AND warehouse_name IS NOT NULL
GROUP BY warehouse_name
ORDER BY cloud_credits DESC
```

credits_used_cloud_services captures cloud services overhead — metadata operations, result caching lookups, and similar activities that consume credits without running on the warehouse itself. This is typically a small fraction of total cost but worth tracking separately.
total_elapsed_time is in milliseconds; dividing by 1,000 then 60 converts to minutes. The total minutes figure helps correlate credit consumption with actual runtime — a warehouse consuming 10 credits over 5 minutes is working harder than one consuming 10 credits over 2 hours.
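If you want that comparison in one result set, here is a minimal sketch (the 24-hour window and the join on warehouse name are illustrative choices): sum credits from WAREHOUSE_METERING_HISTORY and query minutes from QUERY_HISTORY over the same window, then divide.

```sql
-- Credits consumed per minute of query runtime, per warehouse, over the last day.
-- A high ratio means heavy work per unit of runtime; a low ratio over long hours
-- suggests idling or light queries.
WITH credits AS (
    SELECT warehouse_name, SUM(credits_used) AS credits
    FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
    WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
    GROUP BY 1
),
runtime AS (
    SELECT warehouse_name, SUM(total_elapsed_time) / 1000 / 60 AS query_minutes
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
    WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND warehouse_name IS NOT NULL
    GROUP BY 1
)
SELECT
    c.warehouse_name,
    ROUND(c.credits, 2) AS credits,
    ROUND(r.query_minutes, 1) AS query_minutes,
    ROUND(c.credits / NULLIF(r.query_minutes, 0), 3) AS credits_per_minute
FROM credits c
JOIN runtime r USING (warehouse_name)
ORDER BY credits_per_minute DESC
```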
Note that QUERY_HISTORY has a latency of up to 45 minutes. Monitoring jobs checking the last 24 hours will be slightly behind real-time.
Per-Warehouse Metering History
For a cleaner total cost view without per-query granularity, WAREHOUSE_METERING_HISTORY is more straightforward:
```sql
SELECT
    warehouse_name,
    ROUND(SUM(credits_used), 2) AS credits_used,
    ROUND(SUM(credits_used) * 3.00, 2) AS estimated_cost_usd
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_used DESC
```

The * 3.00 multiplier converts credits to estimated dollars using standard Snowflake credit pricing. Enterprise customers with negotiated rates should adjust this multiplier — the actual per-credit cost appears in your contract or billing console. Using the wrong multiplier won’t affect the relative ranking of warehouses (which one costs the most) but will give wrong absolute dollar figures in the alerts.
This query gives you a clean daily cost summary: “The TRANSFORM_WH consumed 18.2 credits ($54.60) yesterday. The REPORTING_WH consumed 4.1 credits ($12.30).” That’s the format a stakeholder can act on without understanding what a credit is.
Translating Credits for Non-Technical Audiences
For monitoring alerts reaching non-technical audiences, express costs in dollars rather than credits. “Yesterday’s Snowflake spend was $247, which is 2.3x the daily average of $107. The TRANSFORM_WH accounted for 68% of the cost” is more actionable than the equivalent in credit counts for finance, operations, or executive stakeholders.
When configuring an AI agent to deliver these summaries, add this explicitly to the skill:
```markdown
## Reporting for Non-Technical Stakeholders

When reporting costs to non-technical audiences:

- Convert credits to dollars using the contract rate ($3.00 per credit if unknown)
- Lead with the dollar figure, not the credit count
- Express anomalies as a ratio: "2.3x the daily average" rather than raw numbers
- Name the warehouse responsible for the largest share of cost
- Skip technical terminology (credits, warehouse size, compute types)
```

Anomaly Detection Pattern
The same rolling comparison approach that works for BigQuery cost anomalies applies to Snowflake. Compare today’s credit consumption against the 7-day average and flag when the ratio exceeds a threshold:
```sql
WITH daily_usage AS (
    SELECT
        DATE(start_time) AS usage_date,
        warehouse_name,
        SUM(credits_used) AS daily_credits
    FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
    WHERE start_time >= DATEADD('day', -8, CURRENT_TIMESTAMP())
    GROUP BY 1, 2
),

averages AS (
    SELECT
        warehouse_name,
        AVG(daily_credits) AS avg_daily_credits
    FROM daily_usage
    WHERE usage_date < CURRENT_DATE()
    GROUP BY 1
)

SELECT
    d.warehouse_name,
    d.daily_credits AS today_credits,
    ROUND(d.daily_credits * 3.00, 2) AS today_cost_usd,
    a.avg_daily_credits AS avg_credits,
    ROUND(d.daily_credits / NULLIF(a.avg_daily_credits, 0), 2) AS ratio_to_avg
FROM daily_usage d
JOIN averages a USING (warehouse_name)
WHERE d.usage_date = CURRENT_DATE()
ORDER BY ratio_to_avg DESC
```

This query returns one row per warehouse with today’s consumption, the recent average, and the ratio. Any ratio above 2.0 is worth flagging. Any ratio above 3.0 warrants immediate investigation.
The NULLIF(a.avg_daily_credits, 0) guard prevents a division-by-zero error when a warehouse’s prior-day average works out to zero. Note that a brand-new warehouse with no history before today is dropped by the inner join entirely, so it won’t appear in this report until its second day.
What Drives Anomalous Snowflake Costs
When the ratio spikes, these are the most common causes:
Warehouse auto-suspend misconfigured. A development warehouse left running all night. Check WAREHOUSE_METERING_HISTORY for overnight credit consumption — if the warehouse ran continuously without queries, auto-suspend wasn’t working.
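A follow-up sketch for this case, assuming the spiking warehouse is called TRANSFORM_WH (a placeholder name): list each hourly metering row overnight next to the number of queries that actually ran in that hour. Steady credits with zero queries means the warehouse never suspended.

```sql
-- Hourly credits vs. query counts for one warehouse over the last day
-- (warehouse name is a placeholder; substitute the one that spiked).
SELECT
    m.start_time,
    m.credits_used,
    COUNT(q.query_id) AS queries_in_hour
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY m
LEFT JOIN SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY q
    ON q.warehouse_name = m.warehouse_name
   AND q.start_time >= m.start_time
   AND q.start_time < m.end_time
WHERE m.warehouse_name = 'TRANSFORM_WH'
  AND m.start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY m.start_time, m.credits_used
ORDER BY m.start_time
```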
Query queuing on an undersized warehouse. A warehouse that’s too small for the workload leaves queries waiting in the queue, and a multi-cluster warehouse keeps spinning up extra clusters to absorb the backlog, so the credits consumed per unit of work go up. Check QUERY_HISTORY for queued_overload_time and queued_provisioning_time.
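A follow-up sketch, again using TRANSFORM_WH as a placeholder: pull the queries with the longest queue times on the spiking warehouse.

```sql
-- Queries with significant queue time on one warehouse over the last day.
-- Large queued_overload_time values suggest the warehouse is undersized for the workload.
SELECT
    query_id,
    ROUND(queued_overload_time / 1000, 1) AS queued_overload_s,
    ROUND(queued_provisioning_time / 1000, 1) AS queued_provisioning_s,
    ROUND(total_elapsed_time / 1000, 1) AS elapsed_s
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE warehouse_name = 'TRANSFORM_WH'
  AND start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
  AND queued_overload_time > 0
ORDER BY queued_overload_time DESC
LIMIT 20
```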
Full table scan replacing incremental. A dbt model that should run incrementally got run as a full refresh. If one warehouse’s ratio spikes without a change in query count, the individual query size changed. Check the bytes_scanned field in QUERY_HISTORY to find the outlier.
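A follow-up sketch, same placeholder warehouse name: rank yesterday’s queries by bytes scanned and look at the top of the list.

```sql
-- Largest scans on one warehouse over the last day.
-- A full-refresh run of an incremental model shows up as one query
-- scanning far more data than its usual runs.
SELECT
    query_id,
    LEFT(query_text, 80) AS query_preview,
    ROUND(bytes_scanned / POWER(1024, 3), 2) AS gb_scanned,
    ROUND(total_elapsed_time / 1000 / 60, 1) AS minutes
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE warehouse_name = 'TRANSFORM_WH'
  AND start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY bytes_scanned DESC
LIMIT 10
```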
New report or scheduled query added. Someone added a recurring report that runs every hour. The credit count went up because the job frequency increased. Check query count versus the average — if it’s significantly higher, frequency is the driver.
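A follow-up sketch: compare daily query counts per warehouse across the past week.

```sql
-- Daily query counts per warehouse over the past week.
-- A jump in count without a jump in per-query size points at a new or more frequent job.
SELECT
    DATE(start_time) AS usage_date,
    warehouse_name,
    COUNT(*) AS query_count
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -8, CURRENT_TIMESTAMP())
  AND warehouse_name IS NOT NULL
GROUP BY 1, 2
ORDER BY usage_date, warehouse_name
```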
For an AI agent investigating an anomaly, these causes map to specific follow-up queries. A well-written skill file can instruct the agent to run diagnostic follow-ups when the initial anomaly check fires, drilling into the specific warehouse that spiked.
Scheduling with OpenClaw
For a daily cost summary:
```bash
openclaw cron add --name "snowflake-daily-cost" \
  --cron "0 8 * * *" \
  --tz "Europe/Paris" \
  --session isolated \
  --message "Query Snowflake WAREHOUSE_METERING_HISTORY for yesterday's cost by warehouse. Convert credits to dollars at \$3.00 per credit. Flag any warehouse where today's spend exceeds 2x the 7-day average. Format the summary in plain language without technical jargon." \
  --announce \
  --channel slack \
  --to "channel:C1234567890"
```

The monitoring query itself should live in a skill file so the cron message stays readable. The cron message handles the high-level instruction (“summarize and flag anomalies”); the skill provides the SQL and the formatting guidance.
Unlike BigQuery, Snowflake doesn’t have a free tier for metadata queries — running ACCOUNT_USAGE queries consumes cloud services credits. The consumption is minimal (fractions of a credit per query), but worth noting if you’re monitoring a cost-sensitive environment.
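If you want to see what the monitoring itself costs, a rough sketch is to sum cloud services credits for queries whose text references ACCOUNT_USAGE (this also catches ad-hoc exploration, so treat it as an upper bound):

```sql
-- Approximate cloud services credits consumed by ACCOUNT_USAGE queries over the past week.
SELECT
    DATE(start_time) AS usage_date,
    ROUND(SUM(credits_used_cloud_services), 4) AS monitoring_cloud_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE query_text ILIKE '%ACCOUNT_USAGE%'
  AND start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY 1
ORDER BY 1
```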
Relationship to dbt Monitoring
If you’re running dbt on Snowflake, the WAREHOUSE_METERING_HISTORY approach gives you warehouse-level cost visibility. For model-level cost attribution — understanding which dbt model drove the cost spike — you’ll need to cross-reference QUERY_HISTORY with dbt’s query tags or job labels.
dbt on Snowflake can be configured to tag queries with model metadata using the query-comment setting in dbt_project.yml:
```yaml
query-comment:
  comment: "{{ {'app': 'dbt', 'dbt_model': model.name} | tojson }}"
  append: true
```

With this in place, QUERY_HISTORY.query_text will contain a JSON comment with the model name, allowing attribution queries that map Snowflake credits back to specific dbt transformations.
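A sketch of such an attribution query, assuming the comment format above is appended to each statement: it pulls the model name out of the JSON comment with a regular expression and uses elapsed time and bytes scanned as proxies for cost, since QUERY_HISTORY does not expose per-query warehouse compute credits.

```sql
-- Per-model query runtime yesterday, extracted from the appended dbt query comment.
-- Elapsed time and bytes scanned are proxies for each model's share of warehouse credits.
SELECT
    REGEXP_SUBSTR(query_text, '"dbt_model"\\s*:\\s*"([^"]+)"', 1, 1, 'e') AS dbt_model,
    COUNT(*) AS runs,
    ROUND(SUM(total_elapsed_time) / 1000 / 60, 1) AS total_minutes,
    ROUND(SUM(bytes_scanned) / POWER(1024, 3), 2) AS gb_scanned
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE query_text ILIKE '%dbt_model%'
  AND start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY 1
ORDER BY total_minutes DESC
```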