This note covers the format conventions for agent-generated Slack KPI summaries and the approach to ensuring calculation reliability. Consistent output requires deliberate instruction; reliable math requires pre-calculating values in SQL rather than delegating arithmetic to the LLM.
The Template
This structure works well for weekly client-facing summaries:
```
📊 Weekly KPI Report — Acme Corp
Week of March 10-16, 2026

Sessions: 12,847 (↑ 8% vs. last week)
Conversions: 342 (↓ 3%)
Conversion rate: 2.66% (↓ 0.3pp)
Revenue: €47,291 (↑ 12%)
Avg order value: €138.28 (↑ 15%)

Top traffic sources:
1. Organic Search — 6,204 sessions (↑ 11%)
2. Paid Search — 3,102 sessions (→ flat)
3. Direct — 2,441 sessions (↓ 5%)

Notable: Revenue up despite lower conversion rate.
Higher AOV driven by spring campaign launch.
```

The design choices are explained below.
The Directional Arrow Convention
↑, ↓, and → give an instant visual read without requiring the reader to compare two numbers. The arrow communicates direction; the percentage quantifies magnitude. Together they answer the primary question (“did this go up or down?”) before the reader processes the specific numbers.
Using → flat instead of (0%) for near-zero changes acknowledges that a 0.3% change in a noisy metric isn’t meaningfully different from flat. It’s an editorial judgment that belongs in the format, not something to leave to the agent’s discretion on each run.
Define thresholds in your agent instructions:
```
Direction conventions:
- ↑ for increases greater than 1%
- ↓ for decreases greater than 1%
- → flat for changes between -1% and +1%
```

This prevents the agent from inventing its own threshold interpretation from run to run.
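The same convention can be sketched as a small helper — a hypothetical function, assuming percentage changes arrive as plain floats (8.0 for +8%):

```python
# Sketch of the direction convention above. The name and signature
# are illustrative, not part of any agent framework.
def direction_arrow(pct_change: float, threshold: float = 1.0) -> str:
    """Map a week-over-week percentage change to ↑ / ↓ / → flat."""
    if pct_change > threshold:
        return "↑"
    if pct_change < -threshold:
        return "↓"
    return "→ flat"
```

Keeping the threshold as a parameter makes it easy to tune per metric without rewriting the rule.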
Percentage Points vs. Percentages
The conversion rate line says ↓ 0.3pp, not ↓ 10%. This distinction matters.
If conversion rate goes from 2.96% to 2.66%, that’s a 0.3 percentage point decrease. It’s also a 10% relative decrease (0.3 / 2.96). Both are true. But “down 10%” sounds alarming in a way that “down 0.3 percentage points” doesn’t. For a rate metric, the absolute change (pp) is usually more informative to the person making decisions.
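The two framings can be checked with the numbers from the example above — a minimal arithmetic sketch:

```python
# Conversion rate moved from 2.96% to 2.66% (values from the example above).
prev_rate, curr_rate = 2.96, 2.66

pp_change = curr_rate - prev_rate              # absolute change in percentage points
relative_change = pp_change / prev_rate * 100  # relative change in percent

print(f"{pp_change:.1f}pp")       # -0.3pp
print(f"{relative_change:.1f}%")  # -10.1%
```

Both numbers describe the same movement; the pp figure is the one that belongs in the report for a rate metric.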
Using % for relative changes on volume metrics (sessions, revenue) and pp for absolute changes on rate metrics (conversion rate, click-through rate, bounce rate) is the standard in professional reporting. Build this distinction into your agent instructions explicitly, because LLMs will default to whichever phrasing sounds most natural in context — which is inconsistent.
```
Formatting rules:
- Use % for volume metric changes (sessions up 8%)
- Use pp for rate metric changes (conversion rate down 0.3pp)
```

The “Notable” Interpretation Line
The single-sentence interpretation at the bottom of the summary is the most valuable and most dangerous part of the template.
Valuable: it synthesizes individual metric changes into a single explanatory sentence — “Revenue up despite lower conversion rate, driven by higher AOV” — that would otherwise require a follow-up question.
Dangerous: LLMs infer patterns from timing and numbers without knowing the actual cause. An agent inferring “spring campaign launch drove AOV” is producing a plausible interpretation, not a verified one. Clients treat written interpretations as authoritative, especially in automated reports.
The “Notable” line should be treated as a draft and reviewed before client delivery.
An alternative that reduces risk: replace the open-ended “Notable” with a structured prompt:
```
If any metric changed by more than 15% week-over-week, explain it in one sentence.
If no metric changed by more than 15%, omit this section.
```

This limits the interpretation to cases where something genuinely notable happened, rather than generating an interpretation whether or not there’s anything meaningful to say.
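The 15% gate is easy to apply deterministically before the agent writes anything — a sketch with hypothetical metric names, assuming the agent already receives pre-computed week-over-week changes:

```python
# Hypothetical pre-computed week-over-week changes, in percent.
changes = {"sessions": 8.0, "conversions": -3.0, "revenue": 12.0, "aov": 15.4}

# Only metrics past the gate survive; an empty dict means the
# "Notable" section is omitted entirely.
notable = {name: pct for name, pct in changes.items() if abs(pct) > 15}
if notable:
    print("Notable:", notable)
```

Filtering in code means the agent only ever interprets metrics that actually crossed the threshold.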
Pushing Math into SQL
LLMs produce plausible-looking numbers but are not reliable calculators. The same input can produce different percentage values in different runs. Pre-calculating all values in SQL and having the agent narrate pre-computed results eliminates this variability.
Pre-calculate every number in the SQL query:
```sql
SELECT
  this_week.sessions,
  last_week.sessions AS sessions_last_week,
  ROUND(
    (this_week.sessions - last_week.sessions) * 100.0 / last_week.sessions, 1
  ) AS sessions_pct_change,
  this_week.conversions,
  ROUND(this_week.conversions * 100.0 / this_week.sessions, 2) AS conversion_rate,
  ROUND(last_week.conversions * 100.0 / last_week.sessions, 2) AS conversion_rate_last_week,
  ROUND(
    (this_week.conversions * 100.0 / this_week.sessions)
      - (last_week.conversions * 100.0 / last_week.sessions), 2
  ) AS conversion_rate_pp_change
FROM ...
```

The agent receives pre-calculated values and narrates them: “Sessions: 12,847 (↑ 8.0%)” rather than computing 8.0 from the raw numbers. See KPI Reporting via Direct Warehouse Queries for the full SQL approach.
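The pattern can be exercised end to end with an in-memory SQLite database — a minimal sketch with an invented `weekly` table and the example’s numbers, not the actual warehouse schema:

```python
import sqlite3

# Hypothetical single-row weekly aggregates; numbers chosen to
# reproduce the figures in the example template.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE weekly (week TEXT, sessions INTEGER, conversions INTEGER);
    INSERT INTO weekly VALUES ('this', 12847, 342), ('last', 11895, 352);
""")
row = conn.execute("""
    SELECT
        tw.sessions,
        ROUND((tw.sessions - lw.sessions) * 100.0 / lw.sessions, 1) AS sessions_pct_change,
        ROUND(tw.conversions * 100.0 / tw.sessions, 2) AS conversion_rate,
        ROUND(tw.conversions * 100.0 / tw.sessions
              - lw.conversions * 100.0 / lw.sessions, 2) AS conversion_rate_pp_change
    FROM (SELECT * FROM weekly WHERE week = 'this') AS tw,
         (SELECT * FROM weekly WHERE week = 'last') AS lw
""").fetchone()
print(row)  # (12847, 8.0, 2.66, -0.3)
```

Every number the agent will narrate — including the rounding — already exists in the result row.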
The same principle applies to the directional arrows. Add a column that the agent can use directly:
```sql
SELECT
  ...,
  CASE
    WHEN sessions_pct_change > 1 THEN '↑'
    WHEN sessions_pct_change < -1 THEN '↓'
    ELSE '→'
  END AS sessions_direction
FROM ...
```

The agent pastes ↑ 8.0% rather than deciding whether 8% qualifies as “up.”
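At that point the narration step is pure string formatting — a sketch with hypothetical column names, assuming the query returns one row per metric group:

```python
# Hypothetical row as returned by the pre-calculating query; the
# agent formats these values, it never recomputes them.
row = {"sessions": 12847, "sessions_pct_change": 8.0, "sessions_direction": "↑"}

line = f"Sessions: {row['sessions']:,} ({row['sessions_direction']} {row['sessions_pct_change']}%)"
print(line)  # Sessions: 12,847 (↑ 8.0%)
```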
Format Consistency Across Runs
The biggest formatting challenge with agent-delivered reports isn’t getting the format right the first time — it’s keeping it consistent week to week. Agents tend to drift. The summary that looks one way on week 1 might have different section ordering, different emoji, and different phrasing by week 4, because the model is generating text rather than following a rigid template.
Two approaches stabilize this:
Give the agent an example output. Include the full template as a literal example in your cron message or skill file. Agents pattern-match against examples more reliably than they follow abstract format descriptions. “Format the output like this: [paste the template]” outperforms “use arrows for directional changes and include a notable line.”
Store the template in a file. Put the format template in a Markdown file (~/reports/acme/summary-template.md) and instruct the agent to follow it: “Format the results following the structure in the summary-template.md file.” This also makes the template easy to iterate without touching the cron job.
See OpenClaw Skills for Monitoring for how the same example-template approach works for monitoring alert formatting — the same principles apply here.
What to Verify Before Sending to Clients
Even with pre-calculated SQL numbers and a rigid template, a few things warrant manual verification before client-facing reports go out at scale:
- The directional arrows match the percentage changes. An ↑ on a declining metric means something broke.
- The “Notable” line doesn’t contain a confident wrong interpretation. A quick scan is enough.
- The date range in the header matches the actual data. The agent sometimes miscomputes “last week” in the header text even when the SQL date logic is correct.
- Currency and unit formatting matches the client’s expectation. A client in France expecting euros doesn’t want to see dollar signs.
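The first check in the list — arrows agreeing with the numbers — is mechanical enough to script. A minimal sketch, assuming the report data is available as (name, arrow, percent-change) tuples, which is an invented structure for illustration:

```python
# Hypothetical parsed report lines: (metric, arrow, pct_change).
report_lines = [
    ("Sessions", "↑", 8.0),
    ("Conversions", "↓", -3.0),
    ("Revenue", "↑", 12.0),
]

def arrow_matches(arrow: str, pct: float) -> bool:
    """Check the arrow against the sign of the change, using the ±1% threshold."""
    if pct > 1:
        return arrow == "↑"
    if pct < -1:
        return arrow == "↓"
    return arrow == "→"

bad = [name for name, arrow, pct in report_lines if not arrow_matches(arrow, pct)]
assert not bad, f"arrow/value mismatch: {bad}"
```

Running a check like this before delivery catches the “↑ on a declining metric” failure automatically, leaving only the interpretive items for human review.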
Manual spot-checking adds time but reduces the risk of sending wrong numbers to clients. Building a brief scan into a weekly workflow is reasonable until the output has been validated at sufficient volume to trust at arm’s length.