
Multi-Client Agent Reporting Architecture

How to structure per-client isolation for OpenClaw reporting workflows — separate cron jobs, credential management at scale, failure containment, and the security tradeoffs of running multiple clients on a single machine.

Tags: automation, data engineering, ai

Running an OpenClaw reporting workflow for multiple clients requires deliberate isolation design. The jump from one client to several introduces questions the single-client setup doesn’t surface: preventing one client’s data from appearing in another’s report, handling credential expiration per-client, and containing failures to a single client rather than all of them.

One Cron Job Per Client, Full Stop

Each client gets its own isolated cron job. This isn’t a recommendation — it’s a requirement for correct behavior.

An isolated cron job runs in its own dedicated agent session, separate from every other job and from your active conversation. Client A’s data doesn’t touch Client B’s context. A failure in Client A’s credential doesn’t block Client B’s report from running. The jobs are independent, which is exactly what you want when each is handling a different client’s data.

A Monday morning workflow for three clients looks like this:

Terminal window
# Client A: GA4 + BigQuery, 6 AM
openclaw cron add --name "client-acme-weekly" \
--cron "0 6 * * 1" \
--tz "Europe/Paris" \
--session isolated \
--message "Run the weekly KPI report for Acme Corp. Pull GA4 sessions and conversions for the past 7 days. Query BigQuery for revenue metrics using ~/reports/acme/revenue-weekly.sql. Compare this week to last week. Format as a Slack summary and post to the channel." \
--announce \
--channel slack \
--to "channel:C0ACME_REPORT"
# Client B: Snowflake + dashboard, 6:10 AM
openclaw cron add --name "client-pvcp-weekly" \
--cron "10 6 * * 1" \
--tz "Europe/Paris" \
--session isolated \
--message "Run the weekly KPI report for PVCP. Query Snowflake for sessions and conversion metrics using ~/reports/pvcp/snowflake-weekly.sql. Format as a Slack summary and post to the channel." \
--announce \
--channel slack \
--to "channel:C0PVCP_REPORT"
# Client C: BigQuery only, 6:20 AM
openclaw cron add --name "client-xyz-weekly" \
--cron "20 6 * * 1" \
--tz "Europe/Paris" \
--session isolated \
--message "Run the weekly KPI report for XYZ. Query BigQuery for pipeline metrics using ~/reports/xyz/pipeline-weekly.sql. Format as a Slack summary and post to the channel." \
--announce \
--channel slack \
--to "channel:C0XYZ_REPORT"

Stagger the start times by 10 minutes. With reports that take around 5 minutes each, that spacing prevents overlap without requiring you to know each job's exact runtime in advance. Isolated sessions mean overlap wouldn't cause data contamination anyway, but running the jobs serially avoids competing for API rate limits.
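The staggering pattern generalizes: instead of hard-coding minute offsets, compute them from the client's position in an ordered list. A minimal sketch that prints each registration command for review rather than running it (the client names and the 10-minute interval are placeholders, and the flags mirror the commands above):

Terminal window

```shell
# stagger_minute N: start minute for the Nth client (0-indexed),
# spacing jobs 10 minutes apart within the 6 AM hour.
stagger_minute() {
  echo $(( $1 * 10 ))
}

# Print each cron registration instead of executing it, so a typo
# can't register three bad schedules at once.
i=0
for client in acme pvcp xyz; do
  m=$(stagger_minute "$i")
  echo "openclaw cron add --name client-${client}-weekly --cron \"${m} 6 * * 1\" --tz Europe/Paris --session isolated"
  i=$((i + 1))
done
```

Piping the output through `sh` would execute the registrations once the printed plan looks right.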

Failure Isolation: Why This Architecture Matters

If Client A’s GA4 OAuth token expires at 5 AM on Monday, that job fails. Client B and Client C get their reports on time, as scheduled. OpenClaw applies exponential backoff to the failed Client A job and retries it; you receive a notification about the failure.

Without per-client isolation, a single shared configuration would cascade: one credential failure could leave you without any reports for any client. With isolation, the blast radius of any single failure is exactly one client’s report.

See OpenClaw Cron Scheduler Mechanics for the details of how OpenClaw handles retries, backoff, and failure notification. The key point for multi-client setups is that each job has its own failure state, its own retry queue, and its own notification path.

The Credential Management Reality Check

This is where the architecture gets uncomfortable. Each client’s data sources require credentials:

  • GA4: OAuth tokens or service account keys
  • BigQuery: service account JSON files
  • Snowflake: username, password, account identifier
  • Looker: API credentials
  • Ad platforms: API tokens, OAuth tokens

OpenClaw stores configuration in plaintext files under ~/.openclaw/. On a machine running five clients’ reporting jobs, that’s five sets of credential files, all accessible to any process running as your user.

Infostealers — RedLine, Lumma, Vidar — have added OpenClaw’s config file paths to their target lists. Hudson Rock documented the first in-the-wild exfiltration of a complete OpenClaw configuration. If your machine is compromised through any vector (phishing, malicious browser extension, compromised dependency), every client’s warehouse credentials are exposed.

The risk isn’t hypothetical. See OpenClaw Security Risks — What’s Documented for the documented incidents. The relevant question for agency work isn’t “is this risky?” (it is) but “what mitigations are available?”

Credential Isolation Options

Environment variables instead of config files. Instead of storing credentials in ~/.openclaw/config/, pass them as environment variables. This doesn’t eliminate the risk (environment variables are also readable by processes running as your user), but it keeps credentials out of the specific file paths that infostealers target.
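One way to do this is a small helper that exports a per-client env file at invocation time, so credentials live in a path infostealers don't target. A sketch, assuming a POSIX shell; the `load_env` helper, the env-file path, and the variable names are illustrative, not an OpenClaw feature:

Terminal window

```shell
# load_env FILE: export every VAR=value line in FILE so that commands
# run afterwards inherit the credentials. This still doesn't protect
# against processes running as your user -- it only moves the material
# out of the targeted ~/.openclaw/ paths.
load_env() {
  set -a        # auto-export everything sourced below
  . "$1"
  set +a
}

# Usage (illustrative paths):
#   load_env /etc/reporting/acme.env
#   openclaw cron add ...
```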

Secrets manager integration. Pull credentials from a secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault) at job execution time rather than storing them locally. OpenClaw doesn't have native secrets manager integration as of early 2026, but you can wrap your CLI commands in a script that fetches credentials before executing:

Terminal window
# Fetch the key from GCP Secret Manager, then register the job in one step.
# The variable must still be present in the environment the scheduled job runs in.
export BQ_CREDENTIALS=$(gcloud secrets versions access latest --secret="client-acme-bigquery-key") && \
openclaw cron add ...

This adds complexity and depends on the agent correctly handling the credential injection, but it moves the sensitive material out of plaintext files.

Isolated VMs per client. The cleanest architecture: each client runs on a separate virtual machine with no shared credentials. Client A’s VM has only Client A’s credentials. A compromise of one VM doesn’t expose any other client’s data. This is what you’d implement if you were building this as a production service rather than a personal automation. The overhead (VM provisioning, maintaining five separate OpenClaw installations) is significant for a solo consultant.

Separate user accounts. Less isolation than VMs, more than a single shared environment. Each client gets a separate OS user account with its own home directory and its own ~/.openclaw/ path. A credential leak from one account doesn’t automatically expose another account’s credentials, assuming the compromise doesn’t escalate privileges.
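On a Linux host, the per-account setup can be scripted. This sketch prints the provisioning commands for review rather than executing them (the `provision_client` helper, the account naming, and the per-user report directory are illustrative; actually running the printed commands requires root):

Terminal window

```shell
# provision_client NAME: print the commands that would create an
# isolated OS account for one client's reporting jobs. Each account
# gets its own home directory, and therefore its own ~/.openclaw/.
provision_client() {
  echo "sudo useradd --create-home client-$1"
  echo "sudo -u client-$1 mkdir -p /home/client-$1/reports"
  echo "sudo -u client-$1 openclaw cron add ..."
}

provision_client acme
```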

None of these solutions are built into OpenClaw. You’re assembling the security architecture yourself.
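Whichever option you assemble, a cheap baseline is restricting the config tree to the owning user. A sketch, assuming the default ~/.openclaw layout and GNU coreutils; this blocks other local accounts but not malware already running as you:

Terminal window

```shell
# harden_tree DIR: owner-only permissions on a credential/config tree.
# Directories become 700, files 600. Does NOT protect against
# processes running as your own user.
harden_tree() {
  chmod 700 "$1"
  find "$1" -type d -exec chmod 700 {} +
  find "$1" -type f -exec chmod 600 {} +
}

# harden_tree ~/.openclaw
```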

Start Internal, Then Expand

The production-safe architecture for multi-client reporting requires meaningful engineering work. Before applying it to client-facing output, validate the workflow on internal reporting — your own team’s metrics, your own project data, nothing belonging to a client.

Internal use surfaces edge cases (wrong numbers, formatting issues, failed credential handling) before they affect clients, calibrates trust in the agent’s output accuracy, and establishes the credential management practices needed before client data is involved. A month of reliable internal reporting provides evidence about what actually works — the basis for deciding whether and which clients to include, given your current security posture.

Cost at Scale

Each weekly report consumes API tokens for the LLM call. A report that pulls from 2-3 data sources and generates a formatted summary costs roughly $0.50-2.00 per run, depending on how much data the agent processes and which model you’re using.

For five clients with weekly reports, that’s $10-40/month in API costs. Not prohibitive. Worth tracking as you add clients, since the costs compound faster than you might expect if clients start asking for additional reports (daily updates, mid-week check-ins).
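The monthly figure is just runs times cost: clients × roughly 4.33 weekly runs per month × cost per run. A sketch of the arithmetic, using the rough per-run estimates above:

Terminal window

```shell
# monthly_cost CLIENTS COST_PER_RUN: rough monthly API spend for
# weekly reports (~4.33 runs per client per month).
monthly_cost() {
  awk -v n="$1" -v c="$2" 'BEGIN { printf "%.2f\n", n * 4.33 * c }'
}

monthly_cost 5 0.50   # low end of the per-run estimate
monthly_cost 5 2.00   # high end
```

Swapping in daily runs (~30 per month) instead of 4.33 shows how quickly a "small" change in cadence multiplies the bill.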

See AI Tooling Cost for Solo Consultants for how these per-run costs fit into the broader AI tooling budget for an analytics consulting practice.