
Lightdash in Production: Kubernetes Deployment

Moving Lightdash from Docker Compose to Kubernetes with the community Helm chart — production checklist, external dependencies, authentication options, and upgrade strategy.


The Docker Compose setup works well for evaluation and small teams. When you’re ready for production — multi-user access, reliable scheduling, horizontal scaling, and managed infrastructure — Kubernetes with the community Helm chart is the path.

The Helm chart is community-maintained (not officially from Lightdash). That’s worth knowing: it may lag behind Lightdash releases, and issues get fixed by contributors rather than the Lightdash team directly. In practice, the chart is actively maintained and widely used.

Installing via Helm

```shell
# Add the Lightdash Helm repository
helm repo add lightdash https://lightdash.github.io/helm-charts
helm repo update

# Install with a custom values file
helm install lightdash lightdash/lightdash \
  --namespace lightdash \
  --create-namespace \
  -f values.yaml
```

Start with the default values file from the chart repository and override what you need in your own values.yaml. Don’t override values inline with --set for more than one or two options — it becomes unmaintainable quickly.
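To get the chart's defaults as a starting point, you can dump them to a file and keep your own overrides minimal (standard Helm workflow; filenames here are illustrative):

```shell
# Dump the chart's default values to use as a reference
helm show values lightdash/lightdash > values-default.yaml

# After editing your own values.yaml, compare it against the defaults
diff values-default.yaml values.yaml
```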

A minimal values.yaml for a production deployment:

```yaml
image:
  tag: "latest"         # pin to a specific version in production

replicaCount: 2         # for availability, not load balancing — stateless replicas

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1000m"

config:
  lightdashSecret: ""   # override via secret, not here
  siteUrl: "https://lightdash.yourdomain.com"
  auth:
    google:
      clientId: ""      # from secret
      clientSecret: ""  # from secret

postgresql:
  enabled: false        # use external PostgreSQL

externalDatabase:
  host: "your-managed-postgres-host"
  port: 5432
  user: "lightdash"
  database: "lightdash"
  existingSecret: "lightdash-db-secret"
  existingSecretPasswordKey: "password"

browserless:
  enabled: true

scheduler:
  enabled: true
```

Sensitive values (LIGHTDASH_SECRET, database credentials, OAuth credentials) should live in Kubernetes Secrets, not in values.yaml. Reference them via existingSecret where the chart supports it, or mount them as environment variables from secret resources.
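As a sketch, the secrets referenced above might be created like this. The secret and key names match the `existingSecret` values in the example; the `lightdash-app-secret` name is an assumption for illustration:

```shell
# Database password consumed via existingSecret / existingSecretPasswordKey
kubectl create secret generic lightdash-db-secret \
  --namespace lightdash \
  --from-literal=password='<db-password>'

# Application secret, generated once and mounted as an env var
kubectl create secret generic lightdash-app-secret \
  --namespace lightdash \
  --from-literal=LIGHTDASH_SECRET="$(openssl rand -hex 32)"
```

How the app secret is wired into the pod (env var from secret vs. a chart value) depends on the chart version, so check the chart's templates before relying on a particular mechanism.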

Production Checklist

External PostgreSQL

Do not run PostgreSQL as a container in a production Kubernetes deployment. Use a managed database service: Cloud SQL (GCP), RDS (AWS), Azure Database for PostgreSQL. Managed databases give you:

  • Automated backups
  • Point-in-time recovery
  • Failover without manual intervention
  • No risk of losing application state when a pod restarts

Enable the uuid-ossp extension in your managed database before the first Lightdash startup. On Cloud SQL, this requires a superuser connection or a Cloud SQL Admin user.
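For example, connected as a sufficiently privileged user, the extension can be enabled once before first startup (connection parameters are placeholders):

```shell
# Run once against the Lightdash database before the first deployment
psql "host=your-managed-postgres-host dbname=lightdash user=postgres" \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'
```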

S3-Compatible Object Storage

Production Lightdash needs object storage for:

  • CSV exports (Cloud Pro only, but the bucket is configured at the infrastructure level)
  • Dashboard ZIP exports
  • Chart images used in Slack and email deliveries

Configure with any S3-compatible service: AWS S3, Google Cloud Storage (with S3 compatibility API enabled), MinIO for on-premises deployments.

In values.yaml:

```yaml
config:
  s3:
    endpoint: ""    # leave empty for AWS, set for GCS or MinIO
    accessKey: ""   # from secret
    secretKey: ""   # from secret
    bucket: "lightdash-prod"
    region: "eu-west-1"
```

For GCS, enable the S3-compatible interoperability API in your GCP project and use HMAC keys (not service account keys) for the access and secret key pair.
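With the gcloud CLI, an HMAC key pair bound to a service account can be created along these lines (the service account name and project are illustrative):

```shell
# Prints an access ID and secret to use as s3.accessKey / s3.secretKey
gcloud storage hmac create \
  lightdash-exports@your-project.iam.gserviceaccount.com

# The S3-compatible endpoint for GCS in values.yaml:
#   endpoint: "https://storage.googleapis.com"
```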

HTTPS

Lightdash must be served over HTTPS in production. Two common approaches in Kubernetes:

Load balancer with managed TLS: Your cloud provider’s load balancer (GCP HTTPS Load Balancer, AWS ALB) terminates TLS. Lightdash receives plain HTTP from the load balancer. Set TRUST_PROXY=true so Lightdash trusts the X-Forwarded-Proto header.

Ingress controller with cert-manager: An nginx Ingress controller with cert-manager handles TLS termination inside the cluster. cert-manager automates Let’s Encrypt certificate provisioning and renewal.

Either approach works. The load balancer path is simpler if you’re already running on a managed Kubernetes service with cloud load balancers.
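A sketch of the cert-manager path, assuming an nginx ingress class, a ClusterIssuer named `letsencrypt-prod`, and a Service named `lightdash` on port 8080 (service name and port depend on your chart release):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lightdash
  namespace: lightdash
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - lightdash.yourdomain.com
      secretName: lightdash-tls   # cert-manager creates and renews this
  rules:
    - host: lightdash.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: lightdash
                port:
                  number: 8080
```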

SMTP for Email Scheduling

Email-based scheduling requires an SMTP server:

```yaml
config:
  smtp:
    host: "smtp.sendgrid.net"
    port: 587
    secure: false    # STARTTLS upgrade on 587, not implicit TLS
    starttls: true
    auth:
      user: "apikey"
      pass: ""       # from secret
    from:
      name: "Lightdash"
      email: "lightdash@yourdomain.com"
```

Any SMTP provider works: SendGrid, Postmark, SES, or a self-hosted mail server. SES is the natural fit for organizations already on AWS; SendGrid and Postmark are common provider-agnostic choices.
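A quick way to verify the SMTP endpoint and STARTTLS negotiation from a workstation or debug pod, before debugging delivery inside Lightdash:

```shell
# Confirms the server answers on 587 and upgrades to TLS via STARTTLS;
# type QUIT to close the session
openssl s_client -connect smtp.sendgrid.net:587 -starttls smtp -crlf
```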

Authentication at Kubernetes Scale

Authentication options are the same as with Docker Compose, but at this scale the tier limitations matter more:

| Auth Method | Tier Required |
| --- | --- |
| Google OAuth | Open Source (free) |
| Okta (OIDC) | Cloud Pro ($2,400/mo) |
| Azure AD | Enterprise |
| Generic OIDC | Enterprise |
| SAML | Enterprise |
| SCIM 2.0 provisioning | Enterprise |

If your organization requires SSO via anything other than Google, you're not actually running the free self-hosted version — you're paying for Cloud Pro or Enterprise. At that point, the hosted Lightdash Cloud offering is worth comparing against the infrastructure cost of running your own Kubernetes cluster.

For organizations that are Google Workspace shops and can use Google OAuth, the open-source tier with Kubernetes covers most production use cases.

Upgrade Strategy

Lightdash ships at an aggressive cadence — multiple releases per week. Each upgrade may include database migrations. A safe upgrade pattern:

  1. Pin your image tag in values.yaml rather than using latest. Review the changelog before bumping the version.
  2. Upgrade during low-traffic windows. Database migrations run at pod startup and can hold locks for seconds to minutes.
  3. Monitor database migrations. Check pod startup logs immediately after upgrade. A failed migration leaves the database in a partial state.
  4. Keep one replica running during rolling updates if using replicaCount: 2. Kubernetes handles this by default with a rolling update strategy, but verify your strategy configuration:
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```

maxUnavailable: 0 ensures at least one pod is always serving requests during the upgrade.
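Putting the pattern together, a pinned upgrade might look like this (the `--atomic` flag rolls the release back automatically if pods fail to become ready):

```shell
# Refresh the repo index, review the changelog, then upgrade with a
# pinned image tag already set in values.yaml
helm repo update
helm upgrade lightdash lightdash/lightdash \
  --namespace lightdash \
  -f values.yaml \
  --atomic --timeout 10m
```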

Known Kubernetes-Specific Issues

The login form rendering issue (GitHub #10135) affects some Kubernetes deployments — the login page renders without the form inputs visible. This is typically caused by incorrect SITE_URL configuration or a missing TRUST_PROXY=true setting. Verify both in your configuration before debugging deeper.
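Both settings can be checked directly in a running pod; the `deploy/lightdash` name assumes the release and Deployment are named `lightdash`, so adjust to your release:

```shell
# Inspect the effective environment of a running Lightdash pod
kubectl exec -n lightdash deploy/lightdash -- \
  env | grep -E 'SITE_URL|TRUST_PROXY'
```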

Database migration lock contention is more visible in Kubernetes than in Docker Compose because Kubernetes may spin up a new pod before the old one fully terminates. The combination of two pods both attempting to run migrations at startup can cause lock timeouts. Monitor the migration logs from the first pod to complete startup before scaling replicas.
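To watch migration output during a rollout, and to recover from lock contention by serializing startup (Deployment name is an assumption, as above):

```shell
# Follow startup logs of the active pod during the rollout
kubectl logs -n lightdash deploy/lightdash -f --tail=100

# If migrations deadlock, scale to one replica, let it finish,
# then scale back up
kubectl scale -n lightdash deploy/lightdash --replicas=1
```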

Cost Comparison: Self-Hosted vs Cloud Pro

Running Lightdash on Kubernetes for a mid-size team requires:

  • A managed PostgreSQL instance (~$50-100/month)
  • GKE/EKS/AKS cluster nodes capable of running Lightdash pods (~$100-200/month for a small cluster)
  • Object storage and SMTP (minimal cost)
  • Engineering time to manage upgrades, monitor deployments, and troubleshoot issues

The Lightdash Cloud Pro tier costs $2,400/month with unlimited users and includes features not available in open source: scheduling, CSV exports, caching, AI features, priority support. Self-hosted Kubernetes is cost-effective when existing Kubernetes infrastructure is in place, the team has operational capacity, and open-source feature limitations are not blocking use cases.

See BI Tool Self-Hosting and Licensing for the full feature comparison across tiers.