Docker Compose is the fastest way to run Lightdash locally or on a small VM. Three services are required: the Lightdash application itself, a PostgreSQL database for metadata storage, and a headless browser for generating chart images. Getting these three to talk to each other is the whole of the self-hosted setup story.
Lightdash is MIT-licensed, which means you can run it without fees, without user limits, and without any obligation to open-source your customizations. The self-hosted version is a real product — teams use it in production for small deployments. It is also deliberately incomplete: scheduling, smart caching, AI features, and CSV exports are gated behind the Cloud Pro tier. Understanding those limits before you deploy avoids surprises later. The licensing breakdown covers the feature split in detail.
Required Services
Lightdash Application
The main application runs on port 8080 by default (important — the documentation and some default configurations reference port 3000, which is wrong; see the gotchas section below).
PostgreSQL
Lightdash uses PostgreSQL for application state: user accounts, dashboard definitions, saved charts, and scheduled delivery configurations. Two requirements that catch people:
- The database must have the uuid-ossp extension installed. Standard PostgreSQL images ship it, but it needs to be enabled.
- Do not run PostgreSQL as a container in production. For a multi-person team, use a managed database service (Cloud SQL, RDS, whatever your cloud provider offers) and configure backups. A container database that loses its volume loses all your dashboard definitions.
For a local evaluation or proof of concept, a PostgreSQL container is fine.
Headless Browser (Browserless)
Lightdash uses a headless Chromium browser to render charts for two purposes: generating images in Slack messages when a scheduled report fires, and producing chart screenshots for email deliveries. Without this service, the application runs normally but scheduled deliveries won’t include visual chart snapshots.
The official configuration uses browserless (a Docker image that runs Chromium as a service). It is the most memory-intensive of the three services.
A Working Docker Compose Configuration
```yaml
version: '3'

services:
  lightdash:
    image: lightdash/lightdash:latest
    ports:
      - "8080:8080"
    environment:
      - PGHOST=postgres
      - PGPORT=5432
      - PGUSER=${PGUSER}
      - PGPASSWORD=${PGPASSWORD}
      - PGDATABASE=${PGDATABASE}
      - LIGHTDASH_SECRET=${LIGHTDASH_SECRET}
      - SITE_URL=${SITE_URL}
      - SCHEDULER_ENABLED=true
      - SECURE_COOKIES=true
      - TRUST_PROXY=true
      - BROWSER_ENDPOINT=http://browserless:3000
    depends_on:
      - postgres
      - browserless

  postgres:
    image: postgres:15
    environment:
      - POSTGRES_USER=${PGUSER}
      - POSTGRES_PASSWORD=${PGPASSWORD}
      - POSTGRES_DB=${PGDATABASE}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  browserless:
    image: browserless/chrome:latest
    ports:
      - "3000:3000"

volumes:
  postgres_data:
```

Launch with:
```sh
docker compose -f docker-compose.yml --env-file .env up --detach --remove-orphans
```

Environment Variables
All sensitive values go in .env. Never commit this file.
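As a quick sketch, the secret can be generated and appended to .env in one step (adjust the filename and path to your own layout):

```shell
# openssl rand -hex 32 emits 32 random bytes as 64 hex characters,
# satisfying the 32+ character requirement for LIGHTDASH_SECRET
echo "LIGHTDASH_SECRET=$(openssl rand -hex 32)" >> .env
```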
| Variable | Purpose |
|---|---|
| LIGHTDASH_SECRET | Encrypts sessions and data at rest. Must be 32+ characters. Generate with openssl rand -hex 32. |
| SITE_URL | The public-facing URL of your Lightdash instance. Used for generating links in notifications and emails. |
| PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE | PostgreSQL connection details. |
| SCHEDULER_ENABLED=true | Activates scheduled deliveries. Without this, the scheduling UI is hidden. |
| SECURE_COOKIES=true | Required when serving over HTTPS. Without it, session cookies won't work correctly behind TLS. |
| TRUST_PROXY=true | Required behind any reverse proxy (nginx, Caddy, a load balancer). Tells Lightdash to trust the X-Forwarded-* headers. |
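To illustrate the SECURE_COOKIES and TRUST_PROXY requirements, here is a minimal sketch of a Caddy reverse proxy in front of the container. The hostname is a placeholder; Caddy provisions TLS and sets the X-Forwarded-* headers by default, which is exactly what TRUST_PROXY=true expects:

```
lightdash.yourdomain.com {
    # Caddy terminates TLS here and forwards X-Forwarded-* headers
    # to the Lightdash container listening on 8080
    reverse_proxy localhost:8080
}
```

An nginx or load-balancer setup works the same way as long as it passes the X-Forwarded-* headers through.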
For authentication, the open-source tier supports Google OAuth only:
```
AUTH_GOOGLE_CLIENT_ID=your_client_id
AUTH_GOOGLE_CLIENT_SECRET=your_client_secret
```

Okta (OIDC) requires Cloud Pro. Azure AD, OneLogin, generic OIDC, SAML, and SCIM 2.0 are Enterprise-only. If your organization mandates SSO via anything other than Google, factor in the tier costs before committing to self-hosted.
A .env template:
```
# Generate with: openssl rand -hex 32
LIGHTDASH_SECRET=your_32_character_secret_here

# Your public URL (no trailing slash)
SITE_URL=https://lightdash.yourdomain.com

# PostgreSQL
PGHOST=postgres
PGPORT=5432
PGUSER=lightdash
PGPASSWORD=your_database_password
PGDATABASE=lightdash_db

# Google OAuth
AUTH_GOOGLE_CLIENT_ID=your_google_client_id
AUTH_GOOGLE_CLIENT_SECRET=your_google_client_secret
```

Known Gotchas
Port 3000 vs Port 8080
The Lightdash application runs on port 8080. The default SITE_URL in some documentation and sample configurations references port 3000. This mismatch causes authentication redirects to fail silently — OAuth callbacks go to the wrong port and users get error pages instead of a working login. Always verify your SITE_URL matches the port the container is actually listening on. GitHub issue #10893 tracks this.
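One way to catch this class of misconfiguration early is a Docker healthcheck against the port the container actually listens on. This is a sketch with two assumptions to verify against your image: that Lightdash exposes a health endpoint at /api/v1/health, and that the image ships wget (swap in curl or a Node one-liner if not):

```yaml
lightdash:
  image: lightdash/lightdash:latest
  healthcheck:
    # Marks the container unhealthy if nothing answers on 8080
    test: ["CMD-SHELL", "wget -qO- http://localhost:8080/api/v1/health || exit 1"]
    interval: 30s
    timeout: 5s
    retries: 3
```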
PostgreSQL uuid-ossp Extension
If you see errors on first startup about missing UUID functions, the extension isn’t enabled. The cleanest fix is to add it in an init script mounted into the PostgreSQL container:
```sql
-- init.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

```yaml
postgres:
  image: postgres:15
  volumes:
    - postgres_data:/var/lib/postgresql/data
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
```

Database Migrations During Upgrades
Lightdash ships multiple releases per week; version numbers had reached v0.2258.1 by December 2025. Each release may include database migrations, which the application runs automatically at startup. For small deployments this is fine, but on a shared database with active users, monitor for lock contention during the migration window and upgrade during low-traffic periods.
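Given that release pace, one way to keep migrations deliberate rather than accidental is to pin the image tag instead of tracking latest, and bump it explicitly during a maintenance window. The tag below is illustrative:

```yaml
lightdash:
  # Pinning a release means `docker compose pull` can't pull in an
  # unplanned migration; bump this tag deliberately when upgrading.
  image: lightdash/lightdash:0.2258.1
```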
The --remove-orphans Flag
Always include --remove-orphans in your compose up command. When Lightdash updates its Docker Compose spec (adding a service, removing a service, renaming a container), orphaned containers from the previous configuration will remain running without this flag. They don’t cause obvious errors but consume resources and can produce confusing behavior.
Sizing for Small Teams
A single GCP e2-medium instance (2 vCPU, 4GB RAM) or equivalent handles the load comfortably for a small team doing proof-of-concept and early production work. The headless browser is the most memory-hungry service, so if you’re running into memory pressure, that’s the first thing to constrain or move to a separate host.
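If the browser is the pressure point, a Compose-level memory cap is a simple first constraint. This is a sketch; the limit and the MAX_CONCURRENT_SESSIONS variable (a browserless setting, worth verifying against the image version you run) should be tuned to your instance size:

```yaml
browserless:
  image: browserless/chrome:latest
  # Cap Chromium's memory so a heavy render can't starve Lightdash or PostgreSQL
  mem_limit: 2g
  environment:
    # Fewer parallel browser sessions means lower peak memory
    - MAX_CONCURRENT_SESSIONS=2
```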
For anything beyond a small team — more than 10-15 active users, high-frequency scheduled reports, or multiple concurrent dashboard viewers — move to the Kubernetes deployment with proper horizontal scaling.
What Self-Hosting Includes and Excludes
The Docker Compose deployment includes: Explore queries, dashboard building, multi-model joins, Vega-Lite custom charts, user management, and basic scheduling (requires SCHEDULER_ENABLED=true and the browserless service).
Not available on self-hosted: CSV exports, smart caching, AI features (natural language querying, MCP Server), embedding via the React SDK, API access, and SSO beyond Google OAuth. The jump from free self-hosted to Cloud Pro ($2,400/month) is steep, with no intermediate tier.