One `docker compose up` away from a full observability platform. Traces, logs, metrics: collected, batched, stored, visualized. Multi-tenant ready. Zero hardcoded secrets. 16 env vars, fail-fast on every one.
```
┌─────────────────────────────────────────────────────────────────┐
│                       YOUR APPLICATIONS                         │
│                 (any language / any OTel SDK)                   │
│                                                                 │
│         DSN: http://<token>@localhost:14318?grpc=14317          │
│         Header: uptrace-dsn                                     │
└───────────────┬───────────────────────────────┬─────────────────┘
                │ gRPC :4317                    │ HTTP :4318
                ▼                               ▼
┌─────────────────────────────────────────────────────────────────┐
│                 OTel Collector (contrib 0.123.0)                │
│                                                                 │
│  ┌──────────────────────────┐  ┌─────────────────────────────┐  │
│  │     CLIENT PIPELINE      │  │       SYSTEM PIPELINE       │  │
│  │                          │  │                             │  │
│  │  receivers:              │  │  receivers:                 │  │
│  │    otlp (gRPC + HTTP)    │  │    hostmetrics (7 scrapers) │  │
│  │                          │  │    postgresql/uptrace       │  │
│  │  processor:              │  │    redis/uptrace            │  │
│  │    batch/clients         │  │    httpcheck/self           │  │
│  │    10K batch / 10s       │  │    prometheus/self          │  │
│  │    metadata_keys:        │  │                             │  │
│  │      [uptrace-dsn]       │  │  processors:                │  │
│  │    ── tenant-aware ──    │  │    batch/system (10K / 10s) │  │
│  │                          │  │    resourcedetection        │  │
│  │  exporter:               │  │                             │  │
│  │    otlp/clients          │  │  exporters:                 │  │
│  │    (headers_setter ext)  │  │    otlp/system              │  │
│  │    forwards uptrace-dsn  │  │    prometheusremotewrite    │  │
│  │    from request context  │  │    (static DSN header)      │  │
│  └────────────┬─────────────┘  └──────────────┬──────────────┘  │
│               │                               │                 │
└───────────────┼───────────────────────────────┼─────────────────┘
                │                               │
                ▼                               ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Uptrace (2.1.0-beta)                       │
│                                                                 │
│           UI · Alerting · Service Graph · Sourcemaps            │
│                                                                 │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │  ClickHouse  │  │  PostgreSQL  │  │    Redis     │           │
│  │  25.8.15.35  │  │  17.9-alpine │  │ 8.6.1-alpine │           │
│  │              │  │              │  │              │           │
│  │  traces      │  │  users       │  │  cache       │           │
│  │  logs        │  │  orgs        │  │  sessions    │           │
│  │  metrics     │  │  projects    │  │              │           │
│  │              │  │  dashboards  │  │              │           │
│  │  ZSTD(1)     │  │  alerts     │  │              │           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
└─────────────────────────────────────────────────────────────────┘
```
The OTel Collector runs two completely separate pipelines:

| Aspect | Client Pipeline | System Pipeline |
|---|---|---|
| Purpose | App telemetry from your services | Infrastructure self-monitoring |
| Auth mechanism | `headers_setter/clients` extension forwards `uptrace-dsn` from incoming request metadata | Static `uptrace-dsn` header injected at the exporter level |
| Batch processor | `batch/clients` with `metadata_keys: [uptrace-dsn]` | `batch/system` (no metadata keys) |
| Pipelines | `traces`, `logs`, `metrics/clients` | `metrics/system-otlp`, `metrics/system-prometheus` |

Why separate? System scrapers (hostmetrics, postgresql, redis) generate telemetry internally; there is no incoming HTTP/gRPC request carrying a DSN token, so they cannot use `headers_setter`.
`batch/clients` declares `metadata_keys: [uptrace-dsn]`, which means the collector creates separate batches per unique DSN value. This prevents cross-tenant data mixing when multiple projects send telemetry through the same collector.
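In `otel-collector.yaml` terms, the tenant-aware half of this wiring looks roughly like the following. This is an illustrative sketch built from standard collector-contrib options, not a verbatim copy of this repo's file; in particular the `otlp/clients` endpoint is an assumption:

```yaml
extensions:
  headers_setter/clients:
    headers:
      - action: upsert
        key: uptrace-dsn
        from_context: uptrace-dsn   # copied from the incoming request metadata

receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true      # keep request metadata so batch/headers_setter can read it
      http:
        include_metadata: true

processors:
  batch/clients:
    send_batch_size: 10000
    timeout: 10s
    metadata_keys: [uptrace-dsn]    # one batch stream per distinct DSN value

exporters:
  otlp/clients:
    endpoint: uptrace:14317         # assumed in-network address
    tls:
      insecure: true
    auth:
      authenticator: headers_setter/clients   # re-attaches uptrace-dsn on the outgoing request
```

The key detail: `headers_setter` plugs into the exporter through the `auth` hook, so the DSN a client sent in is replayed verbatim toward Uptrace.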
All 16 environment variables use Docker Compose's `${VAR:?}` syntax. If any variable is missing or empty, `docker compose up` refuses to start: no silent failures, no half-running stacks.
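As a concrete example, the PostgreSQL service's environment block might look like this (an illustrative excerpt; the error messages after `:?` are an assumption):

```yaml
services:
  postgres:
    image: postgres:17.9-alpine
    environment:
      # ${VAR:?msg} makes Compose abort with msg if VAR is unset or empty
      POSTGRES_USER: ${POSTGRESQL_USER:?POSTGRESQL_USER is required}
      POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD:?POSTGRESQL_PASSWORD is required}
      POSTGRES_DB: ${POSTGRESQL_DATABASE:?POSTGRESQL_DATABASE is required}
```

Running `docker compose config` with an incomplete `.env` surfaces the first missing variable immediately, before any container starts.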
- PostgreSQL port is not exposed to the host; it is only accessible within the Docker network
- Redis and ClickHouse are internal-only as well
- Only the OTel Collector OTLP ports (gRPC + HTTP) and the Uptrace UI are exposed
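In `docker-compose.yml` this policy means `ports:` entries exist only on the collector and Uptrace services; a sketch (service names and the Uptrace port mapping are taken from this template, the exact layout may differ):

```yaml
services:
  otelcol:
    ports:
      - "${OTLP_COLLECTOR_GRPC}:4317"   # OTLP gRPC, host-reachable
      - "${OTLP_COLLECTOR_HTTP}:4318"   # OTLP HTTP, host-reachable
  uptrace:
    ports:
      - "${UPTRACE_HTTP}:14318"         # UI + HTTP ingest
      - "${UPTRACE_GRPC}:14317"         # gRPC ingest
  # postgres, redis, clickhouse: no `ports:` section at all;
  # they are reachable only by service name inside the Compose network
```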
| Component | Image | Role |
|---|---|---|
| Uptrace | `uptrace/uptrace:2.1.0-beta` | Observability UI, alerting, service graph, sourcemaps |
| OTel Collector | `otel/opentelemetry-collector-contrib:0.123.0` | Telemetry ingestion, routing, batching (6 receivers, 5 pipelines) |
| ClickHouse | `clickhouse/clickhouse-server:25.8.15.35` | Columnar storage for traces, logs, metrics (ZSTD(1) compression) |
| PostgreSQL | `postgres:17.9-alpine` | Metadata storage (users, orgs, projects, dashboards, alerts) |
| Redis | `redis:8.6.1-alpine` | In-memory cache, session storage |
```
.
├── docker-compose.yml    # 5 services, 16 env vars, healthchecks on all
├── otel-collector.yaml   # Collector config: 6 receivers, 3 processors, 3 exporters, 5 pipelines
├── uptrace.yml           # Uptrace server config: ClickHouse schema, storage policies, seed data
├── .env                  # Your secrets (not committed; see .env.example below)
└── README.md
```
```shell
git clone https://github.com/thumbrise/uptrace-template-basic.git
cd uptrace-template-basic
```

Create your `.env`:

```shell
cat > .env << 'EOF'
# Uptrace
UPTRACE_SECRET=your-secret-key-change-me
UPTRACE_HTTP=14318
UPTRACE_GRPC=14317
UPTRACE_ADMIN_EMAIL=admin@example.com
UPTRACE_ADMIN_PASSWORD=changeme
UPTRACE_SYSTEM_PROJECT_TOKEN=system_project_token_changeme

# ClickHouse
CLICKHOUSE_USER=uptrace
CLICKHOUSE_PASSWORD=uptrace
CLICKHOUSE_DATABASE=uptrace

# PostgreSQL
POSTGRESQL_USER=uptrace
POSTGRESQL_PASSWORD=uptrace
POSTGRESQL_DATABASE=uptrace

# Redis
REDIS_USERNAME=default
REDIS_PASSWORD=redispass

# OTel Collector ports
OTLP_COLLECTOR_GRPC=4317
OTLP_COLLECTOR_HTTP=4318
EOF
```

Start the stack:

```shell
docker compose up -d
```

All 5 services have healthchecks. The OTel Collector waits for Uptrace to be healthy before starting.
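That startup ordering is expressed with `depends_on` + `condition: service_healthy`. A sketch of the relevant fragment; the actual probe command in this repo's `docker-compose.yml` may differ, so treat the `test:` line as an assumption:

```yaml
services:
  uptrace:
    healthcheck:
      # assumed probe: any cheap HTTP request against the UI port works
      test: ["CMD", "wget", "-qO-", "http://localhost:14318/"]
      interval: 5s
      timeout: 3s
      retries: 10
  otelcol:
    depends_on:
      uptrace:
        condition: service_healthy   # collector starts only after Uptrace passes its healthcheck
```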
Open http://localhost:14318 and log in with the email and password from your `.env`.
Point any OpenTelemetry SDK at the collector:
| Protocol | Endpoint |
|---|---|
| gRPC | localhost:4317 |
| HTTP | localhost:4318 |
Set the `uptrace-dsn` header (or metadata key) to your project DSN:

```
http://<project_token>@localhost:14318?grpc=14317
```

> ⚠️ **Important:** The header key is `uptrace-dsn`, not `X-OTLP-Token` (which was used in older versions).
The OpenTelemetry SDK in any language can be configured entirely via environment variables, with zero code changes needed. Add these to your app's `.env`, `docker-compose.yml`, Kubernetes manifest, or whatever you use:
```shell
# ── OpenTelemetry SDK configuration ──────────────────────────────────
OTEL_SERVICE_NAME=my-service
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
OTEL_EXPORTER_OTLP_HEADERS=uptrace-dsn=http://<PROJECT_TOKEN>@localhost:14318?grpc=14317
OTEL_RESOURCE_ATTRIBUTES=service.version=1.0.0,deployment.environment=production
```

Replace `<PROJECT_TOKEN>` with the token from Uptrace UI → Project → Settings → DSN. If your app runs inside the same Docker network, replace `localhost` with service names: `OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4317`, and the DSN host with `uptrace`.
```yaml
services:
  my-app:
    image: my-app:latest
    environment:
      OTEL_SERVICE_NAME: my-app
      OTEL_EXPORTER_OTLP_PROTOCOL: grpc
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otelcol:4317
      OTEL_EXPORTER_OTLP_HEADERS: "uptrace-dsn=http://<PROJECT_TOKEN>@uptrace:14318?grpc=14317"
    depends_on:
      otelcol:
        condition: service_healthy
```

To use HTTP instead of gRPC:

```shell
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```

How the DSN flows:

```
Your app (OTel SDK)
  │
  │  OTEL_EXPORTER_OTLP_HEADERS sets the uptrace-dsn metadata
  │  on every gRPC/HTTP request to the collector
  ▼
OTel Collector (receivers.otlp, include_metadata: true)
  │
  │  batch/clients reads uptrace-dsn from metadata
  │  and batches per tenant (metadata_keys: [uptrace-dsn])
  │
  │  headers_setter/clients forwards uptrace-dsn to Uptrace
  ▼
Uptrace (routes to the correct project by DSN token)
```
| Variable | Required | Example | What it does |
|---|---|---|---|
| `OTEL_SERVICE_NAME` | ✅ | `my-service` | Service name in traces/metrics |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | ✅ | `http://localhost:4317` | Collector address |
| `OTEL_EXPORTER_OTLP_HEADERS` | ✅ | `uptrace-dsn=http://token@...` | Auth header; routes to Uptrace project |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | – | `grpc` or `http/protobuf` | Transport protocol |
| `OTEL_RESOURCE_ATTRIBUTES` | – | `service.version=1.0.0,...` | Extra resource attributes |
| `OTEL_TRACES_SAMPLER` | – | `parentbased_traceidratio` | Sampling strategy |
| `OTEL_TRACES_SAMPLER_ARG` | – | `0.1` | Sample 10% of traces |
| `OTEL_LOGS_EXPORTER` | – | `otlp` | Enable log export (some SDKs) |
| `OTEL_METRICS_EXPORTER` | – | `otlp` | Enable metrics export (some SDKs) |
That's it. No Uptrace SDK, no vendor lock-in. Standard OpenTelemetry env vars → standard OTLP protocol → this collector → Uptrace.
```
traces:           otlp ───► batch/clients ───► otlp/clients
logs:             otlp ───► batch/clients ───► otlp/clients
metrics/clients:  otlp ───► batch/clients ───► otlp/clients

metrics/system-otlp:   hostmetrics ──┐
                       postgresql ───┤──► batch/system + resourcedetection ───► otlp/system
                       httpcheck ────┤
                       redis ────────┘

metrics/system-prom:   prometheus/self ───► batch/system ───► prometheusremotewrite/system
```
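In collector-config terms, the five pipelines above correspond to a `service.pipelines` block shaped like this. The receiver, processor, and exporter names come from this template's diagrams; the exact ordering inside each list is an assumption:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/clients]
      exporters: [otlp/clients]
    logs:
      receivers: [otlp]
      processors: [batch/clients]
      exporters: [otlp/clients]
    metrics/clients:
      receivers: [otlp]
      processors: [batch/clients]
      exporters: [otlp/clients]
    metrics/system-otlp:
      receivers: [hostmetrics, postgresql/uptrace, redis/uptrace, httpcheck/self]
      processors: [resourcedetection, batch/system]
      exporters: [otlp/system]
    metrics/system-prometheus:
      receivers: [prometheus/self]
      processors: [batch/system]
      exporters: [prometheusremotewrite/system]
```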
| Parameter | Value | Notes |
|---|---|---|
| Compression | `ZSTD(1)` | Good ratio, moderate CPU |
| `dial_timeout` | `3s` | Connection establishment |
| `write_timeout` | `5s` | Write operations |
| `max_retries` | `3` | Failed operation retries |
| `max_execution_time` | `15s` | Query timeout |
| Storage policies | 8 × `default` | Pre-wired for SSD/HDD/S3 tiering |
| Cluster mode | `replicated: false`, `distributed: false` | Knobs ready for scaling |
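These knobs live in `uptrace.yml`. A rough sketch of how such a fragment can look; the exact key names and section layout are assumptions, so verify against the file shipped in this repo:

```yaml
# illustrative fragment; check the repo's uptrace.yml for the real schema
ch:
  addr: clickhouse:9000
  user: ${CLICKHOUSE_USER}
  password: ${CLICKHOUSE_PASSWORD}
  database: ${CLICKHOUSE_DATABASE}
  dial_timeout: 3s         # connection establishment
  write_timeout: 5s        # write operations
  max_retries: 3           # retry failed operations
  max_execution_time: 15s  # query timeout

ch_schema:
  compression: ZSTD(1)     # good ratio, moderate CPU
  replicated: false        # flip these when scaling out
  distributed: false
```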
| Variable | Used By | Description |
|---|---|---|
| `UPTRACE_SECRET` | uptrace | Server secret key |
| `UPTRACE_HTTP` | uptrace, otelcol | HTTP listen port |
| `UPTRACE_GRPC` | uptrace, otelcol | gRPC listen port |
| `UPTRACE_ADMIN_EMAIL` | uptrace | Super admin email (seed data) |
| `UPTRACE_ADMIN_PASSWORD` | uptrace | Super admin password (seed data) |
| `UPTRACE_SYSTEM_PROJECT_TOKEN` | uptrace, otelcol | System project DSN token |
| `CLICKHOUSE_USER` | uptrace, clickhouse | ClickHouse credentials |
| `CLICKHOUSE_PASSWORD` | uptrace, clickhouse | ClickHouse credentials |
| `CLICKHOUSE_DATABASE` | uptrace, clickhouse | ClickHouse database name |
| `POSTGRESQL_USER` | uptrace, otelcol, postgresql | PostgreSQL credentials |
| `POSTGRESQL_PASSWORD` | uptrace, otelcol, postgresql | PostgreSQL credentials |
| `POSTGRESQL_DATABASE` | uptrace, otelcol, postgresql | PostgreSQL database name |
| `REDIS_USERNAME` | uptrace | Redis ACL username |
| `REDIS_PASSWORD` | uptrace, otelcol | Redis password |
| `OTLP_COLLECTOR_GRPC` | otelcol | Collector gRPC port |
| `OTLP_COLLECTOR_HTTP` | otelcol | Collector HTTP port |
This is a template repository. Use it as a starting point for your observability infrastructure.