Update dependency versions to latest stable releases:
- kubectl: 1.31.0 → 1.35.0
- helm: 3.19.1 → 3.19.4
- helmfile: 1.2.2 → 1.2.3
- k9s: 0.32.5 → 0.50.18
- helm-diff: 3.9.11 → 3.14.1

k3d remains at 5.8.3 (already current).
Replace nginx-ingress controller with Traefik 38.0.2 using the Kubernetes Gateway API for routing. This addresses the nginx-ingress deprecation (end of maintenance March 2026).

Changes:
- Remove --disable=traefik from the k3d config to use k3s's built-in Traefik
- Replace the nginx-ingress helm release with Traefik 38.0.2 in infrastructure
- Configure the Gateway API provider with cross-namespace routing support
- Add GatewayClass and Gateway resources via the Traefik helm chart
- Convert all Ingress resources to HTTPRoute format:
  - eRPC: /rpc path routing
  - obol-frontend: / path routing
  - ethereum: /execution and /beacon path routing with URL rewrite
  - aztec: namespace-based path routing with URL rewrite
  - helios: namespace-based path routing with URL rewrite
- Disable legacy Ingress in service helm values

Closes #125
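The Ingress→HTTPRoute conversion above can be sketched for the eRPC /rpc route. This is a hypothetical illustration only — the Gateway name, namespaces, service name, and port are assumptions, not taken from the repo; only the /rpc path prefix comes from the commit message.

```yaml
# Hypothetical HTTPRoute replacing the eRPC Ingress; names and port are assumed.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: erpc
  namespace: erpc
spec:
  parentRefs:
    - name: traefik-gateway        # Gateway created by the Traefik chart (assumed name)
      namespace: traefik           # cross-namespace attach, as enabled above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /rpc
      backendRefs:
        - name: erpc               # backing Service (assumed name/port)
          port: 4000
```

Cross-namespace parentRefs like this are what the "cross-namespace routing support" bullet refers to; Gateway API additionally requires the Gateway's listener to allow routes from other namespaces.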
Add Cloudflare Tunnel integration to expose obol-stack services publicly without port forwarding or static IPs. Uses quick-tunnel mode for the MVP.

Changes:
- Add cloudflared Helm chart (internal/embed/infrastructure/cloudflared/)
- Add tunnel management package (internal/tunnel/)
- Add CLI commands: obol tunnel status/restart/logs
- Integrate cloudflared into the infrastructure helmfile

The tunnel deploys automatically with `obol stack up` and provides a random trycloudflare.com URL accessible via `obol tunnel status`.

Future: named tunnel support for persistent URLs (obol tunnel login)
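A quick-tunnel deployment needs no credentials: cloudflared is pointed at an in-cluster URL and prints a random trycloudflare.com hostname to its logs. A minimal sketch of what the chart's values might look like — the value keys, image tag, and upstream URL are all assumptions about this internal chart, not facts from the repo:

```yaml
# Hypothetical values for the cloudflared chart in quick-tunnel mode.
image:
  repository: cloudflare/cloudflared
  tag: latest                      # assumed; a real chart would pin this
tunnel:
  # Quick tunnel: no account/credentials; cloudflared prints the random
  # trycloudflare.com URL to its logs on startup.
  extraArgs:
    - tunnel
    - --no-autoupdate
    - --url
    - http://traefik.traefik.svc.cluster.local:80   # assumed in-cluster target
```

Quick tunnels get a new random URL on every restart, which is why the commit flags named tunnels as future work for persistent URLs.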
Update documentation to reflect the upgraded dependency versions in obolup.sh. This keeps the documentation in sync with the actual pinned versions used by the bootstrap installer.
# Conflicts:
#   internal/embed/infrastructure/helmfile.yaml
Introduce the inference marketplace foundation: an x402-enabled reverse proxy that wraps any OpenAI-compatible inference service with USDC micropayments via the x402 protocol.

Components:
- internal/inference/gateway.go: net/http reverse proxy with x402 middleware
- cmd/inference-gateway/: standalone binary for containerisation
- cmd/obol/inference.go: `obol inference serve` CLI command
- internal/embed/networks/inference/: helmfile network template deploying Ollama + gateway + HTTPRoute (auto-discovered by the existing CLI)
- Dockerfile.inference-gateway: distroless multi-stage build

Provider: obol network install inference --wallet-address 0x... --model llama3.2:3b
Consumer: POST /v1/chat/completions with X-PAYMENT header (USDC on Base)
feat(inference): add x402 pay-per-inference gateway (Phase 1)
- Remove unused $publicDomain variable from helmfile.yaml (caused Helmfile v1 gotmpl pre-processing to fail on .Values.* references)
- Fix eRPC secretEnv: the chart expects plain strings, not secretKeyRef maps; move OBOL_OAUTH_TOKEN to extraEnv with valueFrom
- Fix obol-frontend escaped quotes in gotmpl (invalid \\" in operand)
Replace the in-cluster Ollama Deployment/PVC/Service with an
ExternalName Service that routes ollama.llm.svc.cluster.local to the
host machine's Ollama server. LLMSpy and all consumers use the stable
cluster-internal DNS name; the ExternalName target is resolved during
stack init via the {{OLLAMA_HOST}} placeholder:
k3d → host.k3d.internal
k3s → node gateway IP (future)
This avoids duplicating the model cache inside the cluster and
leverages the host's GPU/VRAM for inference.
Also updates CopyDefaults to accept a replacements map, following
the same pattern used for k3d.yaml placeholder resolution.
refactor(llm): proxy to host Ollama via ExternalName Service
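The ExternalName Service described above can be sketched as follows; the Service name and namespace come from the DNS name in the commit message, and the {{OLLAMA_HOST}} placeholder is the one resolved during stack init:

```yaml
# Sketch of the ExternalName Service: consumers keep using
# ollama.llm.svc.cluster.local, which CNAMEs to the host machine.
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: llm
spec:
  type: ExternalName
  externalName: {{OLLAMA_HOST}}   # resolved at init time (k3d → host.k3d.internal)
```

Note that ExternalName Services are pure DNS aliases — they carry no port mapping, so consumers must address the host Ollama server's own port (11434 by default) on the aliased name.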
The obol-agent deployment in the agent namespace fails with ImagePullBackOff because its container image is not publicly accessible. Wrap the template in a Helm conditional (obolAgent.enabled), defaulting to false, so it no longer deploys automatically. The manifest is preserved for future use — set obolAgent.enabled=true in the base chart values to re-enable it.
fix(infra): disable obol-agent from default stack deployment
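The conditional wrapper is the standard Helm pattern for an opt-in resource. A sketch, with the original manifest body elided (only the obolAgent.enabled key comes from the commit message; the metadata shown is assumed from its description):

```yaml
# templates/obol-agent.yaml — deploys only when explicitly enabled.
{{- if .Values.obolAgent.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obol-agent
  namespace: agent
# ... rest of the original manifest, unchanged ...
{{- end }}
```

With `obolAgent: {enabled: false}` in values.yaml, `helm template` renders nothing for this file, so the default stack no longer hits ImagePullBackOff.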
Add GitHub Actions workflow to build and publish the OpenClaw container image to ghcr.io/obolnetwork/openclaw from the upstream openclaw/openclaw repo at a pinned version. Renovate watches for new upstream releases and auto-opens PRs to bump the version file. Closes #142
Add integration-okr-1 and feat/openclaw-ci to push triggers for testing. Remove after verifying the workflow runs successfully — limit to main only.
The pinned SHAs from charon-dkg-sidecar were stale and caused the security-scan job to fail at setup.
Sync _helpers.tpl, validate.yaml, and values.yaml comments to match the helm-charts repo. Key changes:
- Remove the randAlphaNum gateway token fallback (require an explicit value)
- Add validation: gateway token required for token auth mode
- Add validation: RBAC requires serviceAccount.name when create=false
- Add validation: initJob requires persistence.enabled=true
- Align provider and gateway token comments
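Two of the validations above can be sketched in the usual Helm fail-fast style used by validate.yaml files. The exact value paths (gateway.authMode, gateway.token, serviceAccount.*) are assumptions about this chart's schema; only the rules themselves come from the commit message:

```yaml
# Sketch of validate.yaml-style guards; key names are assumed.
{{- if and (eq .Values.gateway.authMode "token") (not .Values.gateway.token) }}
{{- fail "gateway.token must be set explicitly when authMode=token (no randAlphaNum fallback)" }}
{{- end }}
{{- if and (not .Values.serviceAccount.create) (not .Values.serviceAccount.name) }}
{{- fail "serviceAccount.name is required when serviceAccount.create=false" }}
{{- end }}
```

Failing at template time is preferable to the removed randAlphaNum fallback, which silently generated a new token on every render.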
Add a local dnsmasq-based DNS resolver that enables wildcard hostname resolution for per-instance routing (e.g., openclaw-myid.obol.stack) without manual /etc/hosts entries.
- New internal/dns package: manages the dnsmasq Docker container on port 5553
- macOS: auto-configures /etc/resolver/obol.stack (requires sudo once)
- Linux: prints manual DNS configuration instructions
- stack up: starts the DNS resolver (idempotent, non-fatal on failure)
- stack purge: stops the DNS resolver and removes the system resolver config
- stack down: leaves the DNS resolver running (cheap, persists across restarts)

Closes #150
DNS resolver: add systemd-resolved integration for Linux.

On Linux, dnsmasq binds to 127.0.0.2:53 (avoiding systemd-resolved's stub on 127.0.0.53:53) and a resolved.conf.d drop-in forwards *.obol.stack queries. On macOS, behavior is unchanged (port 5553 + /etc/resolver).

Also fix dnsmasq startup with --conf-file=/dev/null to ignore Alpine's default config, which enables local-service (rejecting queries from the Docker bridge network).

Fix llmspy image tag: 3.0.32-obol.1-rc.2 does not exist on GHCR; corrected to 3.0.32-obol.1-rc.1.
…Helm repo (#145)

Switch from bundling the OpenClaw Helm chart in the Go binary via //go:embed to referencing obol/openclaw from the published Helm repo, matching the pattern used by the Helios and Aztec networks.

Changes:
- generateHelmfile() now emits chart: obol/openclaw with a version pin
- Remove copyEmbeddedChart() and all chart/values.yaml copy logic
- Remove the //go:embed directive, the chartFS variable, and the embed/io/fs imports
- Delete internal/openclaw/chart/ (the chart lives in the helm-charts repo)
- Deployment directory simplified to helmfile.yaml + values-obol.yaml
- Setup() regenerates the helmfile on each run to pick up version bumps

Depends on helm-charts PR #183 being merged and the chart being published.
Helios is no longer part of the Obol Stack network lineup. Remove the embedded network definition, frontend env var, and all documentation references.
Add comprehensive unit tests for the OpenClaw config import pipeline (25 test cases covering DetectExistingConfig, TranslateToOverlayYAML, workspace detection, and helper functions).

Refactor DetectExistingConfig for testability by extracting detectExistingConfigAt(home).

Fix silent failures: warn when env-var API keys are skipped, when unknown API types are sanitized, when a workspace has no marker files, and when DetectExistingConfig returns an error.
…153)

OpenClaw's control UI rejects WebSocket connections with "1008: control ui requires HTTPS or localhost (secure context)" when running behind Traefik over HTTP. This adds:
- Chart values and _helpers.tpl rendering for the controlUi.allowInsecureAuth and controlUi.dangerouslyDisableDeviceAuth gateway settings
- trustedProxies chart value for reverse-proxy IP allowlisting
- Overlay generation injecting controlUi settings for both imported and fresh-install paths
- RBAC ClusterRole/ClusterRoleBinding for frontend OpenClaw instance discovery (namespaces, pods, configmaps, secrets)
…cloud model routing

OpenClaw requires provider/model format (e.g. "llmspy/claude-sonnet-4-5-20250929") for model resolution. Without a provider prefix, it hardcodes a fallback to the "anthropic" provider — which is disabled in the llmspy-routed overlay, causing chat requests to fail silently.

This renames the virtual provider used for cloud model routing from "ollama" to "llmspy", adds the proper provider prefix to AgentModel, and disables the default "ollama" provider when a cloud provider is selected. The default Ollama-only path is unchanged, since it genuinely routes Ollama models.
fix(openclaw): rename virtual provider to llmspy for cloud model routing
…lowInsecureAuth

The dangerouslyDisableDeviceAuth flag is completely redundant when running behind Traefik over HTTP: the browser's crypto.subtle API is unavailable in non-secure contexts (non-localhost HTTP), so the Control UI never sends device identity at all. Setting dangerouslyDisableDeviceAuth only matters when the browser IS in a secure context but you want to skip device auth — which doesn't apply to our Traefik proxy case.

allowInsecureAuth alone is sufficient: it allows the gateway to accept token-only authentication when device identity is absent. Token auth remains fully enforced — connections without a valid gateway token are still rejected.

Security analysis:
- Token/password auth: still enforced (timing-safe comparison)
- Origin check: still enforced (same-origin validation)
- Device identity: naturally skipped (browser can't provide it on HTTP)
- Risk in localhost k3d context: low (no external attack surface)
- OpenClaw security audit classification: critical (general), but acceptable for a local-only dev stack

Refs: plans/security-audit-controlui.md, plans/trustedproxies-analysis.md
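The resulting gateway settings reduce to a single flag flipped. A sketch of the relevant values fragment — the key paths come from the commit messages, but the surrounding structure is assumed:

```yaml
# controlUi settings after this change: allowInsecureAuth only,
# dangerouslyDisableDeviceAuth stays at its (assumed) default.
controlUi:
  allowInsecureAuth: true            # accept token-only auth when device identity is absent
  dangerouslyDisableDeviceAuth: false
```

The gateway token itself is still required, so this only relaxes the device-identity half of the handshake.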
Includes smart routing, streaming SSE passthrough, and db writer startup race fix.
Resolve modify/delete conflicts on embedded OpenClaw chart files:
- internal/openclaw/chart/templates/_helpers.tpl
- internal/openclaw/chart/values.yaml

Accept deletions — the chart was replaced with the remote Helm repo in ca835f5 (refactor(openclaw): replace embedded chart with remote obol/openclaw Helm repo, #145).
feat(dns): add wildcard DNS resolver for *.obol.stack
Remove the p.APIKey value from the env-var reference log message in
DetectExistingConfig(). Although the code path only reaches here when
the value is an env-var reference (e.g. ${ANTHROPIC_API_KEY}), CodeQL
correctly flags it as clear-text logging of a sensitive field (go/
clear-text-logging). Omitting the value is a defense-in-depth fix that
prevents accidental exposure if the guard condition ever changes.
Resolve conflict in obol-frontend values: accept pinned v0.1.4 tag from main.
Replace the nodecore RPC upstream with Obol's internal rate-limited eRPC gateway (erpc.gcp.obol.tech). The upstream supports mainnet and hoodi only, so sepolia is removed from all eRPC and ethereum network configurations.

The Basic Auth credential is intentionally embedded per CTO approval — the endpoint is rate-limited and serves as a convenience proxy for local stack users. The credential is extracted to a template variable with a gitleaks:allow suppression.
feat(erpc): switch upstream to erpc.gcp.obol.tech
Resolve conflict: keep rc.4 LLMSpy image tag.
chore(llm): bump LLMSpy to Obol fork rc.4
Replace all references to glm-4.7-flash with Ollama's cloud model gpt-oss:120b-cloud. Cloud models run on Ollama's cloud service, eliminating OOM risk on local machines.
…ibility

The remote OpenClaw Helm chart only iterates hardcoded provider names (ollama, anthropic, openai). Using "llmspy" as the virtual provider name caused it to be silently dropped from the rendered config, breaking the Anthropic inference waterfall.

Revert to using "ollama" as the provider name — it still points at llmspy's URL (http://llmspy.llm.svc.cluster.local:8000/v1) with api: openai-completions, so all routing works correctly.

Found during pre-production validation.
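The reverted provider entry can be sketched as follows. Only the provider name, URL, and api type come from the commit message; the surrounding values structure and key names are assumptions about the chart's schema:

```yaml
# Sketch: "ollama" is one of the names the chart iterates, but here it
# actually fronts llmspy's OpenAI-compatible endpoint.
providers:
  ollama:
    baseUrl: http://llmspy.llm.svc.cluster.local:8000/v1
    api: openai-completions
```

The name/endpoint mismatch is deliberate: renaming the key would make the chart drop the provider entirely, as the earlier attempt showed.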
Replace busybox init container with the llmspy image itself, using a
Python merge script that:
1. Copies llms.json from ConfigMap (controls enabled/disabled state)
2. Loads the full providers.json from the llmspy package (has model
definitions and npm package refs for Anthropic/OpenAI)
3. Merges ConfigMap overrides (Ollama endpoint, API key refs)
Also remove "models": {} and "all_models": true from cloud providers
in the ConfigMap — these crash llmspy since only Ollama has a
load_models() implementation. Add "npm" field for Anthropic/OpenAI.
Found during pre-production Anthropic integration validation.
Pre-Production Validation Report

Full validation conducted on:
1. Build & Static Analysis
2. Model Configuration
3. Cluster Health
4. DNS Resolution
5. Gateway API & HTTPRoutes
6. Ollama Cloud Inference
7. OpenClaw Inference Waterfall
8. Frontend & eRPC Endpoints
9. Anthropic Integration (Global + OpenClaw)
   - Global llmspy Configuration
   - llmspy → Anthropic Routing
   - OpenClaw → llmspy → Anthropic
10. Import Existing OpenClaw Config
11. Security (CodeQL)

Bugs Found & Fixed During Validation
Known Limitations
Documentation: 18 files updated + 1 new page
When Docker is installed but the daemon isn't running, obolup now attempts to start it automatically:
1. Try systemd (apt/yum installs): sudo systemctl start docker
2. Try snap: sudo snap start docker

If auto-start fails, the error message now shows both the systemd and snap commands instead of only systemctl.

Fixes Docker startup on Ubuntu with snap-installed Docker, where systemctl start docker fails with "Unit docker.service not found".
- HasAPIKey: name == "ollama" — explain why Ollama is always "has key"
- collectSensitiveData — document the in-place mutation contract
- promptForCustomProvider — explain why custom endpoints use the "openai" slot
- Default Ollama path — explain why apiKeyValue is safe to inline
Improve global + per-instance LLM UX and secret handling
CodeQL flagged ProviderStatus.APIKeyEnv as sensitive data being logged. The field only stores the env var name (e.g. "ANTHROPIC_API_KEY"), not the actual key. Rename to EnvVar to avoid triggering the heuristic.
fix(llm): rename APIKeyEnv to EnvVar (CodeQL fix)