feat: Support host directory mounts (hostPath/PVC) via CLI for long-running sandboxes #500

@aiwh-cli

Description

Problem

OpenShell sandboxes currently have no mechanism for live bidirectional access to host directories. File transfer is limited to one-time --upload / sandbox upload (tar-over-SSH copies). For services that continuously read/write host data (databases, session logs, config files), sandbox-internal copies go stale immediately.

This prevents using OpenShell's security enforcement (Landlock, seccomp, network policies) for persistent agent runtimes — the primary production use case for OpenClaw deployments.

Proposed Design

1. New --mount CLI flag

Add a --mount flag to sandbox create accepting host:container[:mode] triples:

openshell sandbox create \
    --from ./Dockerfile \
    --policy ./policy.yaml \
    --mount /opt/data/config:/sandbox/config:ro \
    --mount /opt/data/client:/sandbox/client:rw \
    --forward 18789 \
    -- node /opt/app/dist/index.js

2. CLI parsing in run.rs

The sandbox_create function in crates/openshell-cli/src/run.rs builds a CreateSandboxRequest with a SandboxSpec. Add a mounts field:

// New CLI argument
#[arg(long, value_name = "HOST:CONTAINER[:MODE]")]
mount: Vec<String>,

// Parsed into structured mount specs
struct MountSpec {
    host_path: String,
    container_path: String,
    read_only: bool,  // default true unless `:rw` specified
}
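
A minimal parsing sketch for the `HOST:CONTAINER[:MODE]` triple, mirroring the `MountSpec` struct above (the `parse_mount` helper name and error type are illustrative, not the actual implementation):

```rust
struct MountSpec {
    host_path: String,
    container_path: String,
    read_only: bool,
}

// Parse "host:container" or "host:container:mode" into a MountSpec.
// Mounts default to read-only unless `:rw` is given explicitly.
fn parse_mount(arg: &str) -> Result<MountSpec, String> {
    let parts: Vec<&str> = arg.split(':').collect();
    match parts.as_slice() {
        [host, container] => Ok(MountSpec {
            host_path: host.to_string(),
            container_path: container.to_string(),
            read_only: true,
        }),
        [host, container, mode] => {
            let read_only = match *mode {
                "ro" => true,
                "rw" => false,
                other => return Err(format!("invalid mount mode: {other}")),
            };
            Ok(MountSpec {
                host_path: host.to_string(),
                container_path: container.to_string(),
                read_only,
            })
        }
        _ => Err(format!("invalid mount spec: {arg}")),
    }
}
```

Defaulting to read-only keeps the conservative posture: a user must opt in to `:rw` before the sandbox can write back to the host.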

3. Server-side pod spec injection

crates/openshell-server/src/sandbox/mod.rs already has apply_supervisor_sideload() which injects hostPath volumes into pod specs. The same pattern applies:

// Existing pattern (supervisor binary mount):
// volume: hostPath /opt/openshell/bin -> /opt/openshell/bin (ro, DirectoryOrCreate)

// New pattern (user-requested mounts):
// For each --mount flag, inject:
//   - A Volume with hostPath source
//   - A VolumeMount on the sandbox container
//   - read_only based on mode flag

The SandboxSpec already has a volume_claim_templates field that gets serialized to K8s resources — user mounts follow the same serialization path.
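
The injection step could look roughly like the following. The `Volume`, `VolumeMount`, and `inject_user_mounts` names here are illustrative stand-ins, not the actual types `apply_supervisor_sideload()` operates on; the real code works against serialized K8s pod specs:

```rust
// Illustrative types standing in for the K8s volume/volumeMount specs.
struct Volume {
    name: String,
    host_path: String, // becomes a hostPath volume source
}

struct VolumeMount {
    name: String,
    mount_path: String,
    read_only: bool,
}

// For each user-requested (host, container, read_only) mount, add one
// hostPath Volume plus a matching VolumeMount on the sandbox container.
fn inject_user_mounts(
    volumes: &mut Vec<Volume>,
    volume_mounts: &mut Vec<VolumeMount>,
    mounts: &[(String, String, bool)],
) {
    for (i, (host, container, read_only)) in mounts.iter().enumerate() {
        let name = format!("user-mount-{i}");
        volumes.push(Volume {
            name: name.clone(),
            host_path: host.clone(),
        });
        volume_mounts.push(VolumeMount {
            name,
            mount_path: container.clone(),
            read_only: *read_only,
        });
    }
}
```

Generated volume names (`user-mount-0`, `user-mount-1`, ...) keep user mounts from colliding with the supervisor sideload volume.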

4. Policy integration

Mounted paths should respect filesystem_policy. A mount at /sandbox/client:rw still requires /sandbox/client in the policy's read_write list. Landlock enforcement applies on top of the mount — the mount makes the data available, the policy controls what the sandbox process can do with it.

This means a mount without a matching policy entry is silently inaccessible to the sandbox process (deny-by-default is preserved).
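
A policy fragment for the example above might look like this (the `filesystem_policy` and `read_write` names come from the description; the `read_only` list name is an assumption about the schema):

```yaml
# Hypothetical policy.yaml fragment
filesystem_policy:
  read_only:
    - /sandbox/config   # mounted :ro, policy grants read
  read_write:
    - /sandbox/client   # mounted :rw, policy grants read + write
```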

5. K3s hostPath handling

The gateway's K3s cluster runs inside a Docker container. For hostPath volumes to reach the actual host filesystem, the gateway container itself needs the host path mounted. scripts/cluster-entrypoint.sh (or equivalent bootstrap) would need to propagate --mount paths as Docker bind mounts on the gateway container.

Alternatives Considered

  • --upload / --sync at creation time: One-time copy, immediately stale for databases and session logs that change every minute. Not suitable for live services.
  • Periodic sync via cron inside sandbox: Adds complexity, introduces sync lag, doesn't work for SQLite databases (WAL corruption on copy-during-write).
  • NFS/FUSE mount inside sandbox: Requires additional infrastructure, breaks Landlock enforcement model, adds attack surface.
  • Run service outside OpenShell: Loses all security enforcement (Landlock, seccomp, network policies). This is what users are forced to do today.
  • inotify + rsync daemon: Complex, fragile, doesn't handle bidirectional writes, latency on large files.

Agent Investigation

Codebase paths examined:

  • crates/openshell-cli/src/run.rs — sandbox_create() builds CreateSandboxRequest with SandboxSpec { gpu, policy, providers, template }. No mount/volume field currently.
  • crates/openshell-server/src/sandbox/mod.rs — apply_supervisor_sideload() already injects hostPath volumes + volumeMounts into pod specs. Pattern exists and works.
  • crates/openshell-server/src/sandbox/mod.rs — SandboxSpec has a volume_claim_templates field, serialized to K8s JSON. Infrastructure for volume specs exists.
  • deploy/kube/manifests/agent-sandbox.yaml — CRD defines volumeMounts properties in container spec (truncated in YAML but schema allows it).
  • examples/sync-files.md — Documents tar-over-SSH upload/download. Confirms no live mount mechanism exists.
  • examples/bring-your-own-container/README.md — Custom Dockerfile support works, but no mount flags documented.

Use Case

Running an OpenClaw agent gateway (19 agents, continuous cron jobs) that needs live read/write access to:

  • Agent config and session state (.openclaw/)
  • Client databases and content (client/)
  • Product code as read-only reference (core/)

Currently forced to use plain Docker without OpenShell's security enforcement because the sandbox cannot access live host data.
