diff --git a/docs.json b/docs.json
index 9269398f..ad779afa 100644
--- a/docs.json
+++ b/docs.json
@@ -286,11 +286,17 @@
"group": "Remote Agent Server",
"pages": [
"sdk/guides/agent-server/overview",
- "sdk/guides/agent-server/local-server",
- "sdk/guides/agent-server/docker-sandbox",
- "sdk/guides/agent-server/apptainer-sandbox",
- "sdk/guides/agent-server/api-sandbox",
- "sdk/guides/agent-server/cloud-workspace",
+ {
+ "group": "Workspaces",
+ "pages": [
+ "sdk/guides/agent-server/workspace-types",
+ "sdk/guides/agent-server/local-server",
+ "sdk/guides/agent-server/docker-sandbox",
+ "sdk/guides/agent-server/apptainer-sandbox",
+ "sdk/guides/agent-server/api-sandbox",
+ "sdk/guides/agent-server/cloud-workspace"
+ ]
+ },
"sdk/guides/agent-server/custom-tools",
{
"group": "API Reference",
@@ -450,7 +456,8 @@
{
"tag": "link",
"attributes": {
- "rel": "stylesheet"
+ "rel": "stylesheet",
+ "href": "/style.css"
}
}
],
diff --git a/sdk/guides/agent-server/cloud-workspace.mdx b/sdk/guides/agent-server/cloud-workspace.mdx
index 43cc07e9..50b6888e 100644
--- a/sdk/guides/agent-server/cloud-workspace.mdx
+++ b/sdk/guides/agent-server/cloud-workspace.mdx
@@ -383,6 +383,121 @@ cd agent-sdk
uv run python examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
```
+## Local Agent Server Mode
+
+Use `local_agent_server_mode=True` when your SDK script is **already running inside** an OpenHands Cloud sandbox — for example, as part of an automation workflow deployed to the cloud.
+
+### When to Use This Mode
+
+| Scenario | Normal Mode | Local Agent Server Mode |
+|----------|-------------|-------------------------|
+| Script runs on your local machine | ✅ | ❌ |
+| Script runs in CI (GitHub Actions runner) | ✅ | ❌ |
+| Script deployed to run inside Cloud sandbox | ❌ | ✅ |
+| Automation service executes your script | ❌ | ✅ |
+
+### How It Differs from Normal Mode
+
+In **normal mode**, `OpenHandsCloudWorkspace` provisions a new sandbox via the Cloud API:
+
+
+![Normal mode: the SDK provisions and manages a remote sandbox via the Cloud API](images/normal-mode.svg)
+
+In **local agent server mode**, your script is already inside the sandbox and connects to the local agent-server:
+
+
+![Local agent server mode: the script runs inside the sandbox and connects to localhost](images/local-agent-server-mode.svg)
+
+Key differences:
+- **No sandbox provisioning** — skips create/wait/delete lifecycle
+- **Connects to localhost** — talks to the agent-server already running in the sandbox
+- **Cloud credentials still work** — `get_llm()` and `get_secrets()` call the Cloud API
+
+### Configuration
+
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `local_agent_server_mode` | `bool` | `False` | Skip sandbox provisioning, connect to localhost |
+| `agent_server_port` | `int` | `60000` | Port of the local agent-server |
+
+### Environment Variables
+
+When running inside a Cloud sandbox, these environment variables are set automatically:
+
+| Variable | Description |
+|----------|-------------|
+| `SANDBOX_ID` | Sandbox identifier for `get_llm()` / `get_secrets()` |
+| `SESSION_API_KEY` | Session auth key (fallback: `OH_SESSION_API_KEYS_0`) |
+| `AGENT_SERVER_PORT` | Port override (optional) |
+| `AUTOMATION_CALLBACK_URL` | URL to POST completion status on exit (optional) |
+| `AUTOMATION_RUN_ID` | ID included in callback payload (optional) |
+
+### Example: Automation Script Inside a Cloud Sandbox
+
+This script is designed to be uploaded and executed inside an OpenHands Cloud sandbox:
+
+```python icon="python"
+# my_automation.py — runs INSIDE a Cloud sandbox
+import os
+from openhands.workspace import OpenHandsCloudWorkspace
+from openhands.sdk import Conversation
+from openhands.tools.preset.default import get_default_agent
+
+with OpenHandsCloudWorkspace(
+ local_agent_server_mode=True,
+ cloud_api_url="https://app.all-hands.dev",
+ cloud_api_key=os.environ["OPENHANDS_API_KEY"],
+) as workspace:
+ # No sandbox created — connects to local agent-server at localhost:60000
+
+ # Cloud credentials still work
+ llm = workspace.get_llm()
+ secrets = workspace.get_secrets()
+
+ agent = get_default_agent(llm=llm, cli_mode=True)
+ conversation = Conversation(agent=agent, workspace=workspace)
+
+ if secrets:
+ conversation.update_secrets(secrets)
+
+ conversation.send_message("Perform the automation task")
+ conversation.run()
+ conversation.close()
+
+# On exit: completion callback sent automatically (if AUTOMATION_CALLBACK_URL is set)
+```
+
+### Orchestration Pattern
+
+To deploy an automation script that uses local agent server mode:
+
+1. **Create a sandbox** using normal mode (from your local machine or CI):
+ ```python
+ with OpenHandsCloudWorkspace(
+ cloud_api_url="https://app.all-hands.dev",
+ cloud_api_key=api_key,
+ keep_alive=True, # Don't delete after setup
+ ) as workspace:
+ workspace.file_upload("my_automation.py", "/workspace/my_automation.py")
+ ```
+
+2. **Execute the script** inside the sandbox:
+ ```python
+ workspace.execute_command("python /workspace/my_automation.py")
+ ```
+
+3. **Run in local mode**: the uploaded script uses `local_agent_server_mode=True` to connect to the local agent-server
+
+4. **Receive callback** when the script completes (optional)
+
+This pattern enables fire-and-forget automation where your orchestrator doesn't need to maintain a connection for the entire agent session.
+
+
+Local agent server mode is primarily used by the OpenHands automation service. For most SDK users, normal mode with `get_llm()` and `get_secrets()` provides a simpler experience.
+
+
## Next Steps
- **[API-based Sandbox](/sdk/guides/agent-server/api-sandbox)** - Connect to Runtime API service
diff --git a/sdk/guides/agent-server/images/local-agent-server-mode-simple.svg b/sdk/guides/agent-server/images/local-agent-server-mode-simple.svg
new file mode 100644
index 00000000..7ac94d3b
--- /dev/null
+++ b/sdk/guides/agent-server/images/local-agent-server-mode-simple.svg
@@ -0,0 +1,17 @@
+
diff --git a/sdk/guides/agent-server/images/local-agent-server-mode.svg b/sdk/guides/agent-server/images/local-agent-server-mode.svg
new file mode 100644
index 00000000..517afb7a
--- /dev/null
+++ b/sdk/guides/agent-server/images/local-agent-server-mode.svg
@@ -0,0 +1,35 @@
+
diff --git a/sdk/guides/agent-server/images/normal-mode.svg b/sdk/guides/agent-server/images/normal-mode.svg
new file mode 100644
index 00000000..395a992c
--- /dev/null
+++ b/sdk/guides/agent-server/images/normal-mode.svg
@@ -0,0 +1,25 @@
+
diff --git a/sdk/guides/agent-server/workspace-types.mdx b/sdk/guides/agent-server/workspace-types.mdx
new file mode 100644
index 00000000..bb619548
--- /dev/null
+++ b/sdk/guides/agent-server/workspace-types.mdx
@@ -0,0 +1,188 @@
+---
+title: Overview
+description: Choose the right workspace for your use case — from local development to production integrations.
+---
+
+The SDK supports multiple workspace types. All share the same API — switching between them requires only changing the workspace argument to `Conversation`.
+
+## Local Scenarios
+
+Use these when you're developing on your own machine and want the agent to run locally.
+
+### Development and Testing
+
+For the fastest iteration cycle, use a simple path string. The agent runs in your Python process with direct filesystem access:
+
+```python
+conversation = Conversation(agent=agent, workspace="./my-project")
+```
+
+**Best for:** Rapid prototyping, debugging agent behavior, learning the SDK.
+
+**Trade-off:** No isolation — the agent can access your entire filesystem and network.
+
+### Local Development with Isolation
+
+When you need isolation but still want to run locally, use [DockerWorkspace](/sdk/guides/agent-server/docker-sandbox):
+
+```python
+from openhands.workspace import DockerWorkspace
+
+with DockerWorkspace(
+ server_image="ghcr.io/openhands/agent-server:latest-python",
+) as workspace:
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+**Best for:** Testing agent behavior safely, verifying agents work in a sandboxed environment before deployment.
+
+**Requirements:** Docker installed locally.
+
+
+For HPC environments using Singularity/Apptainer instead of Docker, see [ApptainerWorkspace](/sdk/guides/agent-server/apptainer-sandbox).
+
+
+---
+
+## Remote & Integration Scenarios
+
+Use these when building applications, integrating with CI/CD, or deploying agents to production.
+
+### Building Applications with OpenHands Cloud
+
+When you're building an application that uses OpenHands agents, [OpenHandsCloudWorkspace](/sdk/guides/agent-server/cloud-workspace) provides fully managed infrastructure:
+
+```python
+import os
+
+from openhands.sdk import Conversation
+from openhands.tools.preset.default import get_default_agent
+from openhands.workspace import OpenHandsCloudWorkspace
+
+with OpenHandsCloudWorkspace(
+ cloud_api_url="https://app.all-hands.dev",
+ cloud_api_key=os.environ["OPENHANDS_CLOUD_API_KEY"],
+) as workspace:
+ llm = workspace.get_llm() # Inherit LLM config from your Cloud account
+ secrets = workspace.get_secrets() # Inject secrets without exposing them
+
+ agent = get_default_agent(llm=llm)
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+**Best for:** Production applications, SaaS integrations, teams that don't want to manage infrastructure.
+
+**What you get:**
+- Managed sandbox provisioning and lifecycle
+- LLM configuration inherited from your Cloud account (no API keys in your code)
+- Secrets injected securely without transiting through your application
+- No infrastructure to manage
+
+### CI/CD Pipeline Integration
+
+For running agents in CI/CD pipelines (GitHub Actions, GitLab CI, etc.), you have two options:
+
+**Option A: DockerWorkspace** — Run the sandbox on the CI runner itself:
+
+```python
+# In your CI script
+with DockerWorkspace(...) as workspace:
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+**Option B: OpenHandsCloudWorkspace** — Offload execution to OpenHands Cloud:
+
+```python
+# In your CI script
+with OpenHandsCloudWorkspace(
+ cloud_api_url="https://app.all-hands.dev",
+ cloud_api_key=os.environ["OPENHANDS_CLOUD_API_KEY"],
+) as workspace:
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+| Consideration | DockerWorkspace | OpenHandsCloudWorkspace |
+|---------------|-----------------|-------------------------|
+| Runner requirements | Docker-in-Docker or privileged | None (API calls only) |
+| Resource usage | Consumes runner resources | Offloaded to Cloud |
+| Secrets management | You manage | Inherited from Cloud account |
+| Setup complexity | Higher | Lower |
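The choice between the two options can also be made at runtime. As a sketch (the `OH_CI_USE_CLOUD` flag is a made-up convention, not an SDK setting), a CI script could compute the constructor arguments from its environment:

```python
def ci_workspace_config(env: dict[str, str]) -> dict:
    """Pick workspace settings for a CI run (illustrative only).

    OH_CI_USE_CLOUD is a hypothetical flag for this sketch; the URL and
    image tag mirror the examples above.
    """
    if env.get("OH_CI_USE_CLOUD") == "1":
        return {
            "workspace": "OpenHandsCloudWorkspace",
            "cloud_api_url": "https://app.all-hands.dev",
            "cloud_api_key": env["OPENHANDS_CLOUD_API_KEY"],
        }
    return {
        "workspace": "DockerWorkspace",
        "server_image": "ghcr.io/openhands/agent-server:latest-python",
    }
```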
+
+### Running SDK Scripts Inside Cloud Sandboxes
+
+For advanced orchestration, you may want to run SDK scripts *inside* a Cloud sandbox rather than from outside. This pattern is useful when:
+
+- You want **fire-and-forget execution** — your orchestrator doesn't maintain a connection for the entire agent session
+- You need **nested agent execution** — an outer agent spawns inner agents
+- You're building an **automation service** that deploys user-provided scripts
+
+This uses `local_agent_server_mode=True`. See [Local Agent Server Mode](/sdk/guides/agent-server/cloud-workspace#local-agent-server-mode) for the full pattern.
+
+
+![Local agent server mode overview](images/local-agent-server-mode-simple.svg)
+
+### Enterprise: Self-Managed Infrastructure
+
+If you're running OpenHands Enterprise and need low-level control over runtime allocation, use [APIRemoteWorkspace](/sdk/guides/agent-server/api-sandbox):
+
+```python
+import os
+
+from openhands.workspace import APIRemoteWorkspace
+
+with APIRemoteWorkspace(
+ runtime_api_url="https://runtime.example.com",
+ runtime_api_key=os.environ["RUNTIME_API_KEY"],
+ server_image="ghcr.io/openhands/agent-server:latest-python",
+) as workspace:
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+**Best for:** Organizations that need fine-grained resource management, custom container images, or must run on their own infrastructure.
+
+
+With `APIRemoteWorkspace`, you are responsible for:
+- Managing Runtime API credentials and access
+- Container image selection and updates
+- Resource allocation and scaling decisions
+- LLM and secret configuration (no SaaS credential inheritance)
+
+For most use cases, `OpenHandsCloudWorkspace` provides a simpler experience.
+
+
+---
+
+## Quick Reference
+
+| Workspace | Best For | Infrastructure | Isolated | SaaS |
+|-----------|----------|----------------|----------|------|
+| [LocalWorkspace](/sdk/guides/agent-server/local-server) | Development, testing | None | ❌ | ❌ |
+| [DockerWorkspace](/sdk/guides/agent-server/docker-sandbox) | Local isolation, CI/CD | Local Docker | ✅ | ❌ |
+| [ApptainerWorkspace](/sdk/guides/agent-server/apptainer-sandbox) | HPC, shared compute | Singularity | ✅ | ❌ |
+| [OpenHandsCloudWorkspace](/sdk/guides/agent-server/cloud-workspace) | Production, managed | OpenHands Cloud | ✅ | ✅ |
+| [APIRemoteWorkspace](/sdk/guides/agent-server/api-sandbox) | Enterprise, low-level control | Runtime API | ✅ | ❌ |
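The table above can be reduced to a simple decision rule. The helper below is purely illustrative (not an SDK API) and returns the class name to reach for:

```python
def choose_workspace(
    isolated: bool,
    managed: bool = False,
    hpc: bool = False,
    self_hosted_runtime: bool = False,
) -> str:
    """Decision rule distilled from the Quick Reference table (illustrative)."""
    if not isolated:
        return "LocalWorkspace"
    if managed:
        return "OpenHandsCloudWorkspace"
    if self_hosted_runtime:
        return "APIRemoteWorkspace"
    if hpc:
        return "ApptainerWorkspace"
    return "DockerWorkspace"
```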
+
+## How Workspaces Relate to Conversations
+
+The `Conversation` factory automatically selects the appropriate implementation:
+
+| Workspace Type | Conversation Type | Where Agent Runs |
+|----------------|-------------------|------------------|
+| Path / `LocalWorkspace` | `LocalConversation` | Your Python process |
+| Any `RemoteWorkspace` | `RemoteConversation` | On the agent server |
+
+```python
+# LocalConversation (agent runs in your process)
+conversation = Conversation(agent=agent, workspace="./project")
+
+# RemoteConversation (agent runs on agent server)
+with DockerWorkspace(...) as workspace:
+ conversation = Conversation(agent=agent, workspace=workspace)
+```
+
+## Feature Comparison
+
+| Feature | Local | Docker | Cloud | API | Apptainer |
+|---------|-------|--------|-------|-----|-----------|
+| Setup required | None | Docker | None | Runtime API access | Apptainer |
+| File isolation | ❌ | ✅ | ✅ | ✅ | ✅ |
+| Network isolation | ❌ | ✅ | ✅ | ✅ | ✅ |
+| `get_llm()` | ❌ | ❌ | ✅ | ❌ | ❌ |
+| `get_secrets()` | ❌ | ❌ | ✅ | ❌ | ❌ |
+| Pause/Resume | ❌ | ❌ | ❌ | ✅ | ❌ |
+| Custom images | N/A | ✅ | Via specs | ✅ | ✅ |
diff --git a/style.css b/style.css
index b5da77d4..9e56dc93 100644
--- a/style.css
+++ b/style.css
@@ -2,3 +2,10 @@
/* Default banner showed white text on yellow in our theme */
color: rgb(67, 66, 64);
}
+
+/* Reduce minimum column width for tables to avoid horizontal scroll */
+table td,
+table th {
+ min-width: auto;
+ white-space: normal;
+}