SuperCodeMode is a Python CLI and demo harness for optimizing Code Mode style client behavior in MCP workflows with GEPA.
It focuses on improving the text and routing policy around a small tool surface (typically discovery + execution), so agents make better tool choices and produce more reliable results.
Many tool systems fail because the client logic is weak even when the tools are good.
Typical failures:
- execution tool is used too early
- discovery step is skipped
- execution instructions are vague
- final answers are noisy or inconsistent
SuperCodeMode gives you a repeatable GEPA-driven optimization loop to improve those behaviors.
Who it's for:
- Cloudflare Code Mode MCP users
- MCP users running discovery + execution style tool patterns
- platform engineers and evaluation teams
- teams experimenting with Code Mode style agent behavior before changing server code
Key features:
- MCP stdio runner for local workflows
- MCP streamable HTTP runner for direct Cloudflare MCP
- HTTP bridge runner for custom runtime bridges
- local, Docker, and Monty execution backends in the demo MCP server
- `scm doctor` preflight checks
- artifact saving for showcase/optimization runs
- observability output (JSONL and OTLP)
Code Mode here means a code-first MCP orchestration pattern where the model uses a small tool surface and generates code for multi-step work.
Background:
- Cloudflare Code Mode MCP blog: https://blog.cloudflare.com/code-mode-mcp/
- UTCP Code Mode implementation: https://github.com/universal-tool-calling-protocol/code-mode
From PyPI:

```bash
pip install supercodemode
```

With uv (tool install, recommended for CLI usage):

```bash
uv tool install supercodemode
```

With uv (current environment):

```bash
uv pip install supercodemode
```

Optional Monty executor backend:

```bash
pip install "supercodemode[monty]"
```

With uv:

```bash
uv pip install "supercodemode[monty]"
```

Optional observability integrations (LangSmith, Logfire, MLflow, Langfuse):

```bash
pip install "supercodemode[observability]"
```

With uv:

```bash
uv pip install "supercodemode[observability]"
```

Then verify the install:

```bash
scm --help
```

For local development:

```bash
pip install -e .
```

With uv:

```bash
uv pip install -e .
```

Check your environment:

```bash
scm doctor
```

Run a Cloudflare MCP showcase (defaults to https://mcp.cloudflare.com/mcp):

```bash
scm showcase --runner mcp-http
```

Run a local MCP showcase (demo server over stdio):

```bash
scm showcase --runner mcp-stdio
```

If Cloudflare MCP requires auth in your environment:

```bash
scm showcase --runner mcp-http --auth-bearer "$CODEMODE_TOKEN"
```

SuperCodeMode uses GEPA to optimize Code Mode client-side text such as:
- system prompt text
- Code Mode description / routing guidance
- tool alias mappings
- tool description overrides
This improves client behavior without requiring server/runtime code changes.
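A client text policy of this kind can be pictured as a plain mapping over the fields listed above. This is an illustrative sketch only; the field names and shapes here are assumptions, not SuperCodeMode's actual schema:

```python
# Hypothetical client-side text policy: the text fields an optimizer like
# GEPA would mutate. Field names are illustrative, not SuperCodeMode's schema.
policy = {
    "system_prompt": (
        "You are a Code Mode agent. Always call the discovery tool "
        "before the execution tool."
    ),
    "codemode_description": "Use generated code for multi-step work.",
    "tool_aliases": {"search": "discovery.search", "execute": "codemode.execute"},
    "tool_description_overrides": {
        "execute": "Run generated code only after discovery has confirmed the tool surface.",
    },
}


def render_prompt(policy: dict) -> str:
    """Combine the policy's text fields into one prompt string for scoring."""
    overrides = "\n".join(
        f"- {name}: {desc}"
        for name, desc in policy["tool_description_overrides"].items()
    )
    return f"{policy['system_prompt']}\n{policy['codemode_description']}\n{overrides}"
```

Because the policy is pure text, an optimizer can mutate any field and re-score the rendered prompt without touching server code.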
SuperCodeMode demonstrates a GEPA-centric adapter approach where:
- GEPA optimizes client text policy
- runners execute tools on MCP or HTTP runtimes
- the same optimization logic can be reused across local and remote transports
This keeps GEPA optimization logic separate from runtime transport details.
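One way to picture that separation is a runner protocol that hides the transport behind a single `run` call, with a transport-agnostic scoring loop on top. This is a minimal sketch under assumed interfaces; the real adapter and runner APIs differ:

```python
from typing import Protocol


class Runner(Protocol):
    """Transport-specific execution: MCP stdio, MCP HTTP, or an HTTP bridge."""

    def run(self, policy_text: str, case: dict) -> dict: ...


class EchoRunner:
    """Stand-in runner for illustration; real runners talk to an MCP runtime."""

    def run(self, policy_text: str, case: dict) -> dict:
        # Pretend the policy always routes to the expected tool.
        return {"selected_tool": case["expected_tool"], "output": "42"}


def evaluate(runner: Runner, policy_text: str, cases: list[dict]) -> float:
    """Transport-agnostic scoring loop: the optimizer sees scores, not transports."""
    hits = sum(
        runner.run(policy_text, c)["selected_tool"] == c["expected_tool"]
        for c in cases
    )
    return hits / len(cases)
```

Swapping `EchoRunner` for an MCP stdio or HTTP runner leaves `evaluate` unchanged, which is the point of keeping optimization logic separate from transport details.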
```bash
scm doctor
scm doctor --json
scm doctor --strict
```

```bash
scm showcase --runner mcp-http
scm showcase --runner mcp-stdio
scm showcase --runner mcp-stdio --executor-backend monty
scm showcase --runner mcp-stdio --executor-backend docker
scm showcase --runner http --endpoint http://localhost:8080/run-codemode
```

Note: `showcase` is an active CLI command. The removed `showcase/` directory was an older repo layout, not the `scm showcase` command.
```bash
scm optimize --runner mcp-http --max-metric-calls 10
scm optimize --runner mcp-stdio --max-metric-calls 10
scm optimize --runner mcp-stdio --executor-backend monty --max-metric-calls 10
scm optimize --runner mcp-stdio --executor-backend docker --max-metric-calls 10
scm optimize --runner http --endpoint http://localhost:8080/run-codemode --max-metric-calls 10
```

Save artifacts:

```bash
scm showcase --runner mcp-stdio --save-artifact
scm optimize --runner mcp-http --max-metric-calls 10 --save-artifact
```

When `--save-artifact` is enabled, SuperCodeMode also writes compact summary files:

- showcase: `comparison_summary`, `baseline_run_summary`, `tuned_run_summary`
- optimize: `run_summary`
- benchmark: `benchmark_summary` plus a per-variant `run_summary`
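Since the summaries are compact files, they are easy to load and compare in a few lines. A sketch of reading one back, assuming a JSON payload; the field names shown are illustrative, not the actual `run_summary` schema:

```python
import json
import pathlib
import tempfile

# Illustrative run_summary payload; the real artifact's fields may differ.
summary = {"runner": "mcp-stdio", "score": 0.5, "tool_calls": 3, "errors": 0}

path = pathlib.Path(tempfile.mkdtemp()) / "run_summary.json"
path.write_text(json.dumps(summary))

# Load the artifact back for a quick comparison or report.
loaded = json.loads(path.read_text())
print(f"{loaded['runner']}: score={loaded['score']} tool_calls={loaded['tool_calls']}")
```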
```bash
scm mcp-client
scm mcp-client --executor-backend monty
scm mcp-client --executor-backend docker
```

```bash
scm benchmark --runner mcp-stdio
scm benchmark --runner mcp-stdio --executor-backend monty
scm benchmark --runner mcp-http
```

This compares three policy profiles on the same runner/dataset: `tool_call` (a naive execution-first policy), `codemode_baseline`, and `codemode_optimized`.
All runnable examples are under examples/.
Recommended starting points:
```bash
python examples/showcase_mcp_cloudflare.py
python examples/showcase_mcp_stdio.py
python examples/optimize_mcp_cloudflare.py --max-metric-calls 10
python examples/optimize_mcp_stdio.py --max-metric-calls 10
```

Real LLM optimization demo (Gemini, low-cost settings):

```bash
export GOOGLE_API_KEY=your_key_here
python examples/optimize_gemini_flash.py --max-metric-calls 4
```

Full example list:
- Examples README (GitHub): https://github.com/SuperagenticAI/supercodemode/blob/main/examples/README.md
- Examples guide (Docs): https://superagenticai.github.io/supercodemode/guides/examples/
- the `mcp-http` runner defaults to `https://mcp.cloudflare.com/mcp`
- Cloudflare MCP may require auth for your usage:

  ```bash
  scm showcase --runner mcp-http --auth-bearer "$CODEMODE_TOKEN"
  ```

- demo scoring can show `0.5` even when the integration works, if Cloudflare returns structured JSON for `search` and the metric expects a literal keyword match

In that case, the primary success signal is:

- case 1 selects `search`
- case 2 selects `execute`
- case 2 returns `42`
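The `0.5` case above can be reproduced with a toy literal-keyword metric. This is an illustrative stand-in, not SuperCodeMode's actual metric:

```python
def keyword_metric(response: str, expected: str) -> float:
    """Toy metric: full credit only if the expected keyword appears verbatim,
    partial credit (0.5) otherwise, even when the tool call itself succeeded."""
    return 1.0 if expected in response else 0.5


# A plain-text answer contains the literal phrase and scores 1.0.
plain = "Result: Cloudflare Workers overview"

# Structured JSON carries the same information in separate fields, so the
# literal phrase never appears and the metric returns 0.5.
structured = '{"title": "Cloudflare", "topic": "Workers", "kind": "overview"}'

expected = "Cloudflare Workers overview"
```

This is why tool selection (`search`/`execute`) and the `42` result are better success signals here than the raw score.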
Use Monty for a Python-native sandboxed execution path in demo MCP flows:
```bash
scm showcase --runner mcp-stdio --executor-backend monty
```

Requirements:

- install `pydantic-monty` (or `pip install "supercodemode[monty]"`)
Use Docker for safer local code execution in demo MCP flows:
```bash
scm showcase --runner mcp-stdio --executor-backend docker
```

Requirements:

- Docker daemon running
- your user can run `docker run`
JSONL:
```bash
scm --obs-backend jsonl --obs-jsonl-path artifacts/obs.jsonl showcase --runner mcp-stdio
```

OTLP:

```bash
scm --obs-backend otlp --obs-otlp-endpoint http://localhost:4318/v1/traces showcase --runner mcp-stdio
```

Optional SDK backends (same event schema, best-effort adapters):

```bash
scm --obs-backend logfire showcase --runner mcp-stdio
scm --obs-backend mlflow showcase --runner mcp-stdio
scm --obs-backend langsmith showcase --runner mcp-stdio
scm --obs-backend langfuse showcase --runner mcp-stdio
```

Install optional integrations:

```bash
pip install "supercodemode[observability]"
```

Environment variables (alternative to CLI flags):

- `SCM_OBS_BACKEND=none|jsonl|otlp|logfire|mlflow|langsmith|langfuse`
- `SCM_OBS_JSONL_PATH=artifacts/obs.jsonl`
- `SCM_OBS_OTLP_ENDPOINT=http://localhost:4318/v1/traces`
- `SCM_RUN_ID=demo-run-001`
- `SCM_OBS_DATASET_NAME=two_tool_dataset` (optional)
- `SCM_OBS_TAGS_JSON='{"env":"dev","team":"research"}'` (optional)
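Note that `SCM_OBS_TAGS_JSON` is a JSON object passed as a string, so it must parse cleanly. A quick sanity check using the README's example value:

```python
import json
import os

# The tags value from the README example; it must be a valid JSON object string.
os.environ["SCM_OBS_TAGS_JSON"] = '{"env":"dev","team":"research"}'

tags = json.loads(os.environ.get("SCM_OBS_TAGS_JSON", "{}"))
```

Shell quoting matters here: single-quote the whole value so the inner double quotes reach the process intact.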
Event payloads include GEPA/Code Mode run fields such as selected tool, tool call count, score, and error state, and the saved summary artifacts provide compact rollups for comparisons and quick benchmarking.
CLI commands also stamp command context into events (for example `cli_command`, `cli_runner`, and `cli_executor_backend`) to make JSONL/OTLP filtering easier.
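With those stamps in place, filtering a JSONL stream comes down to a line-by-line `json.loads`. A sketch over a simulated stream; the `cli_command` and `cli_runner` fields follow the README, while the other fields are illustrative:

```python
import io
import json

# Simulated JSONL observability stream (one JSON event per line).
jsonl = io.StringIO(
    '{"cli_command": "showcase", "cli_runner": "mcp-stdio", "score": 1.0}\n'
    '{"cli_command": "optimize", "cli_runner": "mcp-http", "score": 0.5}\n'
)

# Keep only events emitted by `scm showcase` runs.
showcase_events = [
    event for event in map(json.loads, jsonl) if event["cli_command"] == "showcase"
]
```

The same predicate works as an attribute filter when the events are exported over OTLP instead.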
Benchmark and run summaries also include:
- runtime capability hints (for example local vs docker vs monty constraints)
- error taxonomy rollups (`error_categories`) for quick failure analysis
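An `error_categories` rollup of this kind is essentially a frequency count over per-case error labels. A sketch with invented category names (the real taxonomy is not shown here):

```python
from collections import Counter

# Illustrative per-case error labels from a benchmark run; None means success.
case_errors = ["timeout", "tool_not_found", "timeout", None, None]

# Roll successes out and count failures by category.
error_categories = Counter(err for err in case_errors if err is not None)
```

Sorting the counter by count surfaces the dominant failure mode at a glance.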
This repo is the end-to-end GEPA optimization demo and experimentation harness for the GEPA Code Mode adapter work (examples, CLI, docs, local/docker/monty execution, observability).
GEPA docs (main site): https://gepa-ai.github.io/gepa/
GEPA PR (status may change):
https://github.com/gepa-ai/gepa/pull/225
Whether the adapter lands in GEPA mainline now or later, SuperCodeMode can be used directly for GEPA-based optimization of Code Mode behavior.
Out of scope:
- automatic server code mutation
- automatic deploy pipelines for MCP servers
- provider-specific server-side optimization logic
This project is focused on client-side behavior optimization and runnable demos.
- Docs homepage: https://superagenticai.github.io/supercodemode/
- Getting started: https://superagenticai.github.io/supercodemode/getting-started/
- Examples and guides: https://superagenticai.github.io/supercodemode/guides/
- CLI reference: https://superagenticai.github.io/supercodemode/reference/cli/
- Operations: https://superagenticai.github.io/supercodemode/operations/
Run docs locally:
```bash
mkdocs serve
```

Build docs:

```bash
mkdocs build
```

- `scm` uses the installed `gepa` and `mcp` from your environment
- a vendored GEPA contribution snapshot exists in `vendor/gepa_new_files`
- refresh the vendor snapshot with:

  ```bash
  GEPA_SOURCE_DIR=/path/to/gepa ./scripts/sync_gepa_vendor.sh
  ```
Build and publish with uv:
```bash
uv build
uv publish
```

If publishing to PyPI, make sure your credentials/token are configured for `uv publish`.
