This is an end-to-end trace of a typical Loop request.
Use it when you need to answer: "what happened, where, and why?"
This walkthrough covers:
- Request intake and context loading.
- Complexity scoring and mode decision.
- Execution through orchestrator, REPL, and LLM layers.
- Trace, memory, and evidence persistence.
- Final response shaping for adapter surfaces.
Common entry points:
- Rust API callers.
- Adapter surfaces in `rlm-core/src/adapters/`.
- Binding callers via `pybind/`, `ffi/`, and Go wrappers.
Start debugging in:
- `rlm-core/src/adapters/claude_code/adapter.rs`
- `rlm-core/src/orchestrator.rs`
Inputs typically include:
- Prompt text.
- Session message history.
- Files and tool outputs.
- Memory query results, if enabled.
Relevant modules:
- `rlm-core/src/context/`
- `rlm-core/src/adapters/claude_code/adapter.rs`
- `rlm-core/src/memory/`
Expected outputs of this step:
- Structured request context.
- Candidate signals for complexity analysis.
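As a rough illustration, the structured context plus its candidate signals might look like this. The type and field names are hypothetical, not the actual `rlm-core/src/context/` types:

```rust
// Hypothetical request-context sketch; the real types live under
// rlm-core/src/context/ and will differ.
#[derive(Debug)]
pub struct RequestContext {
    pub prompt: String,
    pub history_len: usize,       // session messages loaded
    pub files: Vec<String>,       // files and tool outputs in scope
    pub memory_hits: Vec<String>, // memory query results, if enabled
}

impl RequestContext {
    /// Derive candidate signals for the complexity-analysis step.
    pub fn candidate_signals(&self) -> Vec<&'static str> {
        let mut signals = Vec::new();
        if self.files.len() > 1 {
            signals.push("multi_file_scope");
        }
        if self.prompt.to_lowercase().contains("architecture") {
            signals.push("architecture_intent");
        }
        if !self.memory_hits.is_empty() {
            signals.push("memory_context");
        }
        signals
    }
}
```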
The runtime classifies prompt characteristics and computes activation signals.
Relevant modules:
- `rlm-core/src/complexity.rs`
- `rlm-core/src/adapters/claude_code/hooks.rs`
Typical signals:
- Multi-file scope.
- Architecture analysis intent.
- User thoroughness markers.
- Request for speed-only answers.
Expected outputs of this step:
- Activation recommendation.
- Mode preference (`fast`, `balanced`, or `thorough` posture).
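A minimal sketch of how these signals could fold into a mode posture. The weights and thresholds below are invented for the example; the real scoring lives in `rlm-core/src/complexity.rs`:

```rust
// Illustrative mode selection from activation signals; the signal
// weights and thresholds here are assumptions, not the real logic.
#[derive(Debug, PartialEq)]
pub enum Mode { Fast, Balanced, Thorough }

pub fn choose_mode(
    multi_file: bool,
    architecture_intent: bool,
    thoroughness_marker: bool,
    speed_only: bool,
) -> Mode {
    // An explicit speed-only request short-circuits the scoring.
    if speed_only {
        return Mode::Fast;
    }
    // Weight user thoroughness markers higher than structural signals.
    let score = multi_file as u8 + architecture_intent as u8 + 2 * thoroughness_marker as u8;
    match score {
        0 => Mode::Fast,
        1 => Mode::Balanced,
        _ => Mode::Thorough,
    }
}
```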
Decision points:
- Use recursive orchestration or fast-path response.
- Select model routing strategy.
- Select fallback behavior if strict output parsing fails.
Relevant modules:
- `rlm-core/src/orchestrator.rs`
- `rlm-core/src/llm/router.rs`
- `rlm-core/src/signature/validation.rs`
- `rlm-core/src/signature/fallback.rs`
Expected outputs of this step:
- Concrete execution plan.
- Route metadata and budget posture.
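The decision step might produce a plan shaped roughly like this. Field names and route labels are placeholders, not the actual orchestrator or router contract:

```rust
// Hypothetical execution-plan shape; the real plan is assembled in
// rlm-core/src/orchestrator.rs with routing from rlm-core/src/llm/router.rs.
#[derive(Debug, PartialEq)]
pub enum ExecPath { FastPath, Recursive }

#[derive(Debug)]
pub struct ExecutionPlan {
    pub path: ExecPath,
    pub model: &'static str,    // placeholder route label, not a real model id
    pub strict_parse: bool,     // enforce structured-output validation
    pub lenient_fallback: bool, // retry with lenient extraction if parsing fails
}

pub fn build_plan(activate: bool, wants_structured: bool) -> ExecutionPlan {
    ExecutionPlan {
        path: if activate { ExecPath::Recursive } else { ExecPath::FastPath },
        model: if activate { "route:large" } else { "route:small" },
        strict_parse: wants_structured,
        lenient_fallback: wants_structured,
    }
}
```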
Execution may involve:
- REPL-backed context operations.
- LLM calls or batched calls.
- Module-level transformations.
- Structured output generation.
Relevant modules:
- `rlm-core/src/repl.rs`
- `rlm-core/src/llm/client.rs`
- `rlm-core/src/llm/batch.rs`
- `rlm-core/src/module/`
Expected outputs of this step:
- Candidate response payload.
- Execution metadata (costs, signals, timing).
The runtime records what happened for later debugging and learning.
Relevant modules:
- `rlm-core/src/reasoning/trace.rs`
- `rlm-core/src/reasoning/store.rs`
- `rlm-core/src/memory/store.rs`
- `rlm-core/src/trajectory.rs`
Expected outputs of this step:
- Traceable execution events.
- Optional memory updates.
- Recoverable diagnostics context.
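Conceptually, persistence behaves like an append-only event log that can be replayed during debugging. A toy sketch, with illustrative names; the real stores are in `rlm-core/src/reasoning/` and `rlm-core/src/memory/`:

```rust
// Toy append-only trace store for replaying execution events.
#[derive(Debug)]
pub struct TraceEvent {
    pub step: String, // e.g. "routing", "execution", "persistence"
    pub detail: String,
}

#[derive(Debug, Default)]
pub struct TraceStore {
    events: Vec<TraceEvent>,
}

impl TraceStore {
    pub fn record(&mut self, step: &str, detail: &str) {
        self.events.push(TraceEvent { step: step.into(), detail: detail.into() });
    }

    /// Recoverable diagnostics: replay every detail recorded for a step.
    pub fn replay(&self, step: &str) -> Vec<&str> {
        self.events
            .iter()
            .filter(|e| e.step == step)
            .map(|e| e.detail.as_str())
            .collect()
    }
}
```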
Response adapters shape final outputs with metadata suitable for callers.
Relevant modules:
- `rlm-core/src/adapters/claude_code/adapter.rs`
- `rlm-core/src/adapters/tui/adapter.rs`
Expected outputs of this step:
- User-facing result.
- Metadata about mode, signals, and usage.
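A minimal sketch of adapter-side shaping, assuming the adapter wraps the raw result in an envelope carrying mode, signal, and usage metadata. Field names are hypothetical, not the actual adapter contract:

```rust
// Hypothetical shaped-response envelope; the real adapter contracts
// live under rlm-core/src/adapters/.
#[derive(Debug)]
pub struct ShapedResponse {
    pub text: String,
    pub mode: String,         // e.g. "balanced"
    pub signals: Vec<String>, // signals that drove the mode decision
    pub tokens_used: u32,     // usage metadata for the caller
}

pub fn shape_response(raw: &str, mode: &str, signals: &[&str], tokens_used: u32) -> ShapedResponse {
    ShapedResponse {
        text: raw.trim().to_string(),
        mode: mode.to_string(),
        signals: signals.iter().map(|s| s.to_string()).collect(),
        tokens_used,
    }
}
```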
When behavior looks wrong, run:

```sh
cd /Users/rand/src/loop
rg -n "should_activate|mode|fallback|memory" rlm-core/src/adapters rlm-core/src/orchestrator.rs rlm-core/src/complexity.rs
```

Then:
- Confirm signal extraction path.
- Confirm decision branch selected.
- Confirm execution path produced expected metadata.
- Confirm persistence path did not drop critical context.
- Wrong activation choice:
  - Inspect `complexity.rs` and adapter hook signal propagation.
- Correct decision, wrong output shape:
  - Inspect signature validation and fallback extraction.
- Correct output, missing traceability:
  - Inspect trajectory and reasoning persistence paths.
- Tests pass, integration still odd:
  - Run the scenario gate: `make claude-adapter-gate`.
Related docs:
- `architecture.md`
- `module-map.md`
- `ooda-and-execution.md`
- `../troubleshooting/incident-playbook.md`
The system is complex, not mysterious. This walkthrough exists to keep it that way.