Evaluate: Extract standalone LLM interaction crate (Pi layered architecture pattern) #684

@AlexMikhalev

Description

Parent Epic

#682 -- Evaluate Pi architectural patterns

Pattern

Pi uses a 3-layer architecture where each layer is independently usable:

pi-ai (unified LLM API, multi-provider streaming, token/cost tracking)
    |
pi-agent-core (stateful agent loop, tool execution, events, state management)
    |
pi-coding-agent (TUI, sessions, extensions, skills, themes)

Any consumer can use pi-ai alone for LLM streaming, pi-agent-core for an agent without UI, or the full stack.
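The layering idea can be sketched in Rust terms: a lower layer that is usable on its own, and a middle layer that builds on it without any UI. The module and function names below are illustrative only; they do not mirror pi's actual crate APIs.

```rust
mod ai {
    // Lowest layer: produce a stream of text chunks from a prompt.
    // Here the "stream" is simulated by splitting the prompt on whitespace.
    pub fn stream(prompt: &str) -> impl Iterator<Item = String> + '_ {
        prompt.split_whitespace().map(|w| w.to_string())
    }
}

mod agent {
    use super::ai;

    // Middle layer: an agent step that consumes the ai layer and
    // accumulates the streamed chunks into a single response.
    pub fn run_step(prompt: &str) -> String {
        ai::stream(prompt).collect::<Vec<_>>().join(" ")
    }
}

fn main() {
    // A consumer may use the ai layer alone...
    let chunks: Vec<String> = ai::stream("hello layered world").collect();
    println!("{} chunks", chunks.len());

    // ...or the agent layer on top of it, with no UI involved.
    println!("{}", agent::run_step("hello layered world"));
}
```

The point of the pattern is that the dependency arrow only points downward: the agent layer knows about the ai layer, never the reverse.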

Current State

LLM interaction code in terraphim-ai is scattered across:

  • terraphim_orchestrator -- spawns CLI agents (subprocess management)
  • terraphim_multi_agent -- agent configurations and model routing
  • terraphim_llm_proxy -- HTTP proxy with model routing
  • Ad-hoc scripts in cto-executive-system/automation/

There is no standalone crate for "call an LLM, get a response, track tokens/cost" that other crates can depend on without pulling in the full orchestrator or multi-agent machinery.
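To make the gap concrete, a minimal surface for "call an LLM, get a response, track tokens/cost" could look like the following. All names here (LlmProvider, Completion, Usage, EchoProvider) are hypothetical, not existing terraphim or pi APIs; the stub provider stands in for a real multi-provider backend.

```rust
#[derive(Debug, Clone, Copy)]
pub struct Usage {
    pub prompt_tokens: u64,
    pub completion_tokens: u64,
}

pub struct Completion {
    pub text: String,
    pub usage: Usage,
}

// The provider abstraction: any backend (OpenAI, Anthropic, local) would
// implement this, returning the completion text together with token usage.
pub trait LlmProvider {
    fn complete(&self, prompt: &str) -> Result<Completion, String>;
}

// A stub provider for illustration; it echoes the prompt and counts
// whitespace-separated words as "tokens".
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn complete(&self, prompt: &str) -> Result<Completion, String> {
        Ok(Completion {
            text: format!("echo: {prompt}"),
            usage: Usage {
                prompt_tokens: prompt.split_whitespace().count() as u64,
                completion_tokens: 2,
            },
        })
    }
}

fn main() {
    let provider = EchoProvider;
    let c = provider.complete("hello world").unwrap();
    println!("{} ({} prompt tokens)", c.text, c.usage.prompt_tokens);
}
```

A crate exposing only this trait and its data types could be depended on without pulling in subprocess management or routing machinery.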

Evaluation Questions

  1. What belongs in a standalone LLM crate? Candidates: provider abstraction, streaming response types, token counting, cost calculation, context serialisation, model registry.
  2. Does this overlap with existing crates? Check terraphim_llm_proxy (already has model routing) and terraphim_multi_agent (has agent configs).
  3. Would this simplify ADF? Currently terraphim_orchestrator shells out to claude -p and codex exec. A Rust-native LLM crate could replace some subprocess calls.
  4. What about the Claude Agent SDK? The Agent SDK (see the parent epic #682 for context) provides a Python/TypeScript library for Claude. A Rust LLM crate would serve a different purpose (direct API calls, multi-provider, no agent loop).

Acceptance Criteria

  • Map all LLM interaction points across the 54-crate workspace
  • Define proposed crate boundary (what goes in, what stays out)
  • Assess overlap with terraphim_llm_proxy and terraphim_multi_agent
  • Decision: extract, refactor existing, or reject with rationale

Labels: enhancement (New feature or request)
