Feature request: native llm step type for AI agent flows #94

@ameet

Description

Problem

Calling an LLM from a flow currently requires a 3-step workaround:

  1. file-write — save prompt to temp file
  2. bash — pipe to claude --print with --tools and --allowed-tools flags
  3. code — parse the envelope ({"type":"result","result":"..."}) and strip markdown fences

This pattern is repeated in nearly every AI-powered flow. It introduces:

  • Fragile string parsing (envelope unwrapping, fence-stripping, preamble removal)
  • Double flag requirement (--tools + --allowed-tools) that's easy to forget
  • Timeout management at the bash level instead of the step level
  • No structured error when the LLM call fails vs. when the shell command fails

Proposal

A native step type:

{
  "id": "analyzeData",
  "type": "llm",
  "llm": {
    "model": "claude-sonnet-4-6",
    "prompt": "{{$.steps.buildPrompt.output.text}}",
    "outputFormat": "json",
    "tools": ["WebSearch", "WebFetch"],
    "timeout": 120000
  }
}

The runtime would handle:

  • Envelope unwrapping automatically
  • JSON parsing when outputFormat: "json"
  • Structured error on failure (distinguishable from a shell error)
  • $.steps.analyzeData.output returns the parsed result directly

Why this matters

LLM calls are the primary use case for AI agent flows. Making them a first-class step type would eliminate the most common source of boilerplate and parsing bugs across flow authors.
