Problem
get currently has separate output execution paths for batch and stream behavior. That split makes internal evolution harder, duplicates output orchestration logic, and complicates adding bounded async buffering with consistent semantics.
We want batch behavior without a dedicated second pipeline: batch should be an execution policy of the same stream-oriented architecture.
Design
Use one async producer/consumer output pipeline and select a commit policy:
- StreamCommit: write incrementally to stdout/file.
- BatchCommit: write to memory, flush only after full success.
To support real bounded-queue backpressure, source emission must be async-awaitable (an async sink contract), not sync-callback based.
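The producer/consumer shape above can be sketched as follows. This is a minimal illustration, not the real implementation: the names (`run_pipeline`, `QUEUE_SIZE`, the sentinel convention) are assumptions, and the key point is that the producer `await`s `put()` on a bounded queue, which is the backpressure a sync callback contract cannot provide.

```python
import asyncio

QUEUE_SIZE = 64  # bounded: producers block (await) when the queue is full


async def producer(queue: asyncio.Queue, records) -> None:
    """Source side: awaiting put() is the backpressure point that the
    async sink contract enables; a sync callback could not block here."""
    for record in records:
        await queue.put(record)
    await queue.put(None)  # sentinel: end of stream


async def consumer(queue: asyncio.Queue, write) -> None:
    """Single consumer drains in emission order, so output order is
    exactly the provider emission order."""
    while True:
        record = await queue.get()
        if record is None:
            break
        write(record)


async def run_pipeline(records, write) -> None:
    """One shared pipeline; the commit policy only changes what `write` does."""
    queue: asyncio.Queue = asyncio.Queue(maxsize=QUEUE_SIZE)
    await asyncio.gather(producer(queue, records), consumer(queue, write))
```

With a single consumer draining one queue, end-to-end ordering falls out of the structure rather than needing explicit sequencing.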
Scope
- Refactor source stream emission contract from sync callback to async sink.
- Implement one shared output pipeline in get with a bounded queue and a single consumer formatter.
- Treat batch as commit policy (no separate formatter architecture).
- Keep stable output order (provider emission order).
- Keep existing formatters and apply policy-based behavior:
  - stream-friendly: text, jsonl/ndjson.
  - aggregate-first under stream mode: json, yaml auto-resolve to batch with warning.
- Preserve existing structured error categories/exit semantics.
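The mode/format resolution described above could look roughly like this. The function name, policy strings, and format sets are illustrative assumptions, not the actual API:

```python
import warnings

# Hypothetical classification mirroring the spec's two formatter groups.
STREAM_FRIENDLY = {"text", "jsonl", "ndjson"}
AGGREGATE_FIRST = {"json", "yaml"}


def resolve_commit_policy(mode: str, fmt: str) -> str:
    """Map (mode, format) to a commit policy, auto-upgrading
    aggregate-first formats from stream to batch with a warning."""
    if mode == "batch":
        return "BatchCommit"
    if fmt in AGGREGATE_FIRST:
        warnings.warn(
            f"format {fmt!r} requires the full result set; "
            "auto-resolving to batch commit"
        )
        return "BatchCommit"
    return "StreamCommit"
```

Keeping the resolution in one small pure function makes the mode/format behavior in the acceptance criteria easy to test exhaustively.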
Boundary
- No new user-facing flags/config for queue size in this phase.
- No format redesign and no new output format additions.
- No change to provider data model semantics beyond sink-contract adaptation.
Acceptance Criteria
- One shared output orchestration path handles both batch and stream behavior.
- Batch behavior is all-or-nothing via the commit policy, not via a separate dedicated implementation.
- Stream mode preserves incremental output for text and jsonl/ndjson.
- json/yaml in stream mode are auto-upgraded to batch policy with warning.
- End-to-end ordering is stable and deterministic.
- Tests cover ordering, batch commit semantics, stream partial-output semantics, and mode/format resolution behavior.
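The two commit semantics the criteria distinguish (incremental partial output vs. all-or-nothing) can be sketched as two policy objects with the same interface; class and method names here are assumptions for illustration:

```python
import io
import sys


class StreamCommit:
    """Write incrementally; output already written survives a failure
    (stream partial-output semantics)."""

    def __init__(self, out=sys.stdout):
        self.out = out

    def write(self, chunk: str) -> None:
        self.out.write(chunk)

    def finish(self) -> None:
        pass  # nothing buffered, nothing to flush

    def abort(self) -> None:
        pass  # partial output stays


class BatchCommit:
    """Buffer in memory; flush only after the whole run succeeds
    (all-or-nothing semantics)."""

    def __init__(self, out=sys.stdout):
        self.out = out
        self.buffer = io.StringIO()

    def write(self, chunk: str) -> None:
        self.buffer.write(chunk)

    def finish(self) -> None:
        self.out.write(self.buffer.getvalue())  # single flush on success

    def abort(self) -> None:
        self.buffer = io.StringIO()  # discard everything: all-or-nothing
```

Because both policies expose the same `write`/`finish`/`abort` surface, the shared pipeline stays identical and only the injected policy changes.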
Context
This follows current performance/concurrency improvements and aligns with the requirement to keep batch support without carrying a fully separate output implementation.