
Conversation

@daniel-lxs
Member

Summary

  • Implements the RooMessage storage layer — a new RooMessage type system that wraps AI SDK ModelMessage with metadata (timestamps, condense/truncation IDs), replacing the legacy Anthropic-format ApiMessage
  • Uses result.response.messages from the AI SDK for assistant message storage instead of manually reconstructing messages from stream events and injecting custom blocks (thinking, redacted_thinking, thoughtSignature)
  • Eliminates validation errors on second turns caused by custom content blocks that don't match the AI SDK's ModelMessage schema
  • Removes dead side-channel handler methods (getThoughtSignature, getRedactedThinkingBlocks, getReasoningDetails, getSummary) and their backing fields from all providers
  • Adds backward-compatible converter (anthropicToRoo.ts) for loading old Anthropic-format conversation histories
  • Removes orphaned per-provider caching transform files and footgun prompting (file-based system prompt override)
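The RooMessage wrapper described above can be sketched as a plain type intersection. This is a hedged sketch, not the PR's actual definition: the metadata field names (`ts`, `condenseId`, `truncationId`) and the simplified stand-in for the AI SDK's `ModelMessage` union are assumptions based on the summary bullets.

```typescript
// Minimal stand-in for the AI SDK's ModelMessage union (the real type
// is exported by the "ai" package).
type ModelMessage =
	| { role: "user"; content: unknown }
	| { role: "assistant"; content: unknown }
	| { role: "tool"; content: unknown }

// Hypothetical shape of RooMessage: the AI SDK message plus Roo metadata.
// Field names here are illustrative, not the exact ones in the PR.
type RooMessage = ModelMessage & {
	ts?: number // creation timestamp
	condenseId?: string // links the message to a condense operation
	truncationId?: string // links the message to a truncation operation
}

const msg: RooMessage = {
	role: "user",
	content: [{ type: "text", text: "hello" }],
	ts: Date.now(),
}
```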

Test plan

  • Full test suite passes (366 files, 5469 tests, 0 failures)
  • TypeScript compilation clean (tsc --noEmit)
  • ESLint passes with 0 warnings
  • Pre-commit hooks (prettier + eslint) pass
  • Manual smoke test: Anthropic with extended thinking — verify second turn works, reasoning round-trips
  • Manual smoke test: Gemini with thought signature — verify tool use continues correctly
  • Manual smoke test: OpenRouter with reasoning_details — verify reasoning chain continuity
  • Manual smoke test: OpenAI Native with encrypted reasoning — verify encrypted content round-trips
  • Manual smoke test: Non-AI-SDK provider — verify fallback path still works

🤖 Generated with Claude Code

daniel-lxs and others added 8 commits February 10, 2026 22:20
…igration

Add foundation infrastructure for migrating message storage from Anthropic
format to AI SDK ModelMessage format (EXT-646).

- RooMessage type = AI SDK ModelMessage + Roo metadata (ts, condense, truncation)
- Anthropic-to-RooMessage converter for migrating old conversations on read
- Versioned storage (readRooMessages/saveRooMessages) with auto-detection
- flattenModelMessagesToStringContent utility for providers needing string content

No behavior change - purely additive. Existing code paths untouched.
EXT-647 will wire Task.ts to use these new functions.
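The versioned-storage idea with auto-detection could look like the sketch below. Assumptions are flagged inline: the v2 envelope shape (`{ version: 2, messages: [...] }`) is inferred from the commit message, and `anthropicToRoo` here is a trivial stand-in for the real converter in `anthropicToRoo.ts`.

```typescript
type RooMessage = { role: string; content: unknown; ts?: number }

// Stand-in for the real converter in anthropicToRoo.ts; here it only
// tags the legacy message into the RooMessage shape.
function anthropicToRoo(legacy: { role: string; content: unknown }): RooMessage {
	return { role: legacy.role, content: legacy.content }
}

// Auto-detect the on-disk format: a bare array means a legacy
// Anthropic-format history (convert on read); an envelope with
// version: 2 is the new RooMessage storage. The envelope shape is an
// assumption based on the commit message.
function readRooMessages(raw: string): RooMessage[] {
	const parsed = JSON.parse(raw)
	if (Array.isArray(parsed)) {
		return parsed.map(anthropicToRoo)
	}
	if (parsed && parsed.version === 2 && Array.isArray(parsed.messages)) {
		return parsed.messages
	}
	throw new Error("Unrecognized conversation history format")
}
```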
…s/index.ts and unnecessary re-exports

- Narrow summary from any[] to Array<{ type: string; text: string }> matching EncryptedReasoningItem
- Delete unused converters/index.ts barrel (knip: unused file)
- Remove UserContent, AssistantContent, ToolContent re-exports (importable from 'ai' directly)
- Remove FlattenMessagesOptions re-export from index.ts
Complete migration from Anthropic.MessageParam to RooMessage (ModelMessage + metadata)
for internal message storage and the entire provider pipeline.

Key changes:
- Task.ts: apiConversationHistory is now RooMessage[], stored via readRooMessages/saveRooMessages
- Message construction: userMessageContent uses TextPart/ImagePart, pendingToolResults uses ToolResultPart
- ApiHandler.createMessage() accepts RooMessage[] instead of Anthropic.Messages.MessageParam[]
- All 30 providers updated: messages passed directly to streamText()/generateText()
- Removed convertToAiSdkMessages() calls from all AI SDK providers
- buildCleanConversationHistory simplified from ~140 lines to ~20 lines
- Resume logic rewritten for RooMessage format
- Supporting systems updated: condense, messageManager, ClineProvider

Old Anthropic-format conversations auto-convert on first open via readRooMessages().
New conversations stored in versioned v2 format.

5536 tests pass, 0 failures.

EXT-647
Replace `as any` / `as unknown as` casts with proper RooMessage types
across the storage and API pipeline:

- Add UserContentPart alias and content-part type guards to rooMessage.ts
- Migrate maybeRemoveImageBlocks, formatResponse, processUserContentMentions,
  validateAndFixToolResultIds, condense system, and convertToOpenAiMessages
  from Anthropic SDK types to AI SDK RooMessage types
- Change initiateTaskLoop/recursivelyMakeClineRequests signatures from
  Anthropic.Messages.ContentBlockParam[] to UserContentPart[]
- Type the assistant message builder in Task.ts, remove double-casts
  in the API pipeline
- Remove unused Anthropic imports from 7 source files
- Update ToolResponse type and presentAssistantMessage to use ImagePart
- All functions accept both AI SDK (tool-call/tool-result) and legacy
  Anthropic (tool_use/tool_result) formats for backward compatibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add RooRoleMessage type, dual-format legacy block interfaces
(LegacyToolUseBlock, LegacyToolResultBlock), union types
(AnyToolCallBlock, AnyToolResultBlock), type guards, and accessor
helpers to rooMessage.ts. Replace scattered `as any` casts across
condense, context-management, Task.ts, validateToolResultIds.ts, and
openai-format.ts with these shared typed utilities. Remove leaked
Anthropic SDK import from context-management in favor of a
provider-agnostic ContentBlockParam type.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
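The dual-format guards and accessors named above can be sketched like this. The names (`AnyToolCallBlock`, `isAnyToolCallBlock`, `getToolCallId`) follow the commit message, but the exact field layouts are assumptions: the AI SDK uses `tool-call` parts with `toolCallId`, while legacy Anthropic blocks use `tool_use` with `id`.

```typescript
// AI SDK format: hyphenated type, toolCallId field.
type SdkToolCall = { type: "tool-call"; toolCallId: string; toolName: string }
// Legacy Anthropic format: underscored type, id field.
type LegacyToolUseBlock = { type: "tool_use"; id: string; name: string }
type AnyToolCallBlock = SdkToolCall | LegacyToolUseBlock

// Type guard accepting either format, so callers stop casting with `as any`.
function isAnyToolCallBlock(block: { type: string }): block is AnyToolCallBlock {
	return block.type === "tool-call" || block.type === "tool_use"
}

// Accessor hiding the field-name difference between the two formats.
function getToolCallId(block: AnyToolCallBlock): string {
	return block.type === "tool-call" ? block.toolCallId : block.id
}
```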
- toolResultToText: handle { type, value } object shape returned by
  getToolResultContent for AI SDK ToolResultPart.output, preventing
  tool result content from being lost during condensation summaries
- export-markdown: add tool-call/tool-result (AI SDK format) handling
  alongside legacy tool_use/tool_result, preventing post-migration
  conversations from rendering as [Unexpected content type: tool-call]

Addresses review feedback from PR #11386.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
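The `toolResultToText` fix amounts to unwrapping a discriminated object instead of expecting a bare string. A minimal sketch, assuming the `{ type, value }` union members named in the commit (the exact union in `ToolResultPart.output` may differ):

```typescript
// Output of getToolResultContent: either a plain string or a
// discriminated { type, value } object (assumed shape).
type ToolResultOutput =
	| string
	| { type: "text"; value: string }
	| { type: "json"; value: unknown }

function toolResultToText(output: ToolResultOutput): string {
	if (typeof output === "string") return output
	// Unwrap the object shape so condensation summaries keep the content
	// instead of dropping it.
	if (output.type === "text") return output.value
	return JSON.stringify(output.value)
}
```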
Instead of manually reconstructing assistant messages from stream events
and injecting custom blocks (thinking, redacted_thinking, thoughtSignature)
via side-channel handler methods, grab the fully-formed AssistantModelMessage
from the AI SDK's result.response.messages and store it directly. This
eliminates validation errors on subsequent turns caused by custom blocks
that don't match the AI SDK's ModelMessage schema.

- Add ApiStreamResponseMessageChunk type to stream chunk union
- Add yieldResponseMessage helper and wire it into consumeAiSdkStream
- All 24 AI SDK providers now yield response_message after streaming
- Task.ts captures and stores native-format messages directly
- addToApiConversationHistory detects native format (providerOptions on
  content parts) and skips legacy block injection
- buildCleanConversationHistory passes native-format messages through
- Remove dead side-channel methods: getThoughtSignature,
  getRedactedThinkingBlocks, getReasoningDetails, getSummary and their
  backing fields from all providers
- Add redacted_thinking conversion in anthropicToRoo.ts for backward compat

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
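The `response_message` mechanism described above can be sketched as one extra chunk yielded after the text stream drains. The chunk name follows the commit; the surrounding types and the callback for fetching `result.response.messages` are simplified assumptions, not the real `consumeAiSdkStream` signature.

```typescript
type AssistantModelMessage = { role: "assistant"; content: unknown }

type ApiStreamChunk =
	| { type: "text"; text: string }
	| { type: "response_message"; message: AssistantModelMessage }

// After the provider stream finishes, emit the SDK's own fully formed
// assistant message(s) so Task.ts can store them verbatim instead of
// reconstructing them from stream events.
async function* consumeAiSdkStream(
	textParts: AsyncIterable<string>,
	responseMessages: () => Promise<AssistantModelMessage[]>,
): AsyncGenerator<ApiStreamChunk> {
	for await (const text of textParts) {
		yield { type: "text", text }
	}
	for (const message of await responseMessages()) {
		yield { type: "response_message", message }
	}
}
```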
@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. Enhancement New feature or request labels Feb 11, 2026
@roomote
Contributor

roomote bot commented Feb 11, 2026


Re-reviewed after 984857e. Both commits address all previously flagged issues: getTaskWithId now uses readRooMessages for v2 envelope handling, saveApiConversationHistory checks return values, getImageDataUrl properly passes through data/HTTP URLs, and pendingToolResults are preserved on save failure. No new issues found.

Previous reviews

  • Bug: getTaskWithId in ClineProvider.ts does raw JSON.parse without handling the new v2 versioned envelope format, breaking markdown export for tasks saved after this PR
  • Nit: Redundant double `as ModelMessage[]` cast in bedrock.ts

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.


- Remove redundant double cast in bedrock.ts (as ModelMessage[] as ModelMessage[])
- Use isAnyToolCallBlock/getToolCallId in condense orphan filter to handle
  both AI SDK tool-call and legacy tool_use formats
- Fix misleading comment and remove dead conditional in processUserContentMentions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@hannesrudolph left a comment (Collaborator)
weeeeee

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 11, 2026
@daniel-lxs daniel-lxs merged commit e6f0e79 into main Feb 11, 2026
13 checks passed
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Feb 11, 2026
@daniel-lxs daniel-lxs deleted the feature/ext-647-implement-modelmessage-storage-layer branch February 11, 2026 18:58
hannesrudolph added a commit that referenced this pull request Feb 11, 2026
Move cache breakpoint logic from individual providers to a shared utility
called from Task.ts before createMessage(). Messages arrive at providers
pre-annotated with providerOptions, and the AI SDK routes the correct
options to the active provider automatically.

New files:
- src/api/transform/prompt-cache.ts: resolveCacheProviderOptions() +
  applyCacheBreakpoints() with provider adapter mapping
- src/api/transform/__tests__/prompt-cache.spec.ts: 14 test cases

Changes per provider:
- anthropic.ts: removed targeting block + applyCacheControlToAiSdkMessages()
- anthropic-vertex.ts: same
- minimax.ts: same
- bedrock.ts: removed targeting block + applyCachePointsToAiSdkMessages()

Key improvements:
- Targets non-assistant batches (user + tool) instead of only role=user.
  After PR #11409, tool results are separate role=tool messages that now
  correctly receive cache breakpoints.
- Single source of truth: cache strategy defined once in prompt-cache.ts
- Provider-specific config preserved: Bedrock gets 3 breakpoints + anchor,
  Anthropic family gets 2 breakpoints

Preserved (untouched):
- systemProviderOptions in all providers' streamText() calls
- OpenAI Native promptCacheRetention (provider-level, not per-message)
- Bedrock usePromptCache opt-in + supportsAwsPromptCache()

5,491 tests pass, 0 regressions.
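The shared cache-breakpoint idea could be sketched as below: a single utility walks the history backwards and annotates the last few non-assistant messages (user and tool alike) with `providerOptions`, so the AI SDK routes the right options to whichever provider is active. The function name follows the commit; the Anthropic-style `cacheControl` payload and the per-message (rather than per-batch) annotation are simplifying assumptions.

```typescript
type RooMessage = {
	role: "user" | "assistant" | "tool"
	content: unknown
	providerOptions?: Record<string, unknown>
}

// Annotate up to maxBreakpoints of the most recent non-assistant
// messages. Targeting non-assistant roles (not just role=user) means
// separate role=tool messages also receive cache breakpoints.
function applyCacheBreakpoints(messages: RooMessage[], maxBreakpoints: number): RooMessage[] {
	let remaining = maxBreakpoints
	for (let i = messages.length - 1; i >= 0 && remaining > 0; i--) {
		if (messages[i].role !== "assistant") {
			messages[i].providerOptions = {
				anthropic: { cacheControl: { type: "ephemeral" } },
			}
			remaining--
		}
	}
	return messages
}
```

A provider-agnostic adapter mapping (2 breakpoints for the Anthropic family, 3 plus an anchor for Bedrock) would then pick the payload per provider, rather than each provider re-implementing the walk.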