fix(ai): strip openai itemID from tool call metadata #889
VaguelySerious merged 1 commit into vercel:main
Conversation
…all errors
Signed-off-by: voyager14 <21mh124@queensu.ca>
🦋 Changeset detected. Latest commit: 6f8c6eb. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
@michael-han-dev is attempting to deploy a commit to the Vercel Labs Team on Vercel. A member of the Team first needs to authorize it.
```ts
  }
  return meta;
}
```
@michael-han-dev — My initial approach followed this pattern, where I did not provide (itemId + reasoningItem) or previousResponseId. However, I later updated the implementation to include previousResponseId and raised PR #886 to address this.
I wanted to confirm my understanding:
Without (itemId + reasoningItem) or previousResponseId, the code will still function, but I'm unclear on whether the reasoning models can access and leverage the prior reasoning text to improve performance. If the reasoning text is not actually available to the models at inference time, this approach may be less effective than intended.
Could you help clarify whether this assumption is correct, or whether I'm overlooking something in how the reasoning context is preserved?
Hi @bhuvaneshprasad, thanks for bringing this up! Adding previousResponseId would definitely give the model access to prior reasoning during tool calls. I didn't add it here since it would create a dependency on OpenAI's server-side state, which breaks the self-contained design of DurableAgent (among other things). I wanted to keep this PR minimal and match the existing behaviour.
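To make the trade-off concrete, here is a hedged sketch of the two request shapes against the OpenAI Responses API. `previous_response_id` is the public Responses API parameter; the variable names and message data are illustrative, not from this PR.

```typescript
type Msg = { role: "user" | "assistant" | "tool"; content: string };

// Placeholder conversation data for illustration only.
const fullConversationHistory: Msg[] = [{ role: "user", content: "book a flight" }];
const newMessagesOnly: Msg[] = [{ role: "tool", content: "flight booked" }];
const lastResponseId = "resp_abc123"; // hypothetical id from the previous turn

// Option A (current DurableAgent behaviour): resend the full conversation
// history each turn, so every request is self-contained and no server-side
// state is required.
const selfContained = {
  model: "gpt-5-mini",
  input: fullConversationHistory,
};

// Option B (the #886 direction): reference OpenAI's server-side state so the
// model can see its prior reasoning, at the cost of depending on that state.
const serverStateful = {
  model: "gpt-5-mini",
  previous_response_id: lastResponseId,
  input: newMessagesOnly,
};
```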
```
---
"@workflow/ai": patch
---

strip OpenAI itemId from providerMetadata to fix Responses API tool call errors
```
Suggested change:

```diff
-strip OpenAI itemId from providerMetadata to fix Responses API tool call errors
+Strip OpenAI itemId from providerMetadata to fix Responses API tool call errors, fixes #880
```
```ts
/**
 * Strip OpenAI's itemId from providerMetadata (requires reasoning items we don't preserve).
 * Preserves all other provider metadata (e.g., Gemini's thoughtSignature).
```
Suggested change:

```diff
  * Preserves all other provider metadata (e.g., Gemini's thoughtSignature).
+ * See https://github.com/vercel/workflow/issues/880
```
|
I'm considering merging this PR first, and then re-evaluating #886 if @bhuvaneshprasad still wants to pursue that.
|
I followed up based on @bhuvaneshprasad's comment. I ran into this discrepancy as part of our work on better AI SDK + Workflow compatibility.
Description
OpenAI's Responses API includes an itemId in tool call metadata. When we pass that back in the next request, OpenAI expects the matching reasoning item to be present too. DurableAgent doesn't keep reasoning items around, so tool calls were failing with errors about missing reasoning items. See #880.
This change strips itemId from OpenAI metadata before we add tool calls to conversation history. We still keep other metadata like Gemini's thoughtSignature. We don't need previousResponseId tracking since DurableAgent already sends the full conversation history.
Fixes #880
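The stripping step described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the helper name `stripOpenAIItemId` and the metadata shape are assumptions.

```typescript
type ProviderMetadata = Record<string, Record<string, unknown>>;

// Remove only OpenAI's itemId; keep every other OpenAI field and all other
// providers' metadata (e.g., Gemini's thoughtSignature) untouched.
function stripOpenAIItemId(
  meta: ProviderMetadata | undefined
): ProviderMetadata | undefined {
  if (!meta?.openai) return meta;
  const { itemId, ...rest } = meta.openai;
  return { ...meta, openai: rest };
}
```

Running something like this over each tool call's metadata before appending it to conversation history keeps requests self-contained, so OpenAI never looks for a reasoning item it was not sent.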
How did you test your changes?
Added 3 unit tests: one for stripping just itemId, one for keeping other OpenAI fields, and one for mixed provider metadata. Also tested manually with the flight-booking-app example using openai('gpt-5-mini'); tool calls now work.
PR Checklist - Required to merge
- [ ] `pnpm changeset` was run to create a changelog for this PR (use `pnpm changeset --empty` if you are changing documentation or workbench apps)
- [ ] Commits are signed off (`git commit --signoff` on your commits)