Responses API: multi-turn conversations 400 on turn 2 when passing response.output back as input #3008

@achandmsft

Description

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

Passing response.output items back as input via model_dump() on reasoning models compiles fine but 400s on turn 2:

400: Unknown parameter: 'input[1].status'

model_dump() fabricates status: None and encrypted_content: None for fields the API never returned, and the API rejects unknown parameters even when they are null.

Separately, there's an undocumented pairing constraint: reasoning+message must be consecutive pairs in input. Filtering to only messages (dropping reasoning) also 400s. model_dump(exclude_none=True) fixes the null fabrication but not the pairing constraint.

Root cause is the spec (openai/openai-openapi#536). Same root cause as #2561 (open since Aug 2025, no fix).

Either model_dump() should produce valid input items, or an .as_input() helper should strip output-only fields.
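A helper along the lines suggested above could strip the problem fields before items are replayed. This is a sketch, not library API: the name as_input and the OUTPUT_ONLY_FIELDS list are assumptions based on the 400 errors reported here.

```python
from typing import Any

# Fields the API returns on output items but rejects when echoed back as
# input. Illustrative list, inferred from the "Unknown parameter" errors.
OUTPUT_ONLY_FIELDS = {"status", "encrypted_content"}

def as_input(item: dict[str, Any]) -> dict[str, Any]:
    """Drop fabricated nulls and output-only fields from a dumped item."""
    return {
        k: v
        for k, v in item.items()
        if v is not None and k not in OUTPUT_ONLY_FIELDS
    }
```

Usage would be `conversation.append(as_input(item.model_dump()))`. Note this only addresses the null fabrication; the reasoning+message pairing constraint still applies, so reasoning items must stay in the history alongside their messages.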

Workaround: use previous_response_id instead of replaying history manually.
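The workaround keeps history on the server, so no output items ever need to be dumped and replayed. A minimal sketch (the function name and the client-as-parameter shape are mine; the create() parameters match the repro below plus previous_response_id):

```python
def run_conversation(client, messages, model="o4-mini"):
    """Multi-turn conversation that chains turns via previous_response_id
    instead of passing response.output back as input."""
    previous_id = None
    replies = []
    for msg in messages:
        response = client.responses.create(
            model=model,
            input=[{"role": "user", "content": msg}],
            previous_response_id=previous_id,
            max_output_tokens=200,
        )
        previous_id = response.id  # chain the next turn to this response
        replies.append(response.output_text)
    return replies

# from openai import OpenAI
# run_conversation(OpenAI(), ["What is the capital of France?", "And Germany?"])
```

Each turn sends only the new user message; the server reconstructs the context (including reasoning items) from the referenced response, which sidesteps both the null-fabrication and the pairing issues.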

To Reproduce

from openai import OpenAI

client = OpenAI()
conversation = []
for msg in ["What is the capital of France?", "And Germany?"]:
    conversation.append({"role": "user", "content": msg})
    response = client.responses.create(
        model="o4-mini", input=conversation, max_output_tokens=200,
    )
    for item in response.output:
        conversation.append(item.model_dump())  # compiles, 400s on turn 2

Dropping reasoning items and keeping only messages also 400s (pairing constraint).

OS

Windows 11, also reproduced on Linux

Python version

Python v3.13

Library version

openai v2.29.0
