Description
Confirm this is an issue with the Python library and not an underlying OpenAI API
- This is an issue with the Python library
Describe the bug
Passing response.output items back as input via model_dump() on reasoning models works on turn 1 but 400s on turn 2:
400: Unknown parameter: 'input[1].status'
model_dump() fabricates status: None, encrypted_content: None for fields the API never returned. The API rejects null unknown parameters.
Separately, there's an undocumented pairing constraint: reasoning+message must be consecutive pairs in input. Filtering to only messages (dropping reasoning) also 400s. model_dump(exclude_none=True) fixes the null fabrication but not the pairing constraint.
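The pairing constraint, as observed, in data form. This is an illustrative sketch, not API output; item shapes are abbreviated and the field values are made up:

```python
# Observed (undocumented) constraint: each reasoning item must be immediately
# followed by its paired message item in `input`. Shapes are illustrative only.
valid_input = [
    {"role": "user", "content": "What is the capital of France?"},
    {"type": "reasoning", "id": "rs_abc", "summary": []},            # pair: reasoning...
    {"type": "message", "role": "assistant", "content": "Paris."},   # ...then its message
    {"role": "user", "content": "And Germany?"},
]

# This filter (keep only messages, drop reasoning) looks harmless but 400s,
# because it breaks the reasoning+message pair:
filtered_input = [item for item in valid_input if item.get("type") != "reasoning"]
```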
Root cause is the spec (openai/openai-openapi#536). Same root cause as #2561 (open since Aug 2025, no fix).
Either model_dump() should produce valid input items, or an .as_input() helper should strip output-only fields.
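A hypothetical as_input()-style helper could look like the sketch below. It is not part of the SDK; the output-only field list is an assumption inferred from the 400 above:

```python
def as_input_item(item: dict) -> dict:
    """Strip a model_dump()'d output item down to what `input` accepts.

    Drops None values (model_dump() fabricates status=None and
    encrypted_content=None for fields the API never returned) and fields
    the input schema rejects outright.
    """
    # Assumption: inferred from "Unknown parameter: 'input[1].status'".
    output_only = {"status"}
    return {k: v for k, v in item.items() if v is not None and k not in output_only}
```

This is roughly model_dump(exclude_none=True) plus dropping status; note it does not address the pairing constraint, which requires keeping reasoning items in place next to their messages.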
Workaround: use previous_response_id instead of replaying history manually.
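The workaround in practice, sketched as a small helper that threads previous_response_id through each turn (the helper name is made up; client is assumed to be a constructed OpenAI() instance):

```python
def run_turns(client, messages, model="o4-mini"):
    """Multi-turn loop using server-side history via previous_response_id.

    Each turn sends only the new user message; the API reconstructs the
    conversation from previous_response_id, so no output items (and no
    fabricated status/encrypted_content fields) are ever echoed back.
    """
    prev_id = None
    last = None
    for msg in messages:
        last = client.responses.create(
            model=model,
            input=[{"role": "user", "content": msg}],
            previous_response_id=prev_id,
            max_output_tokens=200,
        )
        prev_id = last.id
    return last
```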
Related:
- Responses API: undocumented reasoning+message pairing constraint breaks multi-turn conversations across all SDKs openai-openapi#536 (spec root cause)
- Responses API: undocumented reasoning+message pairing constraint breaks multi-turn conversations #3009 (pairing constraint, separate from this null fabrication bug)
- Responses API: undocumented reasoning+message pairing constraint breaks multi-turn conversations openai-node#1791 (same pairing bug, TypeScript)
- openclaw/openclaw#49167 (OpenClaw, 64.9k forks, production breakage)
- GPT-5 + tool calls: Error code: 400 - Item 'rs_...' of type 'reasoning' was provided without its required following item. openai-agents-python#1660 (Agents SDK)
- ModelHTTPError caused by OpenAI client: Item of type 'function_call' was provided without its required 'reasoning' item pydantic/pydantic-ai#3230 (downstream breakage)
To Reproduce

```python
from openai import OpenAI

client = OpenAI()
conversation = []
for msg in ["What is the capital of France?", "And Germany?"]:
    conversation.append({"role": "user", "content": msg})
    response = client.responses.create(
        model="o4-mini", input=conversation, max_output_tokens=200,
    )
    for item in response.output:
        conversation.append(item.model_dump())  # works on turn 1, 400s on turn 2
```

Dropping reasoning items and keeping only messages also 400s (pairing constraint).
OS
Windows 11, also reproduced on Linux
Python version
Python v3.13
Library version
openai v2.29.0