Merged
39 changes: 38 additions & 1 deletion providers/openai/gpt-5.4-mini-2026-03-17.yaml
@@ -1,2 +1,39 @@
-mode: unknown
+costs:
+  - cache_read_input_token_cost: 7.5e-8
+    input_cost_per_token: 7.5e-7
+    input_cost_per_token_batches: 3.75e-7
+    output_cost_per_token: 0.0000045
+    output_cost_per_token_batches: 0.00000225
+    region: "*"
+features:
+  - function_calling
+  - parallel_function_calling
+  - tool_choice
+  - prompt_caching
+  - structured_output
+  - system_messages
+limits:
+  context_window: 400000
+  max_output_tokens: 128000
+  max_tokens: 128000
+modalities:
+  input:
+    - text
+    - image
+  output:
+    - text
+mode: chat
 model: gpt-5.4-mini-2026-03-17
+params:
+  - defaultValue: 128
+    key: max_tokens
+    maxValue: 128000
+    minValue: 1
+  - defaultValue: medium
+    key: reasoning_effort
+sources:
+  - https://developers.openai.com/api/docs/pricing
+  - https://developers.openai.com/api/docs/deprecations
+  - https://developers.openai.com/api/docs/guides/reasoning
+  - https://developers.openai.com/api/docs/guides/structured-outputs
+thinking: true
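The per-token costs in these entries are small scientific-notation USD rates, so a quick way to sanity-check them is to price a concrete request. Below is a minimal sketch, assuming the gpt-5.4-mini-2026-03-17 rates above; the `estimate_cost` helper is hypothetical and not part of any SDK, and the field names simply mirror the YAML keys.

```python
# Hypothetical helper: estimate the USD cost of one request from the
# pricing fields in the gpt-5.4-mini-2026-03-17 entry above.
PRICING = {
    "input_cost_per_token": 7.5e-7,        # per uncached input token
    "output_cost_per_token": 0.0000045,    # per output token
    "cache_read_input_token_cost": 7.5e-8, # per cache-hit input token
}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimated USD cost; cache hits are billed at the cache-read rate."""
    uncached = input_tokens - cached_input_tokens
    return (uncached * PRICING["input_cost_per_token"]
            + cached_input_tokens * PRICING["cache_read_input_token_cost"]
            + output_tokens * PRICING["output_cost_per_token"])

# 10,000 input tokens (4,000 served from cache) and 2,000 output tokens:
cost = estimate_cost(10_000, 2_000, cached_input_tokens=4_000)
```

Note that the cache-read rate is one tenth of the full input rate, so prompt caching dominates the savings on long, repeated prompts.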
38 changes: 37 additions & 1 deletion providers/openai/gpt-5.4-mini.yaml
@@ -1,2 +1,38 @@
-mode: unknown
+costs:
+  - cache_read_input_token_cost: 7.5e-8
+    input_cost_per_token: 7.5e-7
+    input_cost_per_token_batches: 3.75e-7
+    output_cost_per_token: 0.0000045
+    output_cost_per_token_batches: 0.00000225
+    region: "*"
+features:
+  - function_calling
+  - structured_output
+  - prompt_caching
+  - tools
+  - tool_choice
+  - system_messages
+limits:
+  context_window: 400000
+  max_output_tokens: 128000
+  max_tokens: 128000
+modalities:
+  input:
+    - text
+    - image
+  output:
+    - text
+mode: chat
 model: gpt-5.4-mini
+params:
+  - defaultValue: 128000
+    key: max_tokens
+    maxValue: 128000
+    minValue: 1
+  - defaultValue: medium
+    key: reasoning_effort
+sources:
+  - https://developers.openai.com/api/docs/models/gpt-5.4-mini
+  - https://developers.openai.com/api/docs/pricing
+  - https://developers.openai.com/api/docs/guides/latest-model
+thinking: true
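The gpt-5.4-mini entries carry both real-time and `_batches` rates, and the batch rates are exactly half. A small sketch makes the relationship concrete; the constants mirror the YAML fields above, while `job_cost` is an illustrative helper, not an official calculator.

```python
# Real-time vs. batch pricing from the gpt-5.4-mini entry above.
REALTIME = {"input": 7.5e-7, "output": 0.0000045}
BATCH = {"input": 3.75e-7, "output": 0.00000225}  # *_batches fields

def job_cost(rates: dict, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a job at the given per-token rates."""
    return input_tokens * rates["input"] + output_tokens * rates["output"]

# A 1M-input / 200k-output job:
realtime = job_cost(REALTIME, 1_000_000, 200_000)  # 0.75 + 0.90 = 1.65
batch = job_cost(BATCH, 1_000_000, 200_000)        # 0.375 + 0.45 = 0.825
```

Since every `_batches` rate here is half its real-time counterpart, batch submission halves the total regardless of the input/output token mix.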
35 changes: 34 additions & 1 deletion providers/openai/gpt-5.4-nano-2026-03-17.yaml
@@ -1,2 +1,35 @@
-mode: unknown
+costs:
+  - cache_read_input_token_cost: 2.e-8
+    input_cost_per_token: 2.e-7
+    output_cost_per_token: 0.00000125
+    region: "*"
+features:
+  - function_calling
+  - parallel_function_calling
+  - tool_choice
+  - system_messages
+  - structured_output
+  - prompt_caching
+limits:
+  context_window: 400000
+  max_output_tokens: 128000
+  max_tokens: 128000
+modalities:
+  input:
+    - text
+    - image
+  output:
+    - text
+mode: chat
 model: gpt-5.4-nano-2026-03-17
+params:
+  - defaultValue: 128
+    key: max_tokens
+    maxValue: 128000
+    minValue: 1
+  - defaultValue: medium
+    key: reasoning_effort
+    type: string
+sources:
+  - https://developers.openai.com/api/docs/models/gpt-5.4-nano
+thinking: true
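Entries like these invite a few mechanical consistency checks: output limits should fit inside the context window, and each param's default should lie within its declared range. A minimal sketch under those assumptions follows; the dict literal stands in for a parsed gpt-5.4-nano-2026-03-17 entry, and `validate` is a hypothetical checker, not part of this repository.

```python
# Hypothetical sanity checks over a parsed model entry.
entry = {
    "limits": {"context_window": 400_000,
               "max_output_tokens": 128_000,
               "max_tokens": 128_000},
    "params": [{"key": "max_tokens", "defaultValue": 128,
                "minValue": 1, "maxValue": 128_000}],
}

def validate(entry: dict) -> list:
    """Return human-readable problems; an empty list means the entry passed."""
    problems = []
    limits = entry["limits"]
    if limits["max_output_tokens"] > limits["context_window"]:
        problems.append("max_output_tokens exceeds context_window")
    for p in entry["params"]:
        # Only ranged params (e.g. max_tokens) declare min/max bounds.
        if "minValue" in p and "maxValue" in p:
            if not (p["minValue"] <= p["defaultValue"] <= p["maxValue"]):
                problems.append(f"{p['key']}: defaultValue outside [min, max]")
    return problems
```

Running such checks in CI would catch a default outside its declared bounds before a provider file is merged.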
30 changes: 29 additions & 1 deletion providers/openai/gpt-5.4-nano.yaml
@@ -1,2 +1,30 @@
-mode: unknown
+costs:
+  - cache_read_input_token_cost: 2.e-8
+    input_cost_per_token: 2.e-7
+    output_cost_per_token: 0.00000125
+    region: "*"
+features:
+  - function_calling
+  - structured_output
+  - tool_choice
+  - system_messages
+limits:
+  context_window: 400000
+  max_output_tokens: 128000
+  max_tokens: 128000
+modalities:
+  input:
+    - text
+    - image
+  output:
+    - text
+mode: chat
 model: gpt-5.4-nano
+params:
+  - defaultValue: 128
+    key: max_tokens
+    maxValue: 128000
+    minValue: 1
+sources:
+  - https://developers.openai.com/api/docs/models/gpt-5.4-nano
+thinking: true