
feat(google-gemini): update model YAMLs [bot]#369

Open
hganwani-droid wants to merge 1 commit into main from bot/update-google-gemini-20260319-105004

Conversation


@hganwani-droid hganwani-droid commented Mar 19, 2026

Auto-generated by poc-agent for provider google-gemini.


Note

Medium Risk
Updates model metadata that governs token limits, supported modalities, and enabled features; incorrect values could cause request validation/routing issues or truncated outputs at runtime.

Overview
Refreshes Google Gemini model YAML metadata to better match current model capabilities.

Adds/normalizes max_tokens and explicit modalities (including output modalities) across several models, introduces/updates context_window and other limits (notably expanding deep-research-pro-preview-12-2025 to a 1M context window while adjusting others), and tweaks feature flags (e.g., removes some structured_output, adds code_execution).

Also updates embedding metadata (output_vector_size) and adjusts specific model limits like nano-banana-pro-preview input tokens and gemini-3.1-flash-image-preview output token cap/param max.
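For illustration, a model entry of the kind these updates touch could look like the sketch below. The field names (max_tokens, context_window, modalities, features) are taken from this PR's description; the model name, nesting, and values are hypothetical, not the repository's actual schema.

```yaml
# Hypothetical model entry; field names follow this PR, values are illustrative only.
- name: example-gemini-model
  limits:
    max_tokens: 8192            # normalized alongside the input/output limits
    max_input_tokens: 8192
    max_output_tokens: 8192
    context_window: 1000000     # e.g. the 1M window given to deep-research-pro-preview-12-2025
  modalities:
    input: [text, image, audio, video]
    output: [text]              # output modalities are now listed explicitly
  features:
    - code_execution            # example of a feature flag toggled in this PR
```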

Written by Cursor Bugbot for commit 6ddca28. This summary will update automatically on new commits.


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


limits:
  max_input_tokens: 8192
  max_output_tokens: 2048
  context_window: 32000


Missing token limits after removing max_input/output_tokens

High Severity

The limits section for gemma-3n-e4b-it now only contains context_window: 32000 — both max_input_tokens and max_output_tokens were removed, and max_tokens was never added. Every other model in this provider (including the sibling gemma-3n-e2b-it) retains all three token limit fields alongside context_window. This likely breaks token limit enforcement for this model.

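A minimal sketch of the fix Bugbot is suggesting, assuming the three-field convention that every other model in this provider uses; the max_tokens value is a placeholder and should be copied from the sibling gemma-3n-e2b-it entry:

```yaml
limits:
  max_tokens: 8192         # placeholder; mirror the value used by gemma-3n-e2b-it
  max_input_tokens: 8192   # restored, per the removed lines shown above
  max_output_tokens: 2048  # restored, per the removed lines shown above
  context_window: 32000
```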

- text
- audio
- image
- video


Image modality dropped from 12-2025 audio preview

Medium Severity

The image input modality was removed and replaced with video for gemini-2.5-flash-native-audio-preview-12-2025. However, the older gemini-2.5-flash-native-audio-preview-09-2025 model in this same PR correctly retains both image and video. This appears to be an accidental replacement rather than an intentional removal, as both models in this family support image input.

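A sketch of the suggested fix, keeping image alongside video as the sibling gemini-2.5-flash-native-audio-preview-09-2025 does; the modalities/input nesting is an assumption, since only the list items appear in the diff:

```yaml
modalities:
  input:
    - text
    - audio
    - image   # restored; appears to have been accidentally replaced by video
    - video
```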

