Conversation
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```diff
- max_output_tokens: 32768
- max_tokens: 32768
+ max_output_tokens: 4096
+ max_tokens: 4096
```
Inconsistent max output tokens across vision model variants
Medium Severity
This PR reduces max_output_tokens and max_tokens from 32768 to 4096 in grok-2-vision-1212.yaml, while simultaneously adding max_tokens: 32768 to grok-2-vision-latest.yaml (which retains max_output_tokens: 32768). Since -latest typically aliases to the -1212 dated snapshot, these are contradictory changes to equivalent models. External documentation indicates the model supports 32768 output tokens. The reduction to 4096 could incorrectly prevent users from generating longer outputs.
Additional Locations (1)
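One consistent resolution (an illustrative sketch; the field names are taken from the diff above, and it assumes the `-latest` alias should track the dated snapshot) is to keep both variants at the documented 32768-token cap:

```yaml
# grok-2-vision-1212.yaml and grok-2-vision-latest.yaml (illustrative fragment)
# Keep the dated snapshot and the -latest alias in sync at the documented cap.
max_output_tokens: 32768
max_tokens: 32768
```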
```yaml
input:
  - text
  - image
output:
```
Missing removeParams for image generation model variant
Low Severity
This PR adds removeParams (stripping max_tokens, temperature, top_p, n, stop, stream) to grok-2-image-latest.yaml but not to its equivalent dated snapshot grok-2-image-1212.yaml, even though both files were modified in this PR. Every other mode: image model that was updated in this PR received removeParams. Without it, chat-oriented parameters from default.yaml may be sent to the image generation API, potentially causing errors.
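A sketch of the missing block, assuming `grok-2-image-1212.yaml` should mirror the `removeParams` list the PR added to the other `mode: image` models (the parameter list below is taken from the issue description):

```yaml
# grok-2-image-1212.yaml (illustrative fragment)
# Strip chat-oriented params inherited from default.yaml so they are not
# forwarded to the image generation API.
removeParams:
  - max_tokens
  - temperature
  - top_p
  - n
  - stop
  - stream
```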


Auto-generated by poc-agent for provider xai.
Note
Medium Risk
Updates xAI model metadata (limits, supported modalities/features, and pricing fields), which can change request validation and cost calculations for Grok models. Risk is moderate due to potential downstream assumptions about token limits and allowed params.
Overview
Refreshes xAI Grok provider YAMLs to better reflect current capabilities and pricing, including adding/standardizing `modalities` (explicit text/image/video inputs and text/image outputs) and expanding `features` (e.g., `structured_output`, `prompt_caching`). Adjusts model limits/params (notably Grok-2 Vision output token caps and adding `context_window`/`max_tokens` fields across several Grok-3/4 configs) and introduces batch pricing fields (`*_batches`) for multiple Grok-3/4 models. Adds/updates request shaping via `removeParams` for image/video models (dropping generation-only chat params like `max_tokens`, `temperature`, `top_p`, `stop`, `stream`) and marks several Grok-4-family configs as `thinking: true`.

Written by Cursor Bugbot for commit 277c1a2. This will update automatically on new commits.
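Taken together, the changes summarized above might look like the following composite fragment (field names come from the summary; the exact nesting and schema are assumptions, not confirmed by the PR):

```yaml
# Illustrative composite of the kinds of fields this PR touches (schema assumed)
modalities:
  input:
    - text
    - image
  output:
    - text
features:
  - structured_output
  - prompt_caching
thinking: true   # set on several Grok-4-family configs per the summary
```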