feat: add GLM-5.1 model support to Z.ai provider#12077

Draft
roomote-v0[bot] wants to merge 1 commit into main from feature/add-glm-5.1-zai-provider

Conversation


roomote-v0 (bot) commented on Apr 8, 2026

Related GitHub Issue

Closes: #12024

Description

This PR attempts to address Issue #12024 by adding GLM-5.1 model support to the Z.ai provider. Feedback and guidance are welcome.

Changes:

  • Added glm-5.1 to internationalZAiModels with specs from the issue comment:
    • Context window: 200K (200,000 tokens)
    • Max output tokens: 128K (131,072 tokens)
    • Input price: $1.4, Cached input: $0.26, Output: $4.4
    • Thinking/reasoning support (same as GLM-5): supportsReasoningEffort: ["disable", "medium"]
    • Text only (no vision)
  • Added glm-5.1 to mainlandZAiModels with estimated mainland pricing (derived from the international/mainland ratio of existing models like GLM-5). Mainland pricing may need verification.
  • No changes needed to src/api/providers/zai.ts since the handler dynamically detects thinking models via Array.isArray(info.supportsReasoningEffort)

Note: Mainland (China) pricing for GLM-5.1 was estimated proportionally from existing GLM-5 price ratios since only international pricing was provided in the issue. This should be verified against actual mainland pricing.
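For illustration, the new entry and the handler's detection logic can be sketched as below. The field names are assumptions modeled on the description above, not the repository's actual `ModelInfo` schema:

```typescript
// Hypothetical sketch of the glm-5.1 entry added to internationalZAiModels.
// Field names are illustrative; the real schema may differ.
const glm51Entry = {
  "glm-5.1": {
    maxTokens: 131_072, // 128K max output tokens
    contextWindow: 200_000, // 200K context window
    supportsImages: false, // text only, no vision
    supportsReasoningEffort: ["disable", "medium"], // thinking support, same as GLM-5
    inputPrice: 1.4,
    cacheReadsPrice: 0.26,
    outputPrice: 4.4,
  },
};

// Paraphrase of the dynamic detection in the handler: an array-valued
// supportsReasoningEffort marks the model as a thinking model, so no
// handler changes are needed for a new entry that follows this shape.
const isThinkingModel = Array.isArray(
  glm51Entry["glm-5.1"].supportsReasoningEffort
);
```

Because detection is data-driven, adding a model is purely a matter of extending the model maps.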

Test Procedure

  • Added 2 new test cases in src/api/providers/__tests__/zai.spec.ts:
    • GLM-5.1 international model with thinking support and 128k max output
    • GLM-5.1 China model with thinking support and 128k max output
  • All 35 tests pass: cd src && npx vitest run api/providers/__tests__/zai.spec.ts
  • All lint checks pass
  • All type checks pass across all 14 packages

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes.
  • Documentation Impact: No documentation updates required -- this adds a new model entry to an existing provider.
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Screenshots / Videos

N/A -- no UI changes.

Documentation Updates

No documentation updates are required.

Additional Notes

  • The GLM-5.1 model uses the same thinking/reasoning options as GLM-5 (supportsReasoningEffort: ["disable", "medium"]), so no handler code changes were needed.
  • Mainland pricing is estimated and may need adjustment once official mainland pricing is available.


Development

Successfully merging this pull request may close these issues.

[ENHANCEMENT] Add support for GLM-5.1 via Z.ai provider