feat: add MiniMax as first-class LLM provider#144

Open
octo-patch wants to merge 1 commit into EverMind-AI:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a dedicated LLM backend adapter alongside OpenAI, Anthropic, and Gemini. MiniMax provides an OpenAI-compatible API with powerful models:

  • MiniMax-M2.7 — latest flagship model with 1M context window
  • MiniMax-M2.7-highspeed — speed-optimized variant
  • MiniMax-M2.5 — previous generation, 204K context
  • MiniMax-M2.5-highspeed — speed-optimized, 204K context

Changes

  • New adapter: src/core/component/llm/llm_adapter/minimax_adapter.py, a MiniMaxAdapter that extends LLMBackendAdapter via the OpenAI SDK, with:
    • Temperature clamping to [0.01, 1.0] range for MiniMax API compatibility
    • <think>...</think> tag stripping for reasoning model responses
    • MINIMAX_API_KEY environment variable auto-detection
  • Factory routing: Updated OpenAICompatibleClient to route provider: minimax to MiniMaxAdapter
  • Backend config: Added minimax entry in llm_backends.yaml with 4 models
  • Env template: Added MINIMAX_API_KEY section in env.template
  • Documentation: Updated README.md and README.zh.md to list MiniMax as a supported backend
  • Tests: 35 tests (32 unit + 3 integration) covering initialization, temperature clamping, think-tag stripping, streaming, error handling, and live API calls

Test plan

  • 32 unit tests pass with mocked API calls
  • 3 integration tests pass against live MiniMax API
  • Verify minimax backend is properly configured in llm_backends.yaml
  • Verify OpenAICompatibleClient routes to MiniMaxAdapter for provider: minimax
  • Set MINIMAX_API_KEY, select minimax as the default_backend in llm_backends.yaml, and run the application
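The factory-routing check in the test plan can be sketched as below. The class names mirror the PR description, but the factory internals and test names are hypothetical, not the contents of tests/test_minimax_adapter.py:

```python
# Hypothetical sketch: given provider "minimax", the factory should construct
# the MiniMax adapter; unknown providers fall back to a generic adapter.

class MiniMaxAdapter:
    """Stand-in for the adapter added in this PR."""

class GenericOpenAIAdapter:
    """Stand-in for the default OpenAI-compatible adapter."""

_ADAPTERS = {"minimax": MiniMaxAdapter}

def make_adapter(provider: str):
    """Route a provider string to its adapter class, defaulting to generic."""
    return _ADAPTERS.get(provider, GenericOpenAIAdapter)()

def test_minimax_provider_routes_to_minimax_adapter():
    assert isinstance(make_adapter("minimax"), MiniMaxAdapter)

def test_unknown_provider_falls_back_to_generic_adapter():
    assert isinstance(make_adapter("deepseek"), GenericOpenAIAdapter)
```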

7 files changed, 665 additions

Add MiniMax (https://www.minimax.io) as a dedicated LLM backend adapter
alongside OpenAI, Anthropic, and Gemini. MiniMax provides an OpenAI-compatible
API with models like MiniMax-M2.7 (1M context) and MiniMax-M2.5-highspeed
(204K context, optimized for speed).

Changes:
- Add MiniMaxAdapter in src/core/component/llm/llm_adapter/minimax_adapter.py
  with temperature clamping and <think> tag stripping for reasoning models
- Register "minimax" provider in OpenAICompatibleClient factory
- Add minimax backend config in llm_backends.yaml (M2.7, M2.7-highspeed,
  M2.5, M2.5-highspeed models)
- Add MINIMAX_API_KEY env var in env.template
- Update README.md and README.zh.md to list MiniMax as supported backend
- Add 35 tests (32 unit + 3 integration) in tests/test_minimax_adapter.py
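A minimax entry in llm_backends.yaml might look roughly like this. The key names and base URL are assumptions based on the description above; the PR's actual schema may differ:

```yaml
# Hypothetical sketch; field names are assumptions, not the PR's actual schema.
minimax:
  provider: minimax
  api_key_env: MINIMAX_API_KEY
  base_url: https://api.minimax.io/v1   # assumption: OpenAI-compatible endpoint
  models:
    - MiniMax-M2.7
    - MiniMax-M2.7-highspeed
    - MiniMax-M2.5
    - MiniMax-M2.5-highspeed
```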
