
feat: enhanced AI code generator with multi-provider support #2146

Merged
GermanBluefox merged 2 commits into ioBroker:master from
Eistee82:master
Mar 22, 2026

Conversation

@Eistee82
Contributor

Summary

  • Per-provider API config: Separate API key fields and test buttons for OpenAI, Anthropic, Gemini, DeepSeek, and custom/local providers (Ollama, LM Studio)
  • Two-step code generation: Plan-then-code approach — the model first creates an implementation plan identifying relevant devices and logic, then generates code based on the plan
  • Optimized prompts: Concrete code examples, WRONG/CORRECT pairs, and compact function signatures for reliable results even with small local models (14B+)
  • Improved UX: Status display ("Planning..." / "Generating code..."), collapsible plan view, flexible result area height, TODO_DEVICE_ID placeholders for missing devices
  • Compact API reference: docs-compact.md (12.5KB) replaces the full docs.md (69KB) while keeping all function signatures
  • Local model optimizations: Disable reasoning/thinking via reasoning_effort: 'none' for Ollama models, reducing response time significantly
  • Build fixes: Node 25 compatibility (rmSync instead of deprecated rmdirSync), admin/img/ preserved during clean builds
  • Documentation: Updated EN/DE docs with local model requirements (minimum 14B, 12GB VRAM), per-provider config table, tested model recommendations
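The per-provider configuration described above could be modeled roughly as follows. This is a minimal sketch, assuming a plain object keyed by provider name; the field names and base URLs are illustrative, not the adapter's actual config schema:

```javascript
// Hypothetical per-provider config shape -- keys and defaults are
// illustrative, not the adapter's real native config fields.
const providerConfig = {
  openai:    { apiKey: '', baseUrl: 'https://api.openai.com/v1' },
  anthropic: { apiKey: '', baseUrl: 'https://api.anthropic.com/v1' },
  gemini:    { apiKey: '', baseUrl: 'https://generativelanguage.googleapis.com' },
  deepseek:  { apiKey: '', baseUrl: 'https://api.deepseek.com' },
  // Custom/local providers (Ollama, LM Studio): the API key is optional,
  // only the base URL is required.
  custom:    { apiKey: null, baseUrl: 'http://localhost:11434/v1' },
};

// A provider is usable if it has an API key, or is the custom/local
// provider (which may run without authentication).
function isConfigured(name) {
  const p = providerConfig[name];
  return Boolean(p && (p.apiKey || name === 'custom'));
}
```

This mirrors the UI behavior above: each provider gets its own key field and test button, while the custom entry accepts a key-less local endpoint.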

Test plan

Tested with:

  • Gemini (gemini-3.1-flash-lite-preview): Config test button, model loading, code generation
  • Ollama (qwen2.5-coder:14b on RTX 3060): Config test button, model loading, code generation with various prompts (device control, scheduling, Telegram notifications, HTTP API calls, multi-device logic)
  • DeepSeek: Config test button, model loading
  • OpenAI: Not tested (no API key available)
  • Anthropic: Not tested (no API key available)

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@Eistee82 force-pushed the master branch 2 times, most recently from b02eb48 to 1544d02 on March 20, 2026, 22:28
…mized prompts

Multi-provider API support:
- Per-provider test buttons in config (OpenAI, Anthropic, Gemini, DeepSeek, Custom)
- Optional API key for custom base URL (Ollama, LM Studio)
- Provider icons in config and code generator
- Human-readable HTTP error messages with API response details
- Disable reasoning/thinking for local models (reasoning_effort: none)
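The reasoning/thinking toggle could be sketched like this. Only the `reasoning_effort: 'none'` field comes from the change above; the helper function and the provider check are hypothetical illustrations:

```javascript
// Sketch: build an OpenAI-compatible chat completion request body.
// Only `reasoning_effort: 'none'` reflects the change described above;
// the helper itself and the 'custom' provider check are illustrative.
function buildChatBody(provider, model, messages) {
  const body = { model, messages };
  if (provider === 'custom') {
    // Local models (Ollama, LM Studio): suppress reasoning/"thinking"
    // output to cut response time significantly on consumer hardware.
    body.reasoning_effort = 'none';
  }
  return body;
}
```

Cloud providers keep their default reasoning behavior; the flag is only injected for the custom/local endpoint.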

Two-step plan-then-code generation:
- Step 1: Model analyzes task and creates implementation plan with device IDs
- Step 2: Model generates code based on plan + API examples
- Collapsible plan view in UI for debugging
- Status display ("Planning..." / "Generating code...")
- 600s timeout for local models
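The two-step flow above can be sketched as a pair of sequential model calls. `callModel` stands in for the provider request (which, per the commit, would run with a 600 s timeout for local models); the prompt strings are abbreviated placeholders, not the adapter's real prompts:

```javascript
// Sketch of the plan-then-code flow. `callModel` is an injected async
// function wrapping the provider request; prompts are abbreviated
// placeholders, not the adapter's actual prompt text.
async function generateScript(callModel, task, deviceList) {
  // Step 1: the model analyzes the task and produces an implementation
  // plan naming the relevant device IDs and logic.
  const plan = await callModel(
    `Devices:\n${deviceList}\n\nCreate an implementation plan for: ${task}`
  );
  // Step 2: the model generates code from the plan plus API examples.
  const code = await callModel(
    `Plan:\n${plan}\n\nGenerate the JavaScript for this plan.`
  );
  // Both parts are returned so the UI can show the plan in a
  // collapsible view for debugging.
  return { plan, code };
}
```

Separating the calls keeps each prompt small, which matters for the 14B-class local models targeted above.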

Optimized prompts for small local models (tested with qwen2.5-coder:14b):
- Concrete code examples showing correct on(), setState(), getState() syntax
- WRONG/CORRECT pairs for common mistakes (adapter.set, console.log, etc.)
- Compact function signature list covering all 80+ API functions
- FORBIDDEN list preventing wrong patterns
- Task placed after device list (recency bias)
- TODO_DEVICE_ID placeholder for missing devices
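The prompt layout above can be sketched as follows. All strings are abbreviated placeholders (the real prompt carries the full example set and the 80+ function signatures); the key points are the WRONG/CORRECT pair, the FORBIDDEN list, the TODO_DEVICE_ID instruction, and the task placed last:

```javascript
// Sketch of the prompt layout for small local models: examples and the
// FORBIDDEN list first, the device list next, and the task LAST so it
// sits in the model's most recent context (recency bias). All strings
// are abbreviated placeholders, not the adapter's real prompt text.
function buildPrompt(deviceList, task) {
  return [
    'Examples:',
    "WRONG:   adapter.set('id', true);",
    "CORRECT: setState('id', true);",
    'FORBIDDEN: adapter.*, console.log, require of unavailable modules',
    'If a needed device is missing, use the placeholder TODO_DEVICE_ID.',
    'Devices:',
    deviceList,
    `Task: ${task}`,
  ].join('\n');
}
```

Putting the task after the device list exploits the recency bias mentioned above: small models weight the end of the prompt more heavily.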

Build fixes:
- Node 25 compatibility: rmSync instead of rmdirSync({recursive:true})
- admin/img/ preserved during clean builds
- Flexible result area height (flex:1 instead of fixed calc)

New documentation files:
- docs-compact.md: full API in 12.5KB (used for code generation)
- docs-essential.md: core functions with examples in 2.6KB
- Original docs.md preserved for reference

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@GermanBluefox merged commit a01e2c6 into ioBroker:master on Mar 22, 2026
18 checks passed