feat: agent-workflows E2E implementation #652
Merged
AlexMikhalev merged 4 commits into main, Mar 10, 2026
Conversation
Three bugs fixed to get all 5 agent workflow demos working end-to-end:

1. `#[serde(flatten)]` nesting bug: the `Role.extra` field with `flatten` causes the JSON "extra" key to nest as `extra["extra"]["key"]`. Added `get_extra_str`/`get_role_extra_str` helpers that check both the flat and nested paths in agent.rs and multi_agent_handlers.rs.
2. rust-genai hardcoded Ollama endpoint: v0.4.4 hardcodes localhost:11434. Rewrote `from_config_with_url` to use `ServiceTargetResolver` to override the endpoint at request time.
3. Model-name adapter routing: rust-genai selects the adapter from the model name. Used the `openai::` namespace prefix (e.g. `openai::cerebras:llama3.1-8b`) to force the OpenAI adapter for proxy-compatible endpoints.

Config switched from Ollama to terraphim-llm-proxy (bigbox:3456) with Cerebras llama3.1-8b. The 6-step prompt chain now completes in ~10s vs minutes.

Co-Authored-By: Terraphim AI <noreply@anthropic.com>
- Use HTTP URLs (localhost:3000) instead of file:// to avoid CORS blocking API calls
- Add correct button selectors per workflow (was using wrong IDs for routing, evaluator)
- Add per-workflow setup functions to fill required form inputs before triggering API calls
- Handle alert() dialogs in headless mode that were silently blocking execution
- Evaluator-Optimizer: generate mock content first, then trigger the real /workflows/optimize API
- Skip the fragile comprehensive test-suite page; test individual workflows directly
- Add .gitignore for test artifacts (screenshots, reports, lockfile)

Results: 6 passed, 0 failed, 1 skipped in 57s via Cerebras through terraphim-llm-proxy

Co-Authored-By: Terraphim AI <noreply@anthropic.com>
… output: generatePerspectiveAnalysis() and generateAggregatedInsights() were returning hardcoded mock data, ignoring actual API responses. Now parses the LLM markdown from parallel_tasks[].result into structured UI components (title, keyPoints, insights, recommendations, confidence).

Co-Authored-By: Terraphim AI <noreply@anthropic.com>
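The markdown-to-structure step above can be sketched like this. The real parsing lives in the frontend JavaScript, so this Rust version is illustrative only; the struct and field names loosely mirror the UI components (title, keyPoints), and the heading/bullet conventions assumed here are a simplification of whatever the LLM actually emits.

```rust
// Hypothetical structured result, mirroring the UI fields named in the PR.
#[derive(Debug)]
struct PerspectiveAnalysis {
    title: String,
    key_points: Vec<String>,
}

// Naive markdown pass: first "# " heading becomes the title,
// "- " lines become key points. Real LLM output would need
// more robust handling (nested lists, bold markers, etc.).
fn parse_markdown(md: &str) -> PerspectiveAnalysis {
    let mut title = String::new();
    let mut key_points = Vec::new();
    for line in md.lines() {
        let line = line.trim();
        if let Some(h) = line.strip_prefix("# ") {
            if title.is_empty() {
                title = h.to_string();
            }
        } else if let Some(item) = line.strip_prefix("- ") {
            key_points.push(item.to_string());
        }
    }
    PerspectiveAnalysis { title, key_points }
}

fn main() {
    let md = "# Market View\n- Demand is rising\n- Costs are stable\n";
    let parsed = parse_markdown(md);
    assert_eq!(parsed.title, "Market View");
    assert_eq!(parsed.key_points.len(), 2);
    println!("{} ({} points)", parsed.title, parsed.key_points.len());
}
```

The key design point is the same as in the fix: derive the UI structure from `parallel_tasks[].result` at render time rather than returning canned data.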
- Keep clippy-compliant doc comments in multi_agent_handlers.rs
- Keep helper functions (get_role_extra_str, get_role_extra_f64)
- Preserve E2E functionality while maintaining code quality
- All checks pass: cargo check, cargo clippy, cargo test
Summary
This PR implements end-to-end agent workflows using terraphim-llm-proxy with Cerebras LLM integration.
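The endpoint and model values named in this PR might be wired up roughly as below. This is a hedged sketch only: the file format and key names are hypothetical (the actual Terraphim config schema may differ), while the host:port, model string, and `openai::` prefix come from the commits above.

```toml
# Hypothetical shape of the config switch described in this PR.
# Key names are illustrative; values are from the commit messages.
[llm]
base_url = "http://bigbox:3456"          # terraphim-llm-proxy, replacing local Ollama
model = "openai::cerebras:llama3.1-8b"   # openai:: prefix forces the OpenAI adapter in rust-genai
```

The `openai::` namespace prefix matters because rust-genai picks its adapter from the model name, so without it the proxy's OpenAI-compatible endpoint would be hit with the wrong request shape.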
Changes
Testing
Checklist