Summary
Questions evaluation is falling back to baseline because there are no usable graders in the questions flow.
Evidence
GET /api/questions/graders returns [] (200)
GET /api/questions/assemblies returns only asm-essay-* entries with agents: []
Repro call to POST /api/questions/grader/interaction falls back to baseline with the error:
Assembly 'asm-essay-fund1-001' has no graders
Runtime env on tutor-questions-108dev:
PROJECT_ENDPOINT = empty
MODEL_DEPLOYMENT_NAME = empty
Foundry specialist check:
Foundry project exists (.../tutor-108dev-ai-project)
Model deployments exist (gpt-5, gpt-5-nano)
The project currently has zero agents
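The evidence above can be re-gathered with a small probe script. This is a sketch under assumptions: BASE_URL and the function names are illustrative, and the expected responses in the comments are the ones recorded in this report, not guaranteed outputs.

```shell
#!/usr/bin/env bash
# Probe sketch for the evidence above. Set BASE_URL to the
# tutor-questions-108dev host before running; the endpoint checks
# are skipped when it is unset.

check_endpoint() {
  # GET the URL; -s silences progress, -w appends the HTTP status.
  curl -s -w ' (HTTP %{http_code})\n' "$1"
}

check_env() {
  # ${!1:-} is bash indirect expansion: the value of the var named by $1.
  # An empty value after '=' reproduces the empty-env finding above.
  printf '%s=%s\n' "$1" "${!1:-}"
}

if [ -n "${BASE_URL:-}" ]; then
  check_endpoint "$BASE_URL/api/questions/graders"     # report: [] (200)
  check_endpoint "$BASE_URL/api/questions/assemblies"  # report: only asm-essay-*, agents: []
fi
check_env PROJECT_ENDPOINT
check_env MODEL_DEPLOYMENT_NAME
```

Running this on the app host before and after remediation gives a quick pass/fail signal for the env-var and grader findings.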
Impact
Users cannot get real multi-grader evaluation on Questions; only fallback baseline feedback appears.
Proposed remediation
Restore/set AI env vars for questions (PROJECT_ENDPOINT, MODEL_DEPLOYMENT_NAME).
Re-seed question assemblies/graders so asm-quest-* exists with non-empty agents.
Add deploy guardrails to fail when AI env vars are empty.
Keep issue #145 ([Data Isolation][108dev] Questions assemblies endpoint includes essay assemblies) open to track cross-domain assembly contamination.
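The deploy-guardrail step could look like the sketch below. The variable names come from this report; everything else (function name, placeholder values) is illustrative, not the actual deploy pipeline.

```shell
#!/usr/bin/env bash
# Guardrail sketch: fail the deploy when required AI env vars are
# unset or empty, instead of silently shipping a baseline-only build.

require_env() {
  # ${!1:-} is bash indirect expansion: the value of the var named by $1.
  if [ -z "${!1:-}" ]; then
    echo "deploy blocked: $1 is unset or empty" >&2
    return 1
  fi
  echo "$1 ok"
}

# Demo with placeholder values; a real deploy would inherit these
# from the environment/config rather than setting them here.
PROJECT_ENDPOINT="https://example.invalid/projects/placeholder"
MODEL_DEPLOYMENT_NAME="gpt-5"
require_env PROJECT_ENDPOINT
require_env MODEL_DEPLOYMENT_NAME
```

Wiring `require_env` into the deploy script (with `set -e`, or by checking its exit code) turns the empty-env condition found on tutor-questions-108dev into a hard pipeline failure.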