Educational Large Language Models (EduLLMs) must balance strict safety requirements with meaningful pedagogical interaction. However, adapting general-purpose LLMs to educational settings often incurs an alignment tax: safety mechanisms implemented as external guardrails produce excessive hard refusals that undermine instructional value. We propose RSG, a three-stage intrinsic safety alignment framework that integrates EduSRAG, supervised fine-tuning (SFT), and Group Relative Policy Optimization (GRPO). Rather than rejecting sensitive queries outright, the framework enables models to transform potentially unsafe requests into constructive pedagogical guidance.