🤖 AI Summary
Current LLM-based educational applications suffer from two key limitations: supervised fine-tuning lacks dynamic adaptability to evolving student needs, while reinforcement learning (RL) approaches rely solely on answer correctness—failing to distinguish genuine conceptual understanding from rote memorization and unable to track real-time shifts in student cognitive states. To address these issues, we propose a multi-turn interactive RL framework integrating cognitive state modeling with dynamic Zone of Proximal Development (ZPD) inference. Our dual-cooperative reward mechanism comprises: (1) a *Progress Reward*, quantifying cognitive transitions from confusion to comprehension; and (2) a *Scaffold Reward*, precisely identifying optimal ZPD-aligned support. This enables real-time pedagogical strategy adaptation and continual policy optimization. Evaluated on BigMath and MathTutorBench, our method outperforms 11 baselines and matches the performance of state-of-the-art closed-source models. Code and datasets are publicly released.
📝 Abstract
Large language models (LLMs) are shifting from answer providers to intelligent tutors in educational settings, yet current supervised fine-tuning methods learn only surface teaching patterns without dynamic adaptation capabilities. Recent reinforcement learning approaches address this limitation but face two critical challenges. First, they evaluate teaching effectiveness solely based on whether students produce correct outputs, and so cannot distinguish whether students genuinely understand or merely echo teacher-provided answers during interaction. Second, they cannot perceive students' evolving cognitive states in real time through interactive dialogue, and thus fail to adapt teaching strategies to match students' cognitive levels dynamically. We propose the Unidirectional Cognitive Optimization (UCO) method to address these challenges. UCO uses a multi-turn interactive reinforcement learning paradigm whose innovation lies in two synergistic reward functions: the Progress Reward captures students' cognitive advancement, evaluating whether students truly transition from confusion to comprehension, while the Scaffold Reward dynamically identifies each student's Zone of Proximal Development (ZPD), encouraging teachers to maintain productive teaching within this zone. We evaluate UCO against 11 baseline models on the BigMath and MathTutorBench benchmarks. Experimental results demonstrate that our UCO model outperforms all models of equivalent scale and achieves performance comparable to advanced closed-source models. The code and data are available at https://github.com/Mind-Lab-ECNU/UCO.
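To make the dual-reward idea concrete, here is a minimal illustrative sketch of how a Progress Reward and a Scaffold Reward might be combined into a per-turn reward. All names, value ranges, and formulas here (`TurnState`, `zpd_width`, the mixing weight `alpha`) are assumptions for illustration only; the paper's actual reward definitions may differ.

```python
from dataclasses import dataclass

@dataclass
class TurnState:
    """Hypothetical per-turn snapshot of the tutoring dialogue."""
    understanding: float  # estimated student comprehension in [0, 1]
    difficulty: float     # difficulty of the teacher's move in [0, 1]

def progress_reward(prev: TurnState, curr: TurnState) -> float:
    # Reward genuine cognitive advancement: positive only when the
    # student's estimated understanding rose relative to last turn.
    return max(0.0, curr.understanding - prev.understanding)

def scaffold_reward(curr: TurnState, zpd_width: float = 0.2) -> float:
    # Reward support pitched inside the student's ZPD: slightly above
    # their current level, but within an assumed reachable band.
    gap = curr.difficulty - curr.understanding
    return 1.0 if 0.0 < gap <= zpd_width else 0.0

def turn_reward(prev: TurnState, curr: TurnState, alpha: float = 0.5) -> float:
    # Mix the two signals; alpha is an assumed weighting, not from the paper.
    return alpha * progress_reward(prev, curr) + (1 - alpha) * scaffold_reward(curr)

# Example: understanding rises 0.3 -> 0.5 while the teacher's prompt
# sits just above the student's level, so both rewards fire.
prev = TurnState(understanding=0.3, difficulty=0.4)
curr = TurnState(understanding=0.5, difficulty=0.6)
print(turn_reward(prev, curr))  # 0.5 * 0.2 + 0.5 * 1.0 = 0.6
```

In a full multi-turn RL setup, a reward of this shape would be accumulated across dialogue turns and used to update the teacher policy, so that strategies keeping the student progressing inside the ZPD are reinforced.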