UCO: A Multi-Turn Interactive Reinforcement Learning Method for Adaptive Teaching with Large Language Models

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM-based educational applications suffer from two key limitations: supervised fine-tuning lacks dynamic adaptability to evolving student needs, while reinforcement learning (RL) approaches rely solely on answer correctness—failing to distinguish genuine conceptual understanding from rote memorization and unable to track real-time shifts in student cognitive states. To address these issues, we propose a multi-turn interactive RL framework integrating cognitive state modeling with dynamic Zone of Proximal Development (ZPD) inference. Our dual-cooperative reward mechanism comprises: (1) a *Progress Reward*, quantifying cognitive transitions from confusion to comprehension; and (2) a *Scaffold Reward*, precisely identifying optimal ZPD-aligned support. This enables real-time pedagogical strategy adaptation and continual policy optimization. Evaluated on BigMath and MathTutorBench, our method outperforms 11 baselines and matches the performance of state-of-the-art closed-source models. Code and datasets are publicly released.
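The dual-cooperative reward described above can be sketched in a few lines. This is a hedged illustration only: the confusion-to-comprehension scale, the ZPD band `(0.4, 0.7)`, the `support_level` encoding, and the weighting `alpha` are all assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of a dual-cooperative reward (all encodings assumed,
# not the UCO release): cognitive state on a confusion(0.0)->comprehension(1.0)
# scale, scaffold support encoded as a scalar difficulty level.

def progress_reward(prev_state, curr_state):
    """Reward cognitive advancement, i.e., movement toward comprehension."""
    return curr_state - prev_state

def scaffold_reward(support_level, zpd_low, zpd_high):
    """Reward support inside the student's inferred ZPD band;
    penalize by distance to the band when support falls outside it."""
    if zpd_low <= support_level <= zpd_high:
        return 1.0
    return -min(abs(support_level - zpd_low), abs(support_level - zpd_high))

def dual_reward(prev_state, curr_state, support_level, zpd=(0.4, 0.7), alpha=0.5):
    """Convex combination of the two cooperative rewards."""
    low, high = zpd
    return (alpha * progress_reward(prev_state, curr_state)
            + (1 - alpha) * scaffold_reward(support_level, low, high))
```

A teacher turn that moves the student from state 0.2 to 0.6 while pitching support at 0.5 (inside the assumed ZPD band) scores well on both components; support far outside the band is penalized even if the answer ends up correct, which is the point of separating the two signals.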

📝 Abstract
Large language models (LLMs) are shifting from answer providers to intelligent tutors in educational settings, yet current supervised fine-tuning methods only learn surface teaching patterns without dynamic adaptation capabilities. Recent reinforcement learning approaches address this limitation but face two critical challenges. First, they evaluate teaching effectiveness solely based on whether students produce correct outputs, unable to distinguish whether students genuinely understand or echo teacher-provided answers during interaction. Second, they cannot perceive students' evolving cognitive states in real time through interactive dialogue, thus failing to adapt teaching strategies to match students' cognitive levels dynamically. We propose the Unidirectional Cognitive Optimization (UCO) method to address these challenges. UCO uses a multi-turn interactive reinforcement learning paradigm where the innovation lies in two synergistic reward functions: the Progress Reward captures students' cognitive advancement, evaluating whether students truly transition from confusion to comprehension, while the Scaffold Reward dynamically identifies each student's Zone of Proximal Development (ZPD), encouraging teachers to maintain productive teaching within this zone. We evaluate UCO by comparing it against 11 baseline models on BigMath and MathTutorBench benchmarks. Experimental results demonstrate that our UCO model outperforms all models of equivalent scale and achieves performance comparable to advanced closed-source models. The code and data are available at https://github.com/Mind-Lab-ECNU/UCO.
Problem

Research questions and friction points this paper is trying to address.

Evaluating teaching effectiveness beyond student answer correctness
Perceiving students' evolving cognitive states during interactive dialogue
Adapting teaching strategies to match students' cognitive levels dynamically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-turn interactive reinforcement learning for adaptive teaching
Progress Reward captures student cognitive advancement
Scaffold Reward identifies Zone of Proximal Development
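The multi-turn interactive paradigm listed above can be pictured as a dialogue rollout in which each turn is rewarded for cognitive advancement rather than raw answer correctness. The sketch below is a toy illustration under stated assumptions: `simple_student`, `simple_teacher`, the comprehension scale, and the mastery threshold are all hypothetical stand-ins, not the authors' released code.

```python
# Toy multi-turn tutoring rollout (all names and dynamics are illustrative
# assumptions, not the UCO implementation).

def simple_student(utterance, state):
    """Stub student simulator: a hint nudges comprehension upward."""
    gain = 0.3 if "hint" in utterance else 0.0
    new_state = min(1.0, state + gain)
    return new_state, f"student reply (comprehension {new_state:.1f})"

def simple_teacher(dialogue, state):
    """Stub policy: always offer a decomposition hint."""
    return "hint: decompose the problem"

def run_episode(teacher_policy, student_step, init_state=0.1, max_turns=5):
    """Roll out one tutoring dialogue; per-turn reward is the change in
    the student's cognitive state (progress, not correctness)."""
    dialogue, rewards, state = [], [], init_state
    for _ in range(max_turns):
        utterance = teacher_policy(dialogue, state)
        new_state, reply = student_step(utterance, state)
        rewards.append(new_state - state)   # cognitive advancement this turn
        dialogue += [utterance, reply]
        state = new_state
        if state >= 0.95:                   # treat as comprehension reached
            break
    return rewards
```

In an actual RL setup, the per-turn rewards collected this way would feed a policy-gradient update of the teacher model; the stub student here stands in for whatever learner simulator or real interaction log supplies the cognitive-state signal.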
Shouang Wei
East China Normal University, School of Computer Science and Technology
Min Zhang
East China Normal University, Shanghai Institute of AI for Education
Xin Lin
East China Normal University, Shanghai Institute of AI for Education
Bo Jiang
East China Normal University, Shanghai Institute of AI for Education
Kun Kuang
Zhejiang University
Causal Inference, Data Mining, Machine Learning
Zhongxiang Dai
Assistant Professor, The Chinese University of Hong Kong, Shenzhen
Machine Learning, Data-Centric AI, Large Language Models, Multi-Armed Bandits, Bayesian Optimization