Evolutionary Reinforcement Learning based AI tutor for Socratic Interdisciplinary Instruction

📅 2025-12-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses three core challenges in building Socratic interdisciplinary AI tutors: implicit student cognitive states, sparse and delayed educational rewards, and policy homogenization (a lack of strategic diversity). To this end, we propose a structured POMDP-based pedagogical modeling framework. Methodologically, we introduce an Evolutionary Reinforcement Learning (ERL) paradigm integrated with PPO, develop a knowledge-graph-driven dynamic student simulator, design a hierarchical dense reward mechanism, and propose LoRA-Division, a collaborative optimization strategy that enhances policy diversity. Evaluated in STEM interdisciplinary education scenarios, our system achieves significant improvements: +32.7% in long-term reasoning coherence over SFT/RL baselines and a 3.8× increase in policy diversity. These advances establish a new paradigm for explainable, adaptive, and robust educational AI systems.
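The POMDP framing of the latent student state can be illustrated with a minimal belief-update sketch. This is not the paper's model: the cognitive states, observation model, and probabilities below are invented for illustration; the paper's actual simulator is grounded in a STEM knowledge graph.

```python
# Toy belief update over a latent student cognitive state (illustrative only).
# State names and probabilities are invented, not taken from the paper.
STATES = ["confused", "partial", "mastered"]

# Assumed observation model P(correct answer | state).
P_CORRECT = {"confused": 0.1, "partial": 0.5, "mastered": 0.9}

def update_belief(belief, answered_correctly):
    """Bayes-update the tutor's belief after observing one student answer."""
    likelihood = {
        s: (P_CORRECT[s] if answered_correctly else 1 - P_CORRECT[s])
        for s in STATES
    }
    unnorm = {s: belief[s] * likelihood[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

belief = {s: 1 / 3 for s in STATES}  # uniform prior over cognitive states
belief = update_belief(belief, answered_correctly=True)
print(max(belief, key=belief.get))  # prints "mastered"
```

The tutor never observes the state directly; it maintains and updates a belief distribution, which is exactly what makes the instructional problem partially observable.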


πŸ“ Abstract
Cultivating higher-order cognitive abilities, such as knowledge integration, critical thinking, and creativity, in modern STEM education necessitates a pedagogical shift from passive knowledge transmission to active Socratic construction. Although Large Language Models (LLMs) hold promise for STEM interdisciplinary education, current methodologies employing Prompt Engineering (PE), Supervised Fine-Tuning (SFT), or standard Reinforcement Learning (RL) often fall short of supporting this paradigm. Existing methods are hindered by three fundamental challenges: the inability to dynamically model latent student cognitive states; severe reward sparsity and delay inherent in long-term educational goals; and a tendency toward policy collapse, lacking strategic diversity, due to reliance on behavioral cloning. Recognizing the unobservability and dynamic complexity of these interactions, we formalize the Socratic Interdisciplinary Instructional Problem (SIIP) as a structured Partially Observable Markov Decision Process (POMDP), demanding simultaneous global exploration and fine-grained policy refinement. To this end, we propose ERL4SIIP, a novel Evolutionary Reinforcement Learning (ERL) framework specifically tailored for this domain. ERL4SIIP integrates: (1) a dynamic student simulator grounded in a STEM knowledge graph for latent state modeling; (2) a Hierarchical Reward Mechanism that decomposes long-horizon goals into dense signals; and (3) a LoRA-Division based optimization strategy coupling evolutionary algorithms for population-level global search with PPO for local gradient ascent.
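The coupling of population-level evolutionary search with local gradient refinement described in the abstract can be sketched in miniature. Everything here is illustrative: the "student simulator" is a toy fitness function, and PPO's clipped policy-gradient update is replaced by a simple analytic local step, since the paper's actual objective and LoRA-Division mechanics are not reproduced here.

```python
import random

random.seed(0)

# Toy stand-in for the student simulator: reward is higher the closer the
# policy vector is to a hidden "ideal questioning strategy" (invented).
IDEAL = [0.8, -0.3, 0.5]

def fitness(policy):
    # Negative squared distance to the ideal strategy (a dense reward proxy).
    return -sum((p - t) ** 2 for p, t in zip(policy, IDEAL))

def mutate(policy, sigma=0.1):
    # Evolutionary variation: Gaussian perturbation of every parameter.
    return [p + random.gauss(0, sigma) for p in policy]

def local_refine(policy, lr=0.05):
    # Stand-in for PPO's local gradient ascent: one analytic ascent step
    # on the toy objective (a real system would use sampled rollouts).
    grad = [2 * (t - p) for p, t in zip(policy, IDEAL)]
    return [p + lr * g for p, g in zip(policy, grad)]

def erl(pop_size=8, generations=30):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # RL phase: locally refine every individual.
        population = [local_refine(p) for p in population]
        # Evolutionary phase: keep the top half, refill with mutated elites.
        population.sort(key=fitness, reverse=True)
        elites = population[: pop_size // 2]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    return max(population, key=fitness)

best = erl()
print(round(fitness(best), 4))
```

The division of labor mirrors the abstract's description: mutation and selection perform global exploration across the population, while the per-individual refinement step performs fine-grained local ascent.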
Problem

Research questions and friction points this paper is trying to address.

Dynamically model latent student cognitive states in Socratic instruction
Overcome severe reward sparsity and delay in long-term educational goals
Prevent policy collapse and enhance strategic diversity in AI tutoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary Reinforcement Learning framework for AI tutor
Dynamic student simulator using STEM knowledge graph
Hierarchical Reward Mechanism with LoRA-Division optimization
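The hierarchical reward idea, decomposing a sparse long-horizon goal into dense per-turn signals, can be sketched as follows. The turn-level criterion, field names, and weights are all hypothetical; the paper's mechanism is richer than this additive toy.

```python
# Hypothetical reward-shaping sketch: a dense per-turn signal plus a sparse
# terminal signal. Weights and the "new_concept" criterion are invented.
def hierarchical_reward(turns, mastery_gain, w_turn=0.1, w_final=1.0):
    # Dense per-turn signal: credit each tutor turn that touched a
    # knowledge-graph concept not yet covered in the dialogue.
    dense = sum(w_turn * (1 if t["new_concept"] else 0) for t in turns)
    # Sparse long-horizon signal: terminal improvement in student mastery.
    return dense + w_final * mastery_gain

dialogue = [
    {"new_concept": True},
    {"new_concept": False},
    {"new_concept": True},
]
print(hierarchical_reward(dialogue, mastery_gain=0.5))  # dense 0.2 + final 0.5
```

Without the dense component, the policy would only receive the terminal mastery signal after an entire dialogue, which is exactly the sparsity-and-delay problem the Problem section lists.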
Mei Jiang
East China Normal University
Haihai Shen
East China Normal University
Zhuo Luo
East China Normal University
Bingdong Li
East China Normal University
evolutionary computation · machine learning · black-box optimization
Wenjing Hong
Shenzhen University
Ke Tang
Southern University of Science and Technology
Aimin Zhou
East China Normal University