Simulated Human Learning in a Dynamic, Partially-Observed, Time-Series Environment

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of real-time student modeling in intelligent tutoring systems under dynamic, partially observable time-series settings. Methodologically, it constructs a pedagogical process simulation environment and formalizes latent student state inference as a Partially Observable Markov Decision Process (POMDP). It introduces a tunable active probing intervention mechanism that achieves Pareto-optimal trade-offs between information gain and instructional disruption, and employs combined reinforcement learning and heuristic rule-based policy learning. Experiments demonstrate: (i) significantly improved accuracy in latent state estimation; (ii) strong policy generalizability across multi-stage assessments; and (iii) robust adaptation to heterogeneous student populations—though performance degrades slightly in extremely high-difficulty cohorts. The core contribution is the first integration of controllable active probing into the POMDP framework for education, enabling efficient, low-disturbance, personalized instructional decision-making in partially observable learning environments.
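The summary above describes inferring a latent student state under partial observability, with probing interventions that yield more informative observations. As a minimal sketch (not the paper's actual model; the states, emission probabilities, and action names below are all assumptions for illustration), a discrete Bayesian belief update over student mastery might look like:

```python
# Hypothetical sketch of a POMDP-style belief update: a latent mastery
# state is never observed directly; we only see whether the student
# answers correctly. A "probe" action gives a less noisy (more extreme)
# emission distribution than a normal exercise.

STATES = ["low", "med", "high"]

# P(correct | mastery, action). These numbers are illustrative only;
# the probe's probabilities are more informative about the true state.
P_CORRECT = {
    "exercise": {"low": 0.35, "med": 0.55, "high": 0.75},
    "probe":    {"low": 0.10, "med": 0.50, "high": 0.90},
}

def update_belief(belief, correct, action="exercise"):
    """One Bayes step: P(s | obs) is proportional to P(obs | s, action) * P(s)."""
    likelihood = {
        s: (P_CORRECT[action][s] if correct else 1.0 - P_CORRECT[action][s])
        for s in STATES
    }
    unnorm = {s: likelihood[s] * belief[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior
belief = update_belief(belief, correct=True, action="probe")
```

Because the probe's emissions are sharper, a single correct probe shifts the belief toward high mastery much faster than an ordinary exercise would.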

📝 Abstract
While intelligent tutoring systems (ITSs) can use information from past students to personalize instruction, each new student is unique. Moreover, the education problem is inherently difficult because the learning process is only partially observable. We therefore develop a dynamic, time-series environment to simulate a classroom setting, with student-teacher interventions including tutoring sessions, lectures, and exams. In particular, we design the simulated environment to allow for varying levels of probing interventions that can gather more information. We then develop reinforcement learning ITSs that combine learning the individual state of each student with population-level information gathered through probing interventions. These interventions can reduce the difficulty of student estimation, but they also introduce a cost-benefit decision: probing enough to get accurate estimates without probing so often that it disrupts the student. We compare standard RL algorithms with several greedy rules-based heuristic approaches and find that they arrive at different solutions with similar results. We also highlight the difficulty of the problem as the level of hidden information increases, and the boost gained by allowing probing interventions. We show the flexibility of both heuristic and RL policies with regard to changing student population distributions, finding that both are flexible, but RL policies struggle to help harder classes. Finally, we test different course structures with non-probing policies and find that our policies boost performance more in quiz and midterm structures than in a finals-only structure, highlighting the benefit of having additional information.
Problem

Research questions and friction points this paper is trying to address.

Developing reinforcement learning tutors for partially observable student learning states
Balancing probing interventions between information gathering and student disruption
Testing policies across different course structures and student population distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic time-series environment simulates classroom interventions
Reinforcement learning combines individual and population student data
Probing interventions balance information gathering with student disruption
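The cost-benefit decision the bullets describe, probing only when the information is worth the disruption, can be sketched as a greedy rule. This is an assumed heuristic for illustration, not the paper's exact policy; the entropy threshold and the `info_value` and `disruption_cost` parameters are hypothetical:

```python
import math

# Hypothetical greedy probing rule: probe only when uncertainty about
# the latent student state (belief entropy, in bits) outweighs the
# disruption cost of interrupting the student.

def entropy(belief):
    """Shannon entropy of a belief distribution, in bits."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def should_probe(belief, disruption_cost, info_value=1.0):
    """Probe when a proxy for expected information gain (current
    uncertainty, scaled by info_value) exceeds the disruption cost."""
    return info_value * entropy(belief) > disruption_cost

uncertain = {"low": 1/3, "med": 1/3, "high": 1/3}   # maximal uncertainty
confident = {"low": 0.02, "med": 0.08, "high": 0.90}

should_probe(uncertain, disruption_cost=1.0)  # True: entropy ~ 1.58 bits
should_probe(confident, disruption_cost=1.0)  # False: entropy ~ 0.54 bits
```

The design choice here is that a confident belief makes probing pointless, so the rule naturally stops probing once the student's state is pinned down, which is the low-disturbance behavior the summary emphasizes.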
Jeffrey Jiang
UCLA
machine learning, causal inference
Kevin Hong
Department of Electrical and Computer Engineering, University of California, Los Angeles
Emily Kuczynski
Department of Electrical and Computer Engineering, University of California, Los Angeles
Gregory Pottie
Department of Electrical and Computer Engineering, University of California, Los Angeles