MC-CPO: Mastery-Conditioned Constrained Policy Optimization

📅 2026-04-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the reward hacking problem in adaptive tutoring systems caused by optimizing short-term engagement signals, proposing a reinforcement learning approach that optimizes long-term learning outcomes while respecting instructional safety constraints. By modeling teaching safety as a mastery-state-dependent set of admissible actions, the method introduces mastery-conditioned feasibility into constrained Markov decision processes for the first time. A two-timescale primal-dual algorithm is developed, integrating structured action masking with constrained policy optimization to dynamically restrict policy outputs according to the learner's knowledge state. Theoretically, optimization within the mastery-conditioned feasible set is shown to strictly dominate post-hoc filtering under identical safety budgets, and both feasibility preservation and convergence of policy iteration are guaranteed. Experiments demonstrate that the method satisfies safety constraints, reduces discounted safety cost, and substantially lowers the reward hacking severity index, outperforming unconstrained and reward-shaping baselines.
πŸ“ Abstract
Engagement-optimized adaptive tutoring systems may prioritize short-term behavioral signals over sustained learning outcomes, creating structural incentives for reward hacking in reinforcement learning policies. We formalize this challenge as a constrained Markov decision process (CMDP) with mastery-conditioned feasibility, in which pedagogical safety constraints dynamically restrict admissible actions according to learner mastery and prerequisite structure. We introduce Mastery-Conditioned Constrained Policy Optimization (MC-CPO), a two-timescale primal-dual algorithm that integrates structural action masking with constrained policy optimization. In the tabular regime, we establish feasibility preservation and convergence to stationary feasible points under standard stochastic approximation conditions and derive a safety gap result showing that optimization within the mastery-conditioned feasible set can strictly dominate post-hoc filtering under identical safety budgets. Empirical validation is conducted in minimal and extended tabular environments and in a neural tutoring setting. Across 10 random seeds and one million training steps in the neural regime, MC-CPO satisfies constraint budgets within tolerance, reduces discounted safety costs relative to unconstrained and reward-shaped baselines, and substantially lowers the Reward Hacking Severity Index (RHSI). These results indicate that embedding pedagogical structure directly into the feasible action space provides a principled foundation for mitigating reward hacking in instructional reinforcement learning systems.
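The abstract describes the core mechanism as structural action masking inside a two-timescale primal-dual loop: a fast primal update on the policy restricted to the mastery-conditioned feasible set, and a slow dual update on a Lagrange multiplier that tracks the safety budget. The following is a minimal sketch of that idea in a toy tabular bandit-style environment; every name here (`feasible_actions`, `budget`, the reward/cost definitions) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular setting: states are mastery levels 0..2, actions are item
# difficulties 0..2. All specifics below are illustrative assumptions.
N_STATES, N_ACTIONS = 3, 3

def feasible_actions(state):
    # Mastery-conditioned feasibility: a learner at mastery m may only be
    # assigned items of difficulty <= m + 1 (a prerequisite-style rule).
    return np.arange(N_ACTIONS) <= state + 1

def masked_softmax(logits, mask):
    # Structural action masking: infeasible actions get probability zero.
    z = np.where(mask, logits, -np.inf)
    p = np.exp(z - z.max())
    return p / p.sum()

def reward(s, a):
    return float(a == s)   # proxy engagement: matched difficulty pays off

def cost(s, a):
    return float(a > s)    # safety cost: item is too hard for the learner

theta = np.zeros((N_STATES, N_ACTIONS))  # policy logits (primal variable)
lam = 0.0                                # Lagrange multiplier (dual variable)
budget = 0.2                             # per-step safety-cost budget

for step in range(5000):
    s = rng.integers(N_STATES)
    mask = feasible_actions(s)
    probs = masked_softmax(theta[s], mask)
    a = rng.choice(N_ACTIONS, p=probs)
    # Lagrangian objective: reward minus lambda-weighted safety cost.
    adv = reward(s, a) - lam * cost(s, a)
    grad = -probs
    grad[a] += 1.0
    theta[s] += 0.1 * adv * grad                         # fast primal step
    lam = max(0.0, lam + 0.01 * (cost(s, a) - budget))   # slow dual step

# After training, the masked policy concentrates on matched-difficulty
# items; infeasible actions keep exactly zero probability by construction.
s0_probs = masked_softmax(theta[0], feasible_actions(0))
print(s0_probs)
```

The key design point this sketch illustrates is the paper's distinction between masking and post-hoc filtering: because infeasible actions are excluded before sampling, the policy gradient never spends probability mass on them, rather than having unsafe actions vetoed after the fact.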
Problem

Research questions and friction points this paper is trying to address.

reward hacking
adaptive tutoring systems
constrained reinforcement learning
pedagogical safety
mastery-conditioned feasibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

constrained reinforcement learning
reward hacking
mastery-conditioned feasibility
primal-dual algorithm
adaptive tutoring systems
Oluseyi Olukola
School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, USA
Nick Rahimi
Associate Professor, University of Southern Mississippi
Cybersecurity · Trustworthy AI · Distributed Systems · P2P Network