🤖 AI Summary
This work addresses the scalability limitations of traditional self-paced curriculum learning in high-dimensional context spaces, where the required inner-loop optimization is computationally prohibitive. The authors propose Self-Paced Gaussian Curriculum Learning (SPGL), which introduces a closed-form update rule for Gaussian context distributions, enabling efficient curriculum generation without numerical optimization. This approach significantly reduces computational overhead while preserving sample efficiency, and the update comes with theoretical convergence guarantees. Empirical evaluations on benchmark tasks (Point Mass, Lunar Lander, and Ball Catching) show that SPGL matches or exceeds the performance of existing methods, with particularly strong results in partially observable settings involving hidden contexts. Moreover, the learned context distributions converge noticeably more stably.
📝 Abstract
Curriculum learning improves reinforcement learning (RL) efficiency by sequencing tasks from simple to complex. However, many self-paced curriculum methods rely on computationally expensive inner-loop optimizations, limiting their scalability in high-dimensional context spaces. In this paper, we propose Self-Paced Gaussian Curriculum Learning (SPGL), a novel approach that avoids costly numerical procedures by leveraging a closed-form update rule for Gaussian context distributions. SPGL maintains the sample efficiency and adaptability of traditional self-paced methods while substantially reducing computational overhead. We provide theoretical guarantees on convergence and validate our method across several contextual RL benchmarks, including the Point Mass, Lunar Lander, and Ball Catching environments. Experimental results show that SPGL matches or outperforms existing curriculum methods, especially in hidden context scenarios, and achieves more stable context distribution convergence. Our method offers a scalable, principled alternative for curriculum generation in challenging continuous and partially observable domains.
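The paper does not reproduce its update rule here, but the core idea (replacing an inner-loop optimizer with a closed-form update of a Gaussian context distribution) can be illustrated with a minimal sketch. The snippet below is an assumption-laden stand-in, not SPGL itself: it uses a generic reward-weighted moment-matching update (CEM/REPS-style), where all names (`closed_form_gaussian_update`, `alpha`, `temperature`) are hypothetical.

```python
import numpy as np

def closed_form_gaussian_update(contexts, returns, mean, cov,
                                alpha=0.5, temperature=1.0):
    """One closed-form update of a Gaussian context distribution.

    Illustrative sketch only: reward-weighted maximum likelihood
    (moment matching), interpolated with the current parameters by
    step size `alpha`. No inner-loop numerical optimizer is needed.
    """
    # Softmax weights favour contexts where the agent currently does well.
    w = np.exp((returns - returns.max()) / temperature)
    w /= w.sum()
    # Weighted MLE of mean and covariance -- both available in closed form.
    new_mean = w @ contexts
    centered = contexts - new_mean
    new_cov = (w[:, None] * centered).T @ centered \
        + 1e-6 * np.eye(contexts.shape[1])  # regularize for stability
    # Interpolate so the curriculum distribution shifts gradually.
    mean = (1 - alpha) * mean + alpha * new_mean
    cov = (1 - alpha) * cov + alpha * new_cov
    return mean, cov

# Toy usage: contexts near the origin are "easier" (higher return),
# so the sampling distribution should drift toward the origin.
rng = np.random.default_rng(0)
mean, cov = np.array([2.0, 2.0]), np.eye(2)
for _ in range(20):
    contexts = rng.multivariate_normal(mean, cov, size=256)
    returns = -np.linalg.norm(contexts, axis=1)
    mean, cov = closed_form_gaussian_update(contexts, returns, mean, cov)
```

Because every step is a weighted-average computation rather than a numerical solve, the per-update cost is linear in the number of sampled contexts, which is the kind of saving the abstract attributes to avoiding inner-loop optimization.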