🤖 AI Summary
Catastrophic forgetting induced by time-varying external contexts in non-stationary environments poses a fundamental challenge for online reinforcement learning (RL).
Method: We propose Locally Constrained Policy Optimization (LCPO), a task-agnostic method that anchors policy outputs on past experiences without requiring task labels, and jointly optimizes current return and retention of historical knowledge via cross-context experience sampling and local KL-divergence constraints. LCPO operates within an online policy gradient framework, requiring neither environment resets nor experience replay.
Contribution/Results: Evaluated on MuJoCo, classical control, and real-world system benchmarks, LCPO significantly outperforms existing online and offline RL methods, achieving performance close to that of an oracle agent trained offline on the full dataset. To our knowledge, LCPO is the first approach to enable robust adaptation to dynamic contextual non-stationarity under purely online settings.
📝 Abstract
We study online reinforcement learning (RL) in non-stationary environments, where a time-varying exogenous context process affects the environment dynamics. Online RL is challenging in such environments due to "catastrophic forgetting" (CF): the agent tends to forget prior knowledge as it trains on new experiences. Prior approaches to mitigating this issue either assume task labels (which are often unavailable in practice) or use off-policy methods that suffer from instability and poor performance. We present Locally Constrained Policy Optimization (LCPO), an online RL approach that combats CF by anchoring policy outputs on old experiences while optimizing the return on current experiences. To perform this anchoring, LCPO locally constrains policy optimization using samples from experiences that lie outside the current context distribution. We evaluate LCPO in MuJoCo, classic control, and computer systems environments with a variety of synthetic and real context traces, and find that it outperforms state-of-the-art on-policy and off-policy RL methods in the non-stationary setting, while achieving results on par with an "oracle" agent trained offline across all context traces.
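To make the anchoring idea concrete, the sketch below shows one plausible form of a locally constrained objective: the usual policy-gradient loss plus a KL penalty that activates only when the current policy's outputs drift from its earlier outputs on "anchor" states sampled from outside the current context distribution. The function names, the hinge-style penalty, and the coefficient/limit values here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kl_categorical(p, q, eps=1e-8):
    """Row-wise KL(p || q) for batches of categorical distributions
    (each row of p and q is a probability vector over actions)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def anchored_loss(pg_loss, old_probs_anchor, cur_probs_anchor,
                  kl_coef=10.0, kl_limit=0.01):
    """Sketch of an LCPO-style anchored objective (hypothetical form):
    keep the ordinary policy-gradient loss on current-context data, and
    add a penalty only when the mean KL between the policy's old and
    current outputs on out-of-context anchor states exceeds a local limit.
    """
    kl = kl_categorical(old_probs_anchor, cur_probs_anchor).mean()
    penalty = kl_coef * max(kl - kl_limit, 0.0)  # hinge: free inside the limit
    return pg_loss + penalty, kl

# Usage: if the policy's action distribution on anchor states is unchanged,
# the penalty vanishes and the loss reduces to the plain PG loss; drifting
# on those states makes the loss strictly worse.
old = np.array([[0.5, 0.5]])
loss_same, _ = anchored_loss(1.0, old, np.array([[0.5, 0.5]]))
loss_drift, _ = anchored_loss(1.0, old, np.array([[0.9, 0.1]]))
```

Because the constraint is evaluated only on out-of-context samples, the policy remains free to adapt within the current context, which is the intended balance between plasticity and retention.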