🤖 AI Summary
This paper addresses key challenges that arise in continual reinforcement learning (CRL) when artificial episode terminations and physical robot resets are removed: restricted state-space exploration, policy degradation, and high sensitivity to the discount factor γ. To this end, we propose Continual Soft Actor-Critic (C-SAC), a variant of SAC tailored to infinite-horizon, reset-free operation. Methodologically, C-SAC introduces an adaptive entropy regularization mechanism to counteract diminished exploration in the absence of resets, and reformulates both the reward structure and the training framework to suit continual, non-episodic interaction. We evaluate C-SAC across multiple simulated benchmarks and a real-world robotic visual navigation task. Results demonstrate significantly improved long-term performance stability, markedly reduced sensitivity to γ, and performance that matches or exceeds standard SAC trained with resets. This work establishes a scalable algorithmic foundation for continual autonomous learning in embodied intelligence systems.
📝 Abstract
When creating new reinforcement learning tasks, practitioners often accelerate the learning process by incorporating into the task several accessory components, such as breaking the environment interaction into independent episodes and frequently resetting the environment. Although they can enable the learning of complex intelligent behaviors, such task accessories can result in unnatural task setups and hinder long-term performance in the real world. In this work, we explore the challenges of learning without episode terminations and robot embodiment resets using the Soft Actor-Critic (SAC) algorithm. To learn without terminations, we present a continuing version of the SAC algorithm and show that, with simple modifications to the reward functions of existing tasks, continuing SAC can perform as well as or better than episodic SAC while reducing the sensitivity of performance to the value of the discount rate $\gamma$. On a modified Gym Reacher task, we investigate possible explanations for the failure of continuing SAC when learning without embodiment resets. Our results suggest that embodiment resets help with exploration of the state space in the SAC algorithm, and that removing embodiment resets can lead to poor exploration of the state space and to failure to learn or significantly slower learning. Finally, on additional simulated tasks and a real-robot vision task, we show that increasing the entropy of the policy when performance trends worse or remains static is an effective intervention for recovering the performance lost due to not using embodiment resets.
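The final intervention described above, raising policy entropy when performance stalls, can be sketched as a small monitor that compares two recent windows of returns and bumps SAC's target entropy when no improvement is seen. This is a minimal illustrative sketch, not the paper's implementation: the class name, window size, boost amount, and improvement threshold are all hypothetical choices.

```python
from collections import deque


class EntropyBooster:
    """Raise SAC's target entropy when recent performance is flat or worsening.

    Illustrative sketch of the entropy-increase intervention; all
    hyperparameters here are assumptions, not the paper's values.
    """

    def __init__(self, target_entropy, window=100, boost=0.5, min_improvement=0.0):
        self.target_entropy = target_entropy  # SAC entropy target, e.g. -dim(action space)
        self.window = window                  # number of measurements per comparison window
        self.boost = boost                    # amount to raise the target entropy by
        self.min_improvement = min_improvement
        self.returns = deque(maxlen=2 * window)

    def update(self, recent_return):
        """Record a performance measurement; boost the target if the trend is stagnant."""
        self.returns.append(recent_return)
        if len(self.returns) < 2 * self.window:
            return self.target_entropy  # not enough data to compare windows yet
        older = sum(list(self.returns)[: self.window]) / self.window
        newer = sum(list(self.returns)[self.window:]) / self.window
        if newer - older <= self.min_improvement:  # flat or worsening trend
            self.target_entropy += self.boost
            self.returns.clear()  # avoid repeated boosts from the same measurements
        return self.target_entropy
```

In an actual continuing SAC loop, the returned value would replace the fixed target in the temperature (alpha) update, so a stagnant policy is pushed back toward higher-entropy, more exploratory behavior.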