🤖 AI Summary
This work addresses the challenges posed by action chunking in purely online reinforcement learning, where evaluating sequences of actions—rather than individual actions—exacerbates value estimation bias and compromises data efficiency. To tackle this, the authors propose SEAR, an off-policy online reinforcement learning algorithm tailored for action chunking. SEAR explicitly models the temporal structure inherent in action chunks and introduces a receding-horizon policy to effectively balance exploration and value estimation. Notably, SEAR is the first method to efficiently support action chunks as long as 20 steps within a purely online setting. Its temporal-structure-aware critic network combines the benefits of both small and large chunks, substantially improving sample efficiency and task performance. Empirical results on the MetaWorld benchmark demonstrate that SEAR outperforms current state-of-the-art online reinforcement learning approaches.
📝 Abstract
Action chunking can improve exploration and value estimation in long-horizon reinforcement learning, but it makes learning substantially harder: the critic must evaluate action sequences rather than single actions, which compounds approximation error and worsens data efficiency. As a result, existing action chunking methods, designed primarily for the offline and offline-to-online settings, have not achieved strong performance in purely online reinforcement learning. We introduce SEAR, an off-policy online reinforcement learning algorithm for action chunking. It exploits the temporal structure of action chunks and operates with a receding horizon, effectively combining the benefits of small and large chunk sizes. SEAR outperforms state-of-the-art online reinforcement learning methods on MetaWorld, training with chunk sizes up to 20.
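To make the receding-horizon idea concrete, here is a minimal, generic sketch of receding-horizon execution of action chunks. This is not SEAR's actual implementation: `chunk_policy`, the chunk horizon of 20, and the action dimension are all hypothetical placeholders. The point it illustrates is that the actor predicts a full chunk of actions at each step, but only a prefix of the chunk is executed before replanning from the new observation.

```python
import numpy as np

def chunk_policy(obs, horizon=20, act_dim=4, seed=0):
    # Hypothetical stand-in for a chunked actor: maps the current
    # observation to a (horizon, act_dim) sequence of actions.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((horizon, act_dim))

def receding_horizon_rollout(obs, steps=5, horizon=20, execute=1):
    # Receding-horizon control: predict a full chunk of `horizon`
    # actions, execute only the first `execute` of them, then replan.
    executed = []
    for _ in range(steps):
        chunk = chunk_policy(obs, horizon=horizon)
        for action in chunk[:execute]:
            executed.append(action)
            # A real rollout would call env.step(action) here and
            # update `obs`; omitted in this environment-free sketch.
    return executed

actions = receding_horizon_rollout(obs=np.zeros(8), steps=5)
print(len(actions))  # one executed action per replanning step
```

With `execute=1` the controller replans after every single action (maximal reactivity); setting `execute=horizon` would recover open-loop chunk execution, so this one parameter interpolates between the small-chunk and large-chunk regimes the abstract refers to.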