SEAR: Sample Efficient Action Chunking Reinforcement Learning

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges posed by action chunking in purely online reinforcement learning, where evaluating sequences of actions, rather than individual actions, exacerbates value estimation bias and harms data efficiency. To tackle this, the authors propose SEAR, an off-policy online reinforcement learning algorithm tailored for action chunking. SEAR explicitly models the temporal structure inherent in action chunks and introduces a receding-horizon policy to balance exploration against estimation error. Notably, SEAR is the first method to efficiently support action chunks of up to 20 steps in a purely online setting. Its temporal-structure-aware critic network combines the benefits of both small and large chunks, substantially improving sample efficiency and task performance. Empirical results on the MetaWorld benchmark demonstrate that SEAR outperforms current state-of-the-art online reinforcement learning approaches.
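To make the receding-horizon idea in the summary concrete, the following is an illustrative sketch (not the authors' code, and `dummy_chunk_policy`, `receding_horizon_rollout`, and all parameter names are hypothetical): a policy predicts a full chunk of H actions, but only the first k ≤ H are executed before re-planning, so fresh observations enter the loop every k steps while a critic could still evaluate full chunks.

```python
# Illustrative sketch of receding-horizon action chunking (assumptions, not
# the paper's implementation): plan a long chunk, execute only its prefix,
# then re-plan from the new observation.

def dummy_chunk_policy(obs, horizon):
    """Stand-in policy (hypothetical): returns `horizon` scalar actions."""
    return [obs * 0.1 + i for i in range(horizon)]

def receding_horizon_rollout(env_step, obs, chunk_len=20, exec_len=5,
                             total_steps=20):
    """Roll out by re-planning a `chunk_len`-step chunk every `exec_len` steps."""
    executed = []
    steps = 0
    while steps < total_steps:
        chunk = dummy_chunk_policy(obs, chunk_len)  # plan the full chunk
        for action in chunk[:exec_len]:             # execute only the prefix
            obs = env_step(obs, action)
            executed.append(action)
            steps += 1
            if steps >= total_steps:
                break
    return executed, obs
```

With `exec_len` equal to `chunk_len` this degenerates to open-loop chunk execution; with `exec_len = 1` it re-plans every step, which is the trade-off between small and large chunk sizes the summary alludes to.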

📝 Abstract
Action chunking can improve exploration and value estimation in long-horizon reinforcement learning, but it makes learning substantially harder, since the critic must evaluate action sequences rather than single actions, greatly increasing approximation and data efficiency challenges. As a result, existing action chunking methods, primarily designed for the offline and offline-to-online settings, have not achieved strong performance in purely online reinforcement learning. We introduce SEAR, an off-policy online reinforcement learning algorithm for action chunking. It exploits the temporal structure of action chunks and operates with a receding horizon, effectively combining the benefits of small and large chunk sizes. SEAR outperforms state-of-the-art online reinforcement learning methods on MetaWorld, training with chunk sizes up to 20.
Problem

Research questions and friction points this paper is trying to address.

action chunking
online reinforcement learning
data efficiency
long horizon
critic approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

action chunking
online reinforcement learning
sample efficiency
receding horizon
off-policy learning
C. F. Maximilian Nagy
Autonomous Learning Robots, Karlsruhe Institute of Technology; FZI Forschungszentrum Informatik, Karlsruhe
Onur Celik
PhD Student, Karlsruhe Institute of Technology (KIT)
Robot Learning
Emiliyan Gospodinov
Autonomous Learning Robots, Karlsruhe Institute of Technology
Florian Seligmann
Autonomous Learning Robots, Karlsruhe Institute of Technology
Weiran Liao
Autonomous Learning Robots, Karlsruhe Institute of Technology
Aryan Kaushik
CIO at RakFort, Adjunct Professor at IIITD
6G, Wireless Communications, Signal Processing, AI, Computing
Gerhard Neumann
Professor, Karlsruhe Institute of Technology (KIT)
Robotics, Machine Learning