AI Summary
Offline reinforcement learning faces significant challenges in long-horizon sequential decision-making tasks, including severe value estimation bias and excessively long planning horizons. To address these issues, we propose DEAS, a novel framework grounded in semi-Markov decision processes (SMDPs), where action sequences serve as atomic decision units, drastically compressing the effective planning horizon. DEAS introduces detached value learning, which explicitly decouples Q-network updates from target value computation to mitigate the overestimation bias that arises when learning from offline datasets. By integrating action-sequence modeling, offline behavior distribution regularization, and an actor-critic architecture, DEAS enables stable and efficient training. Extensive evaluations on OGBench, RoboCasa simulation, and real-world robotic manipulation tasks demonstrate consistent superiority over state-of-the-art methods. Moreover, DEAS significantly enhances the execution performance of large-scale vision-language-action models.
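The horizon-compression idea above can be made concrete with the SMDP Bellman backup: when a length-k action sequence is treated as one atomic decision, the k within-chunk rewards are discounted inside the chunk and the successor state is bootstrapped with gamma to the power k, so a single backup spans k environment steps. The following is a minimal numpy sketch of that target, not the authors' implementation; the reward values, gamma, and bootstrap value are illustrative assumptions.

```python
import numpy as np

def smdp_target(rewards, gamma, bootstrap_value):
    """k-step SMDP Bellman target for an action sequence of length k.

    The k rewards collected while executing the sequence are discounted
    within the chunk, and the value of the state reached after the full
    sequence is bootstrapped with gamma**k, so one backup covers k
    environment steps (an H-step task needs only H/k decisions).
    """
    k = len(rewards)
    discounts = gamma ** np.arange(k)  # [1, gamma, ..., gamma**(k-1)]
    return float(np.dot(discounts, rewards) + gamma**k * bootstrap_value)

# Example: a 4-step action sequence compresses 4 env steps into 1 decision.
target = smdp_target(rewards=[0.0, 0.0, 0.0, 1.0],
                     gamma=0.99, bootstrap_value=0.5)
```

With k = 1 this reduces to the standard one-step TD target, which is one way to see why longer sequences shrink the effective planning horizon.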
Abstract
Offline reinforcement learning (RL) presents an attractive paradigm for training intelligent agents without expensive online interactions. However, current approaches still struggle with complex, long-horizon sequential decision-making. In this work, we introduce DEtached value learning with Action Sequence (DEAS), a simple yet effective offline RL framework that leverages action sequences for value learning. These temporally extended actions provide richer information than single-step actions and can be interpreted through the options framework via semi-Markov decision process (SMDP) Q-learning, reducing the effective planning horizon by considering longer sequences at once. However, directly adopting such sequences in actor-critic algorithms introduces excessive value overestimation, which we address through detached value learning that steers value estimates toward in-distribution actions that achieve high return in the offline dataset. We demonstrate that DEAS consistently outperforms baselines on complex, long-horizon tasks from OGBench and can be applied to enhance the performance of large-scale Vision-Language-Action models that predict action sequences, significantly boosting performance in both RoboCasa Kitchen simulation tasks and real-world manipulation tasks.
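One common way to steer value estimates toward high-return in-distribution actions without ever querying out-of-distribution actions is expectile regression over dataset actions, as in implicit Q-learning. The sketch below illustrates that mechanism only; whether DEAS uses this exact loss is an assumption, and the sample Q-values and tau are made up for illustration.

```python
import numpy as np

def expectile_loss(diff, tau=0.9):
    """Asymmetric L2 loss: positive errors weighted by tau, negative by 1-tau.

    With tau > 0.5, the value v minimizing the loss over Q-values of
    dataset actions lands near the upper end of those Q-values, i.e. it
    tracks the best in-distribution action rather than a max over
    arbitrary (possibly out-of-distribution) actions.
    """
    weight = np.where(diff > 0, tau, 1 - tau)
    return weight * diff**2

# Toy fit: Q-values of four dataset action sequences at one state.
q_samples = np.array([0.2, 0.5, 0.9, 1.0])
grid = np.linspace(0.0, 1.2, 1201)
losses = [expectile_loss(q_samples - v, tau=0.9).mean() for v in grid]
v_star = grid[int(np.argmin(losses))]  # sits well above the mean of q_samples
```

A detached target computed this way keeps the critic's bootstrap anchored to returns actually achieved in the data, which is one plausible reading of how the overestimation introduced by long action sequences can be suppressed.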