DEAS: DEtached value learning with Action Sequence for Scalable Offline RL

📅 2025-10-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Offline reinforcement learning faces significant challenges in long-horizon sequential decision-making tasks, including severe value estimation bias and excessively long planning horizons. To address these issues, we propose DEAS, a novel framework grounded in semi-Markov decision processes (SMDPs), where action sequences serve as atomic decision units, drastically compressing the effective planning horizon. DEAS introduces detached value learning, which explicitly decouples Q-network updates from target value computation to mitigate the overestimation bias that arises when learning from offline datasets. By integrating action-sequence modeling, regularization toward the offline behavior distribution, and an actor-critic architecture, DEAS enables stable and efficient training. Extensive evaluations on OGBench, RoboCasa simulation, and real-world robotic manipulation tasks demonstrate consistent improvements over state-of-the-art methods. Moreover, DEAS significantly enhances the execution performance of large-scale vision-language-action models.

πŸ“ Abstract
Offline reinforcement learning (RL) presents an attractive paradigm for training intelligent agents without expensive online interactions. However, current approaches still struggle with complex, long-horizon sequential decision making. In this work, we introduce DEtached value learning with Action Sequence (DEAS), a simple yet effective offline RL framework that leverages action sequences for value learning. These temporally extended actions provide richer information than single-step actions and can be interpreted through the options framework via semi-Markov decision process Q-learning, enabling reduction of the effective planning horizon by considering longer sequences at once. However, directly adopting such sequences in actor-critic algorithms introduces excessive value overestimation, which we address through detached value learning that steers value estimates toward in-distribution actions that achieve high return in the offline dataset. We demonstrate that DEAS consistently outperforms baselines on complex, long-horizon tasks from OGBench and can be applied to enhance the performance of large-scale Vision-Language-Action models that predict action sequences, significantly boosting performance in both RoboCasa Kitchen simulation tasks and real-world manipulation tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses scalable offline RL for long-horizon sequential decision making
Reduces value overestimation in actor-critic algorithms with action sequences
Enhances Vision-Language-Action models' performance in simulation and real-world tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses action sequences for value learning
Applies semi-Markov decision process Q-learning
Implements detached value learning to reduce overestimation