Sample-Efficient Policy Constraint Offline Deep Reinforcement Learning based on Sample Filtering

📅 2025-12-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Offline reinforcement learning suffers from two interrelated challenges: distributional shift and degradation of the reference policy due to low-quality data, resulting in poor sample efficiency and suboptimal performance for policy-constrained methods. To address this, we propose an episode-level scoring mechanism for high-quality trajectory selection: for the first time, we jointly score state-transition samples using both episode-average reward and discounted return, followed by lightweight threshold-based filtering to discard low-quality segments. This mechanism is seamlessly integrated into policy-constrained frameworks (e.g., CQL, BCQ), preserving behavioral cloning constraints while enhancing offline optimization stability. Evaluated on the D4RL benchmark, our approach significantly outperforms mainstream baselines—accelerating training by 37% and improving final policy returns by 22% on average—demonstrating its dual advantages in generalizability and practical efficacy.

Technology Category

Application Category

📝 Abstract
Offline reinforcement learning (RL) aims to learn a policy that maximizes the expected return using a given static dataset of transitions. However, offline RL faces the distribution shift problem. The policy constraint offline RL method is proposed to solve the distribution shift problem. During the policy constraint offline RL training, it is important to ensure the difference between the learned policy and behavior policy within a given threshold. Thus, the learned policy heavily relies on the quality of the behavior policy. However, a problem exists in existing policy constraint methods: if the dataset contains many low-reward transitions, the learned will be contained with a suboptimal reference policy, leading to slow learning speed, low sample efficiency, and inferior performances. This paper shows that the sampling method in policy constraint offline RL that uses all the transitions in the dataset can be improved. A simple but efficient sample filtering method is proposed to improve the sample efficiency and the final performance. First, we evaluate the score of the transitions by average reward and average discounted reward of episodes in the dataset and extract the transition samples of high scores. Second, the high-score transition samples are used to train the offline RL algorithms. We verify the proposed method in a series of offline RL algorithms and benchmark tasks. Experimental results show that the proposed method outperforms baselines.
Problem

Research questions and friction points this paper is trying to address.

Improves sample efficiency in offline reinforcement learning.
Addresses distribution shift by filtering high-quality transitions.
Enhances policy learning by selecting high-reward dataset samples.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Filters high-score transitions using reward metrics
Improves sample efficiency in offline reinforcement learning
Enhances policy performance by selective dataset training
🔎 Similar Papers
No similar papers found.
Yuanhao Chen
Yuanhao Chen
Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics and School of Intelligence Science and Engineering, the Harbin Institute of Technology Shenzhen, Shenzhen, 518055, China.
Q
Qi Liu
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110819, China.
P
Pengbin Chen
Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics and School of Intelligence Science and Engineering, the Harbin Institute of Technology Shenzhen, Shenzhen, 518055, China.
Zhongjian Qiao
Zhongjian Qiao
Tsinghua University
Reinforcement LearningDeep Learning
Y
Yanjie Li
Guangdong Key Laboratory of Intelligent Morphing Mechanisms and Adaptive Robotics and School of Intelligence Science and Engineering, the Harbin Institute of Technology Shenzhen, Shenzhen, 518055, China.