🤖 AI Summary
Offline reinforcement learning suffers from two interrelated challenges: distributional shift and degradation of the reference policy due to low-quality data, resulting in poor sample efficiency and suboptimal performance for policy-constrained methods. To address this, we propose an episode-level scoring mechanism for high-quality trajectory selection: for the first time, we jointly score state-transition samples using both episode-average reward and discounted return, followed by lightweight threshold-based filtering to discard low-quality segments. This mechanism is seamlessly integrated into policy-constrained frameworks (e.g., CQL, BCQ), preserving behavioral cloning constraints while enhancing offline optimization stability. Evaluated on the D4RL benchmark, our approach significantly outperforms mainstream baselines—accelerating training by 37% and improving final policy returns by 22% on average—demonstrating its dual advantages in generalizability and practical efficacy.
📝 Abstract
Offline reinforcement learning (RL) aims to learn a policy that maximizes the expected return using a given static dataset of transitions. However, offline RL faces the distribution shift problem. The policy constraint offline RL method is proposed to solve the distribution shift problem. During the policy constraint offline RL training, it is important to ensure the difference between the learned policy and behavior policy within a given threshold. Thus, the learned policy heavily relies on the quality of the behavior policy. However, a problem exists in existing policy constraint methods: if the dataset contains many low-reward transitions, the learned will be contained with a suboptimal reference policy, leading to slow learning speed, low sample efficiency, and inferior performances. This paper shows that the sampling method in policy constraint offline RL that uses all the transitions in the dataset can be improved. A simple but efficient sample filtering method is proposed to improve the sample efficiency and the final performance. First, we evaluate the score of the transitions by average reward and average discounted reward of episodes in the dataset and extract the transition samples of high scores. Second, the high-score transition samples are used to train the offline RL algorithms. We verify the proposed method in a series of offline RL algorithms and benchmark tasks. Experimental results show that the proposed method outperforms baselines.