🤖 AI Summary
This work addresses the inefficiency of pseudo-count-based anti-exploration methods in offline reinforcement learning, which suffer from the curse of dimensionality and information loss when discretizing continuous state-action spaces. To mitigate these issues, the authors propose a novel discretization mechanism that integrates a multi-codebook Vector Quantized Variational Autoencoder (VQVAE) with Fuzzy C-Means clustering. The multi-codebook architecture enhances representational capacity, while fuzzy clustering refines codebook updates to better preserve information during discretization. This approach significantly improves the stability and sample efficiency of pseudo-count anti-exploration, outperforming state-of-the-art methods across multiple challenging tasks in the D4RL benchmark while simultaneously reducing computational overhead.
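The counting mechanism the summary describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `quantize`/`PseudoCounter` names, the nearest-neighbour lookup per codebook, and the `1/sqrt(N+1)` penalty form are all assumptions made for the sake of the example — the key idea is only that a continuous state-action pair maps to a tuple of discrete codes (one per codebook), and rare tuples receive a large penalty.

```python
import numpy as np

def quantize(x, codebooks):
    """Map a continuous vector to one code index per codebook
    (nearest-neighbour lookup, as in a VQVAE bottleneck)."""
    codes = []
    for cb in codebooks:  # cb: (num_codes, dim) array
        dists = np.linalg.norm(cb - x, axis=1)
        codes.append(int(np.argmin(dists)))
    return tuple(codes)

class PseudoCounter:
    """Count discretized state-action pairs and turn the count into
    an anti-exploration penalty that is large for rare/unseen pairs."""
    def __init__(self, codebooks):
        self.codebooks = codebooks
        self.counts = {}  # code tuple -> visit count

    def update(self, sa):
        code = quantize(sa, self.codebooks)
        self.counts[code] = self.counts.get(code, 0) + 1

    def penalty(self, sa):
        n = self.counts.get(quantize(sa, self.codebooks), 0)
        return 1.0 / np.sqrt(n + 1.0)  # illustrative penalty shape
```

In practice the penalty would be subtracted from the reward (or added to the critic target) during offline policy evaluation, so that the learned policy is pushed back toward well-covered regions of the dataset.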
📝 Abstract
Pseudo-count is an effective anti-exploration technique in offline reinforcement learning (RL): it counts state-action pairs and imposes a large penalty on rare or unseen ones. Existing anti-exploration methods count continuous state-action pairs by discretizing them, but often suffer from the curse of dimensionality and from information loss during discretization, which reduces efficiency and performance and can even cause policy learning to fail. In this paper, a novel anti-exploration method for offline RL based on the Vector Quantized Variational Autoencoder (VQVAE) and fuzzy clustering is proposed. We first propose an efficient pseudo-count method that discretizes state-action pairs with a multi-codebook VQVAE, and design an offline RL anti-exploration method based on this pseudo-count to handle the curse of dimensionality and improve learning efficiency. In addition, a codebook update mechanism based on fuzzy C-means (FCM) clustering is developed to improve the utilization rate of codebook vectors, addressing the information loss issue in the discretization process. The proposed method is evaluated on the Datasets for Deep Data-Driven Reinforcement Learning (D4RL) benchmark, and experimental results show that it outperforms state-of-the-art (SOTA) methods on multiple complex tasks while requiring less computing cost.
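The FCM-based codebook update mentioned in the abstract can be sketched as below. This is a hedged illustration under stated assumptions, not the paper's exact mechanism: the function names, the fuzziness parameter `m=2`, and the full-batch update form are choices made only to show the standard fuzzy C-means machinery — every code vector moves toward all encoder outputs, weighted by soft membership, so rarely selected codes are still updated rather than left dead.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-8):
    """Standard fuzzy C-means membership matrix U of shape
    (n_points, n_centers); each row sums to 1."""
    # pairwise distances d[i, j] = ||X[i] - centers[j]||
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # u[i, j] = 1 / sum_l (d[i, j] / d[i, l])^(2/(m-1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_update_codebook(X, codebook, m=2.0):
    """One soft (FCM-style) codebook update: each code vector becomes
    a membership-weighted mean of the inputs, so unused codes still
    drift toward the data instead of collapsing."""
    W = fcm_memberships(X, codebook, m) ** m      # (n, k) weights
    return (W.T @ X) / W.sum(axis=0)[:, None]    # (k, dim) new codebook
```

Compared with the hard nearest-neighbour averaging of a standard VQVAE codebook update, the soft memberships spread each input's influence over several codes, which is one plausible way to raise codebook utilization as the abstract describes.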