🤖 AI Summary
To address the challenge of out-of-distribution action selection caused by distributional shift in offline reinforcement learning, this paper proposes a Wasserstein distance-regularized policy optimization method. The approach models structure-preserving mappings between state-action joint distributions using optimal transport theory and directly parameterizes the Wasserstein distance via input-convex neural networks (ICNNs), thereby avoiding the instability of density-ratio estimation and the need for adversarial discriminator training. This enables end-to-end, model-free, and stable policy learning solely from static datasets. Evaluated on the D4RL benchmark, the method achieves performance competitive with or superior to state-of-the-art offline RL algorithms, demonstrating its effectiveness, robustness, and generalization capability across diverse tasks and datasets.
📝 Abstract
Offline reinforcement learning (RL) aims to learn an optimal policy from a static dataset, making it particularly valuable in scenarios where data collection is costly, such as robotics. A major challenge in offline RL is distributional shift, where the learned policy deviates from the dataset distribution, potentially leading to unreliable out-of-distribution actions. To mitigate this issue, regularization techniques have been employed. While many existing methods rely on density-ratio-based measures, such as the $f$-divergence, for regularization, we propose an approach that utilizes the Wasserstein distance, which is robust to out-of-distribution data and captures the similarity between actions. Our method employs input-convex neural networks (ICNNs) to model optimal transport maps, enabling the computation of the Wasserstein distance in a discriminator-free manner, thereby avoiding adversarial training and ensuring stable learning. Our approach demonstrates comparable or superior performance to widely used existing methods on the D4RL benchmark dataset. The code is available at https://github.com/motokiomura/Q-DOT.
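To make the ICNN idea concrete, below is a minimal PyTorch sketch of an input-convex network in the style of Amos et al. (2017): hidden-to-hidden weights are kept non-negative and activations are convex and non-decreasing, so the output is convex in the input, and its gradient can serve as an optimal transport map via Brenier's theorem. The layer sizes, softplus activation, and weight-clamping strategy are illustrative assumptions and not taken from the paper's released code.

```python
# Minimal sketch of an input-convex neural network (ICNN), the building block
# used to parameterize optimal transport maps. Architecture follows Amos et al. (2017);
# all hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """f(x) convex in x: non-negative weights on the hidden path,
    convex and non-decreasing activations."""

    def __init__(self, in_dim, hidden_dim=64, n_layers=3):
        super().__init__()
        # "Passthrough" layers from the input x to every hidden layer (unconstrained).
        self.Wx = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(n_layers)]
            + [nn.Linear(in_dim, 1)]
        )
        # Hidden-to-hidden layers; their weights are clamped to be non-negative.
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim, bias=False) for _ in range(n_layers - 1)]
            + [nn.Linear(hidden_dim, 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))  # first layer: convex in x
        for Wx, Wz in zip(self.Wx[1:-1], self.Wz[:-1]):
            # Non-negative hidden weights + convex, non-decreasing activation
            # preserve convexity of z with respect to x.
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return self.Wx[-1](x) + F.linear(z, self.Wz[-1].weight.clamp(min=0))


# With f convex, its gradient with respect to the input gives a candidate
# transport map, so a transport cost can be estimated without a discriminator.
f = ICNN(in_dim=6)
x = torch.randn(32, 6, requires_grad=True)
transport = torch.autograd.grad(f(x).sum(), x)[0]  # shape (32, 6)
```

In this sketch the non-negativity constraint is enforced by clamping at forward time rather than by reparameterizing the weights; either choice keeps the function convex in its input, which is what allows the Wasserstein distance to be computed without adversarial training.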