Diffusion Policies for Out-of-Distribution Generalization in Offline Reinforcement Learning

📅 2023-07-10
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 21
Influential: 0
🤖 AI Summary
Offline reinforcement learning suffers from distributional shift, particularly out-of-distribution (OOD) states, which leads to poor policy generalization. To address this, we propose State Reconstruction for Diffusion Policies (SRDP), a diffusion-based policy framework that explicitly integrates state representation learning. SRDP jointly models state reconstruction and action generation, introducing an interpretable reconstruction supervision signal that improves robustness to unseen state distributions. We also introduce a 2D multimodal contextual bandit environment, realized both in simulation and on a real robot, and achieve state-of-the-art performance on the D4RL benchmark. On a sparse-reward continuous control navigation task, SRDP improves on the competing baseline by 167%. Real-world experiments on a UR10 robotic arm validate its OOD robustness and sim-to-real consistency.
📝 Abstract
Offline Reinforcement Learning (RL) methods leverage previous experiences to learn better policies than the behavior policy used for data collection. However, they face challenges handling distribution shifts due to the lack of online interaction during training. To this end, we propose a novel method named State Reconstruction for Diffusion Policies (SRDP) that incorporates state reconstruction feature learning in the recent class of diffusion policies to address the problem of out-of-distribution (OOD) generalization. Our method promotes learning of generalizable state representation to alleviate the distribution shift caused by OOD states. To illustrate the OOD generalization and faster convergence of SRDP, we design a novel 2D Multimodal Contextual Bandit environment and realize it on a 6-DoF real-world UR10 robot, as well as in simulation, and compare its performance with prior algorithms. In particular, we show the importance of the proposed state reconstruction via ablation studies. In addition, we assess the performance of our model on standard continuous control benchmarks (D4RL), namely the navigation of an 8-DoF ant and forward locomotion of half-cheetah, hopper, and walker2d, achieving state-of-the-art results. Finally, we demonstrate that our method can achieve 167% improvement over the competing baseline on a sparse continuous control navigation task where various regions of the state space are removed from the offline RL dataset, including the region encapsulating the goal.
Problem

Research questions and friction points this paper is trying to address.

Addresses out-of-distribution generalization in offline RL
Improves state representation to handle distribution shifts
Enhances performance in sparse continuous control tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

State Reconstruction for Diffusion Policies (SRDP); a training-objective sketch follows this list
Generalizable state representation learning
2D Multimodal Contextual Bandit environment
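
The items above describe SRDP at a high level: a diffusion policy trained jointly with an auxiliary state-reconstruction objective over a shared state encoder. The snippet below is a minimal sketch of how such a joint objective could look, assuming a DDPM-style noise-prediction loss (as in standard diffusion policies) plus an MSE reconstruction term. The network layout, timestep embedding, and the loss weight `recon_weight` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an SRDP-style joint objective (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRDPNetwork(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256, n_timesteps=100):
        super().__init__()
        self.n_timesteps = n_timesteps
        # Shared state encoder whose features feed both heads.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Diffusion-policy noise-prediction head:
        # input is (noisy action, normalized timestep, state features).
        self.noise_head = nn.Sequential(
            nn.Linear(action_dim + 1 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        # Auxiliary decoder reconstructing the state from the shared features.
        self.state_decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, noisy_action, t, state):
        feats = self.encoder(state)
        t_emb = t.float().unsqueeze(-1) / self.n_timesteps
        eps_pred = self.noise_head(torch.cat([noisy_action, t_emb, feats], dim=-1))
        state_recon = self.state_decoder(feats)
        return eps_pred, state_recon


def srdp_loss(model, state, action, alphas_cumprod, recon_weight=1.0):
    """One training step: DDPM noise-prediction loss + state-reconstruction loss."""
    batch = state.shape[0]
    t = torch.randint(0, model.n_timesteps, (batch,), device=state.device)
    noise = torch.randn_like(action)
    # Forward diffusion: corrupt the dataset action at a random timestep.
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noisy_action = a_bar.sqrt() * action + (1 - a_bar).sqrt() * noise
    eps_pred, state_recon = model(noisy_action, t, state)
    diffusion_loss = F.mse_loss(eps_pred, noise)
    recon_loss = F.mse_loss(state_recon, state)
    return diffusion_loss + recon_weight * recon_loss


if __name__ == "__main__":
    # Toy usage with random data and a standard linear beta schedule.
    state_dim, action_dim = 17, 6
    model = SRDPNetwork(state_dim, action_dim)
    betas = torch.linspace(1e-4, 2e-2, model.n_timesteps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    s = torch.randn(32, state_dim)
    a = torch.randn(32, action_dim).clamp(-1, 1)
    loss = srdp_loss(model, s, a, alphas_cumprod)
    loss.backward()
```

The key design point conveyed by the sketch is that both heads share the same state encoder, so the reconstruction term regularizes the representation used by the diffusion policy, which is the mechanism the paper credits for better OOD-state generalization.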
S. E. Ada
Department of Computer Engineering, Bogazici University, Istanbul, Turkey
E. Oztop
SISReC, OTRI, Osaka University, Japan, and Department of Computer Science, Ozyegin University, Istanbul, Turkey
Emre Ugur
Bogazici University
Artificial Intelligence · Robotics · Cognitive Robotics · Robot Learning · Neuro-Symbolic Robotics