🤖 AI Summary
Real-world offline RL datasets are frequently corrupted by heterogeneous noise in high-dimensional states, actions, and rewards, making it fundamentally challenging to achieve robustness and generalization simultaneously. To address multi-source joint corruption, this paper proposes the first diffusion-based recovery framework for offline RL. Our approach comprises three core contributions: (1) an ambient denoising diffusion probabilistic model (Ambient DDPM) with theoretical convergence guarantees under partial corruption; (2) a noise-prediction-driven mechanism that disentangles clean and corrupted samples (sketched in code below); and (3) a two-stage diffusion repair paradigm enabling plug-and-play integration with mainstream offline RL algorithms. Evaluated on the MuJoCo, Kitchen, and Adroit benchmarks under diverse noise settings, our method consistently improves policy performance, achieves state-of-the-art results, and effectively mitigates the coupled degradation of states, actions, and rewards.
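The disentanglement mechanism (contribution 2) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of the idea, not the authors' code: a trained diffusion noise predictor is applied to each transition, and samples whose average noise-prediction error is high are flagged as corrupted. The MLP architecture, noise schedule, number of scoring draws, and 80/20 split rule are all illustrative assumptions.

```python
# Hedged sketch of noise-prediction-based clean/corrupt disentanglement.
# Not the authors' implementation; architecture and thresholds are assumed.
import torch
import torch.nn as nn

T = 100                                        # diffusion steps (assumed)
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class NoisePredictor(nn.Module):
    """Tiny MLP standing in for the Ambient DDPM's eps-prediction network."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(),
                                 nn.Linear(128, dim))
    def forward(self, x_t, t):
        t_feat = (t.float() / T).unsqueeze(-1)  # scalar timestep embedding
        return self.net(torch.cat([x_t, t_feat], dim=-1))

@torch.no_grad()
def corruption_score(model, x, n_draws=8):
    """Average eps-prediction error per sample; corrupted samples are
    expected to score higher because the model fits them poorly."""
    scores = torch.zeros(x.shape[0])
    for _ in range(n_draws):
        t = torch.randint(0, T, (x.shape[0],))
        eps = torch.randn_like(x)
        a = alphas_bar[t].unsqueeze(-1)
        x_t = a.sqrt() * x + (1 - a).sqrt() * eps  # forward noising q(x_t | x_0)
        scores += (model(x_t, t) - eps).pow(2).mean(dim=-1)
    return scores / n_draws

# Usage: split a dataset of flattened (state, action, reward) vectors.
x = torch.randn(1024, 24)                      # stand-in transitions
model = NoisePredictor(dim=24)                 # assume trained as Ambient DDPM
s = corruption_score(model, x)
clean_mask = s < s.quantile(0.8)               # assumed 80/20 split rule
clean, corrupted = x[clean_mask], x[~clean_mask]
```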
📝 Abstract
Real-world datasets collected from sensors or human inputs are prone to noise and errors, posing significant challenges for applying offline reinforcement learning (RL). While existing methods have made progress in addressing corrupted actions and rewards, they remain insufficient for handling corruption in high-dimensional state spaces and for cases where multiple elements of the dataset are corrupted simultaneously. Diffusion models, known for their strong denoising capabilities, offer a promising direction for this problem, but their tendency to overfit noisy samples limits their direct applicability. To overcome this, we propose Ambient Diffusion-Guided Dataset Recovery (ADG), a novel approach that pioneers the use of diffusion models to tackle data corruption in offline RL. First, we introduce Ambient Denoising Diffusion Probabilistic Models (DDPM) learned from approximated distributions, which enable training on partially corrupted datasets with theoretical guarantees. Second, we exploit the noise-prediction property of Ambient DDPM to distinguish clean from corrupted data, and then use the clean subset to train a standard DDPM. Third, we employ the trained standard DDPM to refine the previously identified corrupted data, enhancing data quality for subsequent offline RL training. A notable strength of ADG is its versatility: it can be seamlessly integrated with any offline RL algorithm. Experiments on a range of benchmarks, including MuJoCo, Kitchen, and Adroit, demonstrate that ADG effectively mitigates the impact of corrupted data and improves the robustness of offline RL under various noise settings, achieving state-of-the-art results.
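To make the third step concrete, here is a hedged sketch of the repair stage under the same assumptions as the snippet above (it reuses `T`, `betas`, `alphas_bar`, and `NoisePredictor`): the flagged samples are partially noised to an intermediate step and then denoised with the standard DDPM trained on the clean subset. The noise-then-denoise refinement rule and the choice `t_star=30` are assumptions for illustration; the paper's exact repair procedure may differ.

```python
# Hedged sketch of DDPM-based repair of the flagged samples.
# Reuses T, betas, alphas_bar, NoisePredictor from the previous sketch.
@torch.no_grad()
def repair(model, x_corrupt, t_star=30):
    """Noise corrupted samples to step t_star, then run the learned
    reverse chain back to t = 0 to produce refined samples."""
    B = x_corrupt.shape[0]
    a_star = alphas_bar[t_star]
    x = a_star.sqrt() * x_corrupt + (1 - a_star).sqrt() * torch.randn_like(x_corrupt)
    for t in reversed(range(t_star)):
        t_vec = torch.full((B,), t, dtype=torch.long)
        eps_hat = model(x, t_vec)
        alpha_t = 1.0 - betas[t]
        a_bar = alphas_bar[t]
        # Standard DDPM posterior mean for x_{t-1} given x_t and eps_hat.
        mean = (x - betas[t] / (1 - a_bar).sqrt() * eps_hat) / alpha_t.sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x

# Usage: `model` here is a standard DDPM trained on the `clean` subset.
repaired = repair(model, corrupted)
recovered_dataset = torch.cat([clean, repaired], dim=0)  # input to offline RL
```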