RAD: Retrieval High-quality Demonstrations to Enhance Decision-making

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline reinforcement learning, data sparsity and insufficient coverage of expert trajectories hinder long-horizon planning, particularly when state transitions exhibit limited overlap, thereby constraining generalization to out-of-distribution states. To address this, we propose a retrieval-augmented diffusion-based trajectory generation framework. First, high-quality target states are dynamically retrieved based on state similarity and return estimation. Then, a conditional diffusion model generates coherent, high-return intermediate trajectory segments conditioned on the retrieved states, enabling end-to-end flexible trajectory stitching. This approach overcomes the limitations of conventional heuristic stitching methods and significantly enhances generalization to unseen states. Evaluated on multiple standard offline RL benchmarks, our method achieves performance competitive with or superior to state-of-the-art approaches, demonstrating its effectiveness and robustness in complex decision-making tasks.
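The retrieval step described above (scoring dataset states by similarity to the current state and by estimated return) could be sketched as follows. The Euclidean distance, min-max normalization, and mixing weight `alpha` are illustrative assumptions, not RAD's exact formulation:

```python
import numpy as np

def retrieve_target_state(current_state, dataset_states, returns, k=5, alpha=0.5):
    """Score each dataset state by (a) similarity to the current state and
    (b) its estimated return, then return the indices of the k best
    candidate target states. Scoring form and weighting are illustrative."""
    # Negative Euclidean distance as a simple similarity measure
    dists = np.linalg.norm(dataset_states - current_state, axis=1)
    similarity = -dists
    # Min-max normalize both terms to [0, 1] before mixing
    sim_n = (similarity - similarity.min()) / (np.ptp(similarity) + 1e-8)
    ret_n = (returns - returns.min()) / (np.ptp(returns) + 1e-8)
    scores = alpha * sim_n + (1 - alpha) * ret_n
    # Indices of the k highest-scoring candidates, best first
    return np.argsort(scores)[-k:][::-1]
```

In this sketch a nearby state with a high estimated return outranks both a closer low-return state and a distant high-return one, which is the trade-off the summary describes.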

📝 Abstract
Offline reinforcement learning (RL) enables agents to learn policies from fixed datasets, avoiding costly or unsafe environment interactions. However, its effectiveness is often limited by dataset sparsity and the lack of transition overlap between suboptimal and expert trajectories, which makes long-horizon planning particularly challenging. Prior solutions based on synthetic data augmentation or trajectory stitching often fail to generalize to novel states and rely on heuristic stitching points. To address these challenges, we propose Retrieval High-quAlity Demonstrations (RAD) for decision-making, which combines non-parametric retrieval with diffusion-based generative modeling. RAD dynamically retrieves high-return states from the offline dataset as target states based on state similarity and return estimation, and plans toward them using a condition-guided diffusion model. Such retrieval-guided generation enables flexible trajectory stitching and improves generalization when encountering underrepresented or out-of-distribution states. Extensive experiments confirm that RAD achieves competitive or superior performance compared to baselines across diverse benchmarks, validating its effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Addresses dataset sparsity in offline reinforcement learning
Improves long-horizon planning with retrieval-guided trajectory stitching
Enhances generalization for underrepresented or novel states
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines non-parametric retrieval with diffusion modeling
Dynamically retrieves high-return states as targets
Uses condition-guided diffusion for trajectory planning
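The condition-guided diffusion planning in the last bullet can be sketched in the style of Diffuser-like inpainting: sample a trajectory segment from noise and re-impose the current state and the retrieved target state after every denoising step. `denoise_step` is a hypothetical learned denoiser; RAD's actual guidance scheme is more involved:

```python
import numpy as np

def plan_segment(denoise_step, start, goal, horizon=16, n_steps=50):
    """Generate an intermediate trajectory segment between `start` and the
    retrieved `goal` state. Endpoint clamping after each reverse-diffusion
    step is an illustrative conditioning mechanism, not RAD's exact one."""
    state_dim = start.shape[0]
    traj = np.random.randn(horizon, state_dim)  # start from pure noise
    for t in reversed(range(n_steps)):
        traj = denoise_step(traj, t)            # one reverse-diffusion step
        traj[0], traj[-1] = start, goal         # re-impose endpoint conditions
    return traj
```

Because the endpoints are clamped on every step, the model only "inpaints" the interior of the segment, which is what makes the stitching between the current state and a retrieved high-return state flexible rather than heuristic.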
Lu Guo (Bytedance/TikTok): Information Science, AI, NLP, computational social science, LLMs
Yixiang Shan (Jilin University)
Zhengbang Zhu (Shanghai Jiao Tong University): Reinforcement Learning, Imitation Learning
Qifan Liang (Jilin University)
Lichang Song (Jilin University)
Ting Long (Jilin University)
Weinan Zhang (Shanghai Jiao Tong University)
Yi Chang (Jilin University)