IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two limitations of sequence-based policies in offline reinforcement learning: constrained by static datasets and architectural shortcomings, they struggle to leverage suboptimal experiences, and they lack explicit planning capabilities. To overcome these challenges, the authors propose the Imaginary Planning Distillation (IPD) framework, which integrates offline planning into data generation, supervised training, and inference. IPD learns a world model with uncertainty quantification and a quasi-optimal value function from the offline data, uses them to identify suboptimal trajectories, and applies model predictive control (MPC) to generate imagined optimal trajectories that augment the training set. A Transformer-based policy is then trained with a value-guided objective, and at inference the learned value function replaces hand-tuned return-to-go targets. On the D4RL benchmark, IPD significantly outperforms existing value-based and Transformer-based offline reinforcement learning methods.
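The planning loop the summary describes — uncertainty-aware world-model rollouts scored by MPC and bootstrapped with a learned value function — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy ensemble dynamics, the disagreement penalty, the `value_fn` stand-in, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble "world model": each member predicts (next_state, reward).
# These stand in for learned networks; the dynamics are made up.
def make_member(bias):
    def step(state, action):
        next_state = state + 0.1 * action + bias
        reward = -float(np.sum(next_state ** 2))  # reward peaks at the origin
        return next_state, reward
    return step

ensemble = [make_member(b) for b in (-0.01, 0.0, 0.01)]

def value_fn(state):
    # Stand-in for the learned quasi-optimal value function.
    return -float(np.sum(state ** 2))

def mpc_plan(state, horizon=5, n_candidates=64, uncertainty_coef=1.0):
    """Random-shooting MPC: score candidate action sequences with the
    ensemble, penalize disagreement (an epistemic-uncertainty proxy),
    bootstrap with the value function, and return the best first action."""
    best_score, best_action = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        score, s = 0.0, state.copy()
        for a in actions:
            preds = [member(s, a) for member in ensemble]
            next_states = np.stack([p[0] for p in preds])
            rewards = np.array([p[1] for p in preds])
            disagreement = float(np.std(next_states, axis=0).sum())
            score += rewards.mean() - uncertainty_coef * disagreement
            s = next_states.mean(axis=0)
        score += value_fn(s)  # value bootstrap beyond the planning horizon
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action

state = np.array([1.0, -1.0])
action = mpc_plan(state)
print(action.shape)  # (2,)
```

In the paper's pipeline, first actions chosen this way would be chained into full imagined rollouts that replace identified suboptimal trajectory segments in the training data; the disagreement penalty keeps the planner inside regions where the world model is trustworthy.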

📝 Abstract
Decision Transformer-based sequential policies have emerged as a powerful paradigm in offline reinforcement learning (RL), yet their efficacy remains constrained by the quality of static datasets and inherent architectural limitations. Specifically, these models often struggle to effectively integrate suboptimal experiences and fail to explicitly plan for an optimal policy. To bridge this gap, we propose Imaginary Planning Distillation (IPD), a novel framework that seamlessly incorporates offline planning into data generation, supervised training, and online inference. Our framework first learns a world model equipped with uncertainty measures and a quasi-optimal value function from the offline data. These components are utilized to identify suboptimal trajectories and augment them with reliable, imagined optimal rollouts generated via Model Predictive Control (MPC). A Transformer-based sequential policy is then trained on this enriched dataset, complemented by a value-guided objective that promotes the distillation of the optimal policy. By replacing the conventional, manually tuned return-to-go with the learned quasi-optimal value function, IPD improves both decision-making stability and performance during inference. Empirical evaluations on the D4RL benchmark demonstrate that IPD significantly outperforms several state-of-the-art value-based and Transformer-based offline RL methods across diverse tasks.
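The inference-time change the abstract highlights — conditioning the sequential policy on the learned quasi-optimal value instead of a manually tuned return-to-go — might look like this in outline. The placeholder `policy`, `value_fn`, and toy environment below are assumptions for illustration, not the paper's models:

```python
import numpy as np

def value_fn(state):
    # Placeholder for the learned quasi-optimal value function V(s).
    return -float(np.sum(state ** 2))

def policy(states, rtgs):
    # Placeholder for the Transformer policy. A real Decision Transformer
    # attends over the full (value-target, state, action) token sequence;
    # this toy policy just contracts the last state toward the origin.
    return -0.2 * states[-1]

def env_step(state, action):
    next_state = state + action
    reward = -float(np.sum(next_state ** 2))
    return next_state, reward

state = np.array([1.0, -0.5])
states, rtgs = [state], [value_fn(state)]  # V(s) replaces a hand-set RTG
total_reward = 0.0
for _ in range(10):
    action = policy(states, rtgs)
    state, reward = env_step(state, action)
    total_reward += reward
    states.append(state)
    rtgs.append(value_fn(state))  # refresh the conditioning target each step
```

The contrast with a standard Decision Transformer is the last line: instead of initializing a return-to-go by hand and decrementing it by observed rewards, the conditioning token is re-queried from the learned value function at every step, which is what the abstract credits for improved decision-making stability.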
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
sequential policy
suboptimal experiences
optimal policy planning
decision transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Imaginary Planning Distillation
Offline Reinforcement Learning
Model Predictive Control
Transformer-based Policy
Value-guided Distillation