A Diffusion-Refined Planner with Reinforcement Learning Priors for Confined-Space Parking

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inaccurate action modeling and low planning success rates in automatic parking within narrow spaces, this paper proposes a diffusion optimization framework integrated with reinforcement learning (RL) priors. Specifically, the method leverages the action distribution generated by a pre-trained RL policy as a prior to guide the denoising process of a diffusion model, enabling progressive refinement of action sequences in an end-to-end co-tuning manner. This work is the first to incorporate RL-derived priors into diffusion models for action sequence refinement, significantly improving modeling accuracy and robustness in extremely confined scenarios. Experiments demonstrate that the proposed method achieves state-of-the-art planning success rates across diverse constrained parking settings, reduces inference steps substantially, and maintains strong generalization performance in standard scenarios—thereby validating its effectiveness and practical applicability.

📝 Abstract
The growing demand for parking has increased the need for automated parking planning methods that can operate reliably in confined spaces. In restricted and complex environments, high-precision maneuvers are required to achieve a high planning success rate, yet existing approaches often rely on explicit action modeling, which struggles to accurately capture the optimal action distribution. In this paper, we propose DRIP, a diffusion-refined planner anchored in a reinforcement learning (RL) prior action distribution, in which an RL-pretrained policy provides prior action distributions to regularize the diffusion training process. During the inference phase, the denoising process refines these coarse priors into more precise action distributions. By steering the denoising trajectory through the RL prior distribution during training, the diffusion model inherits a well-informed initialization, resulting in more accurate action modeling, a higher planning success rate, and fewer inference steps. We evaluate our approach across parking scenarios with varying degrees of spatial constraint. Experimental results demonstrate that our method significantly improves planning performance in confined-space parking environments while maintaining strong generalization in common scenarios.
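The abstract's core idea, starting the reverse (denoising) process from the RL policy's coarse action sequence diffused to an intermediate timestep rather than from pure Gaussian noise, can be sketched as below. This is a minimal, illustrative DDPM-style sketch under stated assumptions, not the paper's implementation: `toy_denoiser` is an oracle stand-in for the learned noise predictor, and all names, the noise schedule, and `t_start` are assumptions for illustration.

```python
import numpy as np

# Toy linear noise schedule over a full diffusion horizon of T steps.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t, target):
    """Oracle stand-in for a learned noise predictor: recovers the exact
    noise separating x_t from a known target sequence (illustration only)."""
    return (x_t - np.sqrt(alpha_bars[t]) * target) / np.sqrt(1.0 - alpha_bars[t])

def refine_from_prior(prior_actions, target, t_start=30, rng=None):
    """DDPM-style reverse process initialized from the RL prior.

    Instead of denoising from pure noise over all T steps, the RL policy's
    action sequence is diffused forward to t_start < T (closed form), then
    denoised back to step 0 — hence far fewer inference steps.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Forward-diffuse the coarse RL prior to timestep t_start.
    noise = rng.standard_normal(prior_actions.shape)
    x = (np.sqrt(alpha_bars[t_start]) * prior_actions
         + np.sqrt(1.0 - alpha_bars[t_start]) * noise)
    # Reverse (denoising) process from t_start down to 0.
    for t in range(t_start, -1, -1):
        eps = toy_denoiser(x, t, target)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # no noise is injected at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

With the oracle denoiser, a coarse prior such as `target + 0.3 * noise` is refined back onto the target action sequence in only `t_start + 1` steps; in the real system the learned noise predictor would play the oracle's role.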
Problem

Research questions and friction points this paper is trying to address.

Developing automated parking planning for confined spaces with high-precision maneuvers
Addressing challenges in modeling optimal action distributions using explicit methods
Improving planning success rates in spatially constrained parking environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines diffusion models with reinforcement learning priors
Refines coarse priors into precise action distributions
Enhances planning accuracy in confined parking spaces
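The training-side contribution, using the RL prior to regularize diffusion training, could take the shape of the hedged sketch below. This is an assumed formulation, not the paper's exact objective: a standard epsilon-prediction loss plus a term pulling the model's reconstructed action sequence toward the RL policy's prior actions, with every name and the weight `lam` illustrative.

```python
import numpy as np

def prior_regularized_loss(eps_pred, eps_true, x0_pred, rl_prior_actions, lam=0.1):
    """Illustrative training objective: DDPM epsilon-prediction loss plus a
    regularizer keeping the denoised action estimate x0_pred close to the
    RL policy's prior action sequence."""
    denoising_loss = np.mean((eps_pred - eps_true) ** 2)
    prior_loss = np.mean((x0_pred - rl_prior_actions) ** 2)
    return denoising_loss + lam * prior_loss
```

The weighting `lam` would trade off fidelity to the diffusion objective against adherence to the RL prior; how the paper balances these terms is not specified in this summary.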
🔎 Similar Papers
2024-03-25 · IEEE/RSJ International Conference on Intelligent Robots and Systems · Citations: 0