🤖 AI Summary
To address inefficient exploration in sampling-based motion planning caused by non-uniform sampling distributions in high-dimensional configuration spaces, this paper proposes an adaptive sampling optimization framework based on Message-Passing Monte Carlo (MPMC). It is the first to jointly integrate Graph Neural Networks (GNNs) with the $L_p$-discrepancy metric to enable learnable, low-discrepancy dynamic sampling distribution modeling—significantly improving spatial coverage uniformity and sample quality. Embedded within mainstream planners such as RRT*, the method reduces required sample counts and computational overhead by 30–50% across diverse high-dimensional tasks, while simultaneously enhancing planning success rates and convergence speed. The core contribution lies in establishing an end-to-end differentiable closed loop comprising sampling, evaluation, and optimization—yielding a novel paradigm for high-dimensional motion planning that offers both theoretical guarantees and practical deployability.
📝 Abstract
Sampling-based motion planning methods, while effective in high-dimensional spaces, often suffer from inefficiencies due to irregular sampling distributions, leading to suboptimal exploration of the configuration space. In this paper, we propose an approach that enhances the efficiency of these methods by utilizing low-discrepancy distributions generated through Message-Passing Monte Carlo (MPMC). MPMC leverages Graph Neural Networks (GNNs) to generate point sets that uniformly cover the space, with uniformity assessed using the the $cL_p$-discrepancy measure, which quantifies the irregularity of sample distributions. By improving the uniformity of the point sets, our approach significantly reduces computational overhead and the number of samples required for solving motion planning problems. Experimental results demonstrate that our method outperforms traditional sampling techniques in terms of planning efficiency.