🤖 AI Summary
Diffusion models for robotic trajectory planning suffer from reliance on expert demonstrations, low data efficiency, and theoretical suboptimality. This paper introduces PegasusFlow: a hierarchical, receding-horizon denoising framework that operates without expert demonstrations, enabling end-to-end trajectory optimization via parallel, environment-interaction-driven sampling of trajectory gradients. Its key contributions are: (1) the Weighted Basis Function Optimization (WBFO) algorithm, which integrates spline-based trajectory parameterization with asynchronous parallel simulation to significantly improve sampling efficiency and convergence speed; and (2) a novel diffusion strategy unifying flow matching and score-based sampling, incorporating an MPPI-inspired replacement mechanism and RL-based warm-start initialization. Evaluated on navigation and obstacle traversal tasks, PegasusFlow achieves a 100% success rate while running 18% faster than the next-best method, and supports large-scale parallel rollouts in complex terrains.
📝 Abstract
Diffusion models offer powerful generative capabilities for robot trajectory planning, yet their practical deployment on robots is hindered by a critical bottleneck: a reliance on imitation learning from expert demonstrations. This paradigm is often impractical for specialized robots where data is scarce and creates an inefficient, theoretically suboptimal training pipeline. To overcome this, we introduce PegasusFlow, a hierarchical rolling-denoising framework that enables direct and parallel sampling of trajectory score gradients from environmental interaction, completely bypassing the need for expert data. Our core innovation is a novel sampling algorithm, Weighted Basis Function Optimization (WBFO), which leverages spline basis representations to achieve superior sample efficiency and faster convergence compared to traditional methods like MPPI. The framework is embedded within a scalable, asynchronous parallel simulation architecture that supports massively parallel rollouts for efficient data collection. Extensive experiments on trajectory optimization and robotic navigation tasks demonstrate that our approach, particularly Action-Value WBFO (AVWBFO) combined with a reinforcement learning warm-start, significantly outperforms baselines. In a challenging barrier-crossing task, our method achieved a 100% success rate and was 18% faster than the next-best method, validating its effectiveness for complex terrain locomotion planning.

Project page: https://masteryip.github.io/pegasusflow.github.io/
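To make the WBFO idea concrete: the abstract describes sampling trajectory perturbations in a spline-coefficient space and combining rollouts with importance weights, in the spirit of MPPI. Below is a minimal, hypothetical sketch of that pattern, not the paper's implementation: the names (`wbfo_step`, `toy_cost`), the Gaussian radial basis used as a stand-in for a spline basis, and the toy tracking cost are all assumptions for illustration.

```python
import numpy as np

def rbf_basis(num_basis, num_steps):
    # Gaussian radial basis functions as a simple stand-in for a spline basis.
    centers = np.linspace(0.0, 1.0, num_basis)
    t = np.linspace(0.0, 1.0, num_steps)
    phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)
    return phi / phi.sum(axis=1, keepdims=True)  # shape: (num_steps, num_basis)

def toy_cost(traj, goal=1.0):
    # Quadratic goal-tracking cost plus a smoothness penalty (placeholder for
    # the environment-interaction rollout cost used in the paper).
    return np.sum((traj - goal) ** 2) + 0.1 * np.sum(np.diff(traj) ** 2)

def wbfo_step(coeffs, phi, rng, num_samples=256, sigma=0.3, temperature=1.0):
    # One MPPI-style update in basis-coefficient space:
    # sample perturbations, score the resulting trajectories,
    # softmax-weight by cost, and take the weighted average step.
    eps = rng.normal(0.0, sigma, size=(num_samples, coeffs.shape[0]))
    costs = np.array([toy_cost(phi @ (coeffs + e)) for e in eps])
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return coeffs + w @ eps

phi = rbf_basis(num_basis=8, num_steps=50)
coeffs = np.zeros(8)          # flat initial trajectory
rng = np.random.default_rng(0)
for _ in range(30):
    coeffs = wbfo_step(coeffs, phi, rng)
final_cost = toy_cost(phi @ coeffs)
```

Optimizing over a handful of basis coefficients rather than per-timestep actions is what the abstract credits for the improved sample efficiency: the search space is low-dimensional and every sampled trajectory is smooth by construction.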