🤖 AI Summary
This work addresses the high computational cost of diffusion models in real-time robotic motion planning, where balancing generation quality and inference speed remains challenging. Methodologically: (1) an image-to-motion mapping network directly predicts an approximate trajectory from input images; (2) an uncertainty-aware anisotropic noise optimizer adapts the noise covariance to the per-element confidence of the predicted motion; (3) single-step diffusion is, for the first time, coupled with explicit motion prediction to enable end-to-end real-time planning. Evaluated on multiple benchmark tasks, the method achieves >30 Hz inference while preserving trajectory smoothness and environmental adaptability, significantly outperforming existing state-of-the-art approaches.
📝 Abstract
This paper proposes an image-based robot motion planning method using a one-step diffusion model. Although diffusion models enable high-quality motion generation, their computational cost is too high for real-time robot control. To achieve quality and efficiency simultaneously, our one-step diffusion model takes as input an approximate motion predicted directly from input images. This approximate motion is refined with additive noise produced by our novel noise optimizer. Unlike standard isotropic noise, our noise optimizer adjusts the noise anisotropically according to the uncertainty of each motion element. Experimental results demonstrate that our method outperforms state-of-the-art methods while maintaining efficiency through one-step diffusion.
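The core mechanism described above (an approximate motion perturbed by per-element anisotropic noise, then refined in a single denoising step) can be sketched roughly as follows. All names, array shapes, and the toy shrinkage "denoiser" here are illustrative assumptions for exposition only, not the paper's actual networks or noise schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trajectory: T waypoints, each with D motion elements
# (e.g., joint angles). Values below are random stand-ins for the
# image-to-motion network's output.
T, D = 16, 7
approx_motion = rng.standard_normal((T, D))        # predicted from images (stand-in)
uncertainty = rng.uniform(0.01, 0.5, size=(T, D))  # per-element noise scale (sigma)

# Anisotropic noise: each motion element gets its own noise scale,
# unlike isotropic noise, which would apply one scalar sigma everywhere.
noise = uncertainty * rng.standard_normal((T, D))
noisy_motion = approx_motion + noise

def one_step_denoise(x, sigma):
    """Toy single-step denoiser: shrink each element in proportion to its
    noise scale. A real system would use a trained diffusion network here."""
    return x / (1.0 + sigma ** 2)

refined_motion = one_step_denoise(noisy_motion, uncertainty)
assert refined_motion.shape == (T, D)
```

The design intuition is that uncertain motion elements receive larger perturbations, giving the one-step denoiser more room to correct them, while confident elements are left nearly untouched.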