🤖 AI Summary
Motion planning for redundant manipulators in high-dimensional dynamic environments (e.g., manufacturing, surgical robotics, and human–robot collaboration) remains challenging: traditional methods generalize poorly, and existing deep learning approaches cannot simultaneously ensure accuracy, efficiency, and physical feasibility. This paper proposes the first diffusion-model-based end-to-end planning framework. It replaces the U-Net with an encoder-only Transformer to explicitly model temporal dependencies in joint-space trajectories, integrates point-cloud perception with explicit physical constraints (dynamics, kinematics, and collision avoidance), and introduces the first large-scale dataset of this kind, comprising 35 million robot poses and 140K obstacle-rich scenes. In complex simulated environments, the method reduces collision rate by 42% and accelerates inference by 3.1× over state-of-the-art baselines, while generating smoother, physically feasible trajectories. The code and dataset are publicly released.
📝 Abstract
Redundant manipulators, with their higher degrees of freedom (DOFs), offer enhanced kinematic performance and versatility, making them suitable for applications such as manufacturing, surgical robotics, and human–robot collaboration. However, motion planning for these manipulators is challenging due to the increased DOFs and complex, dynamic environments. While traditional motion planning algorithms struggle with high-dimensional spaces, deep-learning-based methods often suffer from instability and inefficiency in complex tasks. This paper introduces RobotDiffuse, a diffusion-model-based approach to motion planning for redundant manipulators. By integrating physical constraints with a point cloud encoder and replacing the U-Net structure with an encoder-only Transformer, RobotDiffuse improves the model's ability to capture temporal dependencies and generate smoother, more coherent motion plans. We validate the approach in a complex simulator and release a new dataset with 35M robot poses and 0.14M obstacle-avoidance scenarios. Experimental results demonstrate the effectiveness of RobotDiffuse and the promise of diffusion models for motion planning tasks. The code can be accessed at https://github.com/ACRoboT-buaa/RobotDiffuse.
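To make the architectural idea concrete, the sketch below shows what an encoder-only Transformer denoiser for joint-space trajectories might look like, conditioned on a diffusion timestep and an optional obstacle embedding (e.g., from a point cloud encoder). This is a minimal illustration of the general technique, not the authors' implementation; all names, dimensions, and the conditioning scheme are assumptions.

```python
# Minimal, hypothetical sketch of a diffusion denoiser for joint-space
# trajectories: an encoder-only Transformer predicts the noise added to a
# trajectory, conditioned on the diffusion timestep and an optional
# obstacle embedding. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class TrajectoryDenoiser(nn.Module):
    def __init__(self, n_joints=7, horizon=64, d_model=128,
                 n_layers=4, n_heads=4):
        super().__init__()
        # Project each joint configuration into the model dimension.
        self.in_proj = nn.Linear(n_joints, d_model)
        # Learned positional embedding over the trajectory horizon,
        # letting the encoder model temporal dependencies explicitly.
        self.pos_emb = nn.Parameter(torch.zeros(1, horizon, d_model))
        # Embed the scalar diffusion timestep.
        self.t_emb = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, n_joints)

    def forward(self, noisy_traj, t, obstacle_feat=None):
        # noisy_traj: (B, horizon, n_joints); t: (B,) integer timesteps;
        # obstacle_feat: optional (B, d_model) scene embedding.
        h = self.in_proj(noisy_traj) + self.pos_emb
        h = h + self.t_emb(t.float().unsqueeze(-1)).unsqueeze(1)
        if obstacle_feat is not None:
            h = h + obstacle_feat.unsqueeze(1)
        # Predicted noise has the same shape as the input trajectory.
        return self.out_proj(self.encoder(h))
```

During training, such a model would be fit to predict the Gaussian noise added to clean trajectories at random timesteps; at inference, trajectories are generated by iteratively denoising from random noise, with physical constraints enforced either through the conditioning signal or by guidance terms at each denoising step.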