Inference-stage Adaptation-projection Strategy Adapts Diffusion Policy to Cross-manipulators Scenarios

📅 2025-09-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Diffusion-based policies exhibit limited generalization in robotic manipulation—struggling to transfer across unseen robot arms or novel tasks without costly retraining and data collection. This work proposes a zero-shot, inference-time adaptation method that enables immediate cross-hardware and dynamic-task deployment without retraining. Our approach jointly optimizes differentiable SE(3) trajectory generation and projects the outputs onto kinematic and task-specific constraints via a differentiable projection layer. Crucially, we are the first to embed physics-consistent modeling directly into the diffusion policy’s inference process, using differentiable projection to bridge vision–motor representations with real-world actuator constraints. We validate the method on multiple physical robotic platforms—including diverse manipulators and end-effectors—demonstrating high success rates and robustness across grasping, pushing, and pouring tasks. Results show substantial improvement in cross-platform deployability of diffusion policies, enabling practical real-world adaptation with no additional training.

📝 Abstract
Diffusion policies are powerful visuomotor models for robotic manipulation, yet they often fail to generalize to manipulators or end-effectors unseen during training and struggle to accommodate new task requirements at inference time. Addressing this typically requires costly data recollection and policy retraining for each new hardware or task configuration. To overcome this, we introduce an adaptation-projection strategy that enables a diffusion policy to perform zero-shot adaptation to novel manipulators and dynamic task settings, entirely at inference time and without any retraining. Our method first trains a diffusion policy in SE(3) space using demonstrations from a base manipulator. During online deployment, it projects the policy's generated trajectories to satisfy the kinematic and task-specific constraints imposed by the new hardware and objectives. Moreover, this projection dynamically adapts to physical differences (e.g., tool-center-point offsets, jaw widths) and task requirements (e.g., obstacle heights), ensuring robust and successful execution. We validate our approach on real-world pick-and-place, pushing, and pouring tasks across multiple manipulators, including the Franka Panda and Kuka iiwa 14, equipped with a diverse array of end-effectors like flexible grippers, Robotiq 2F/3F grippers, and various 3D-printed designs. Our results demonstrate consistently high success rates in these cross-manipulator scenarios, proving the effectiveness and practicality of our adaptation-projection strategy. The code will be released after peer review.
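The projection step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' released code: it assumes the policy outputs a trajectory of homogeneous SE(3) poses plus gripper widths, then shifts each pose by the new end-effector's tool-center-point offset, clamps gripper commands to the new jaw limits, and enforces a task-specific minimum height (e.g. an obstacle). The function name and signature are invented for illustration.

```python
import numpy as np

def project_trajectory(poses, grip_widths,
                       tcp_offset=np.zeros(3),
                       jaw_limits=(0.0, 0.085),
                       min_height=None):
    """Hypothetical sketch of an inference-time adaptation-projection step.

    poses:       (T, 4, 4) homogeneous SE(3) end-effector poses generated
                 by the base-manipulator diffusion policy.
    grip_widths: (T,) commanded gripper openings for the base gripper.
    tcp_offset:  tool-center-point offset of the new end-effector,
                 expressed in the tool frame (assumed parameterization).
    jaw_limits:  (min, max) feasible jaw width of the new gripper, meters.
    min_height:  optional task constraint, e.g. clearing an obstacle.
    """
    projected = poses.copy()
    # Shift each pose by the new TCP offset, rotated into the world frame.
    for t in range(len(projected)):
        R = projected[t, :3, :3]
        projected[t, :3, 3] += R @ tcp_offset
    # Clamp gripper commands to the new jaws' feasible range.
    widths = np.clip(grip_widths, jaw_limits[0], jaw_limits[1])
    # Enforce the task-specific height constraint, if given.
    if min_height is not None:
        projected[:, 2, 3] = np.maximum(projected[:, 2, 3], min_height)
    return projected, widths
```

In the paper the projection is described as differentiable and applied inside the diffusion inference loop; the clip/max operations above stand in for that layer only schematically.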
Problem

Research questions and friction points this paper is trying to address.

Diffusion policies fail to generalize to manipulators and end-effectors unseen during training
Each new hardware or task configuration normally demands costly data recollection and retraining
Physical differences (e.g., TCP offsets, jaw widths) and task constraints vary at deployment time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inference-stage adaptation-projection strategy
Zero-shot adaptation to novel manipulators
Dynamic trajectory projection for constraints
Xiangtong Yao
Ph.D. Student, Technische Universität München
Robot Learning · Robotics
Yirui Zhou
Technical University of Munich, Munich, Germany
Yuan Meng
Technical University of Munich, Munich, Germany
Yanwen Liu
Technical University of Munich, Munich, Germany
Liangyu Dong
Technical University of Munich, Munich, Germany
Zitao Zhang
Technical University of Munich, Munich, Germany
Zhenshan Bing
Nanjing University / Technical University of Munich
Robotics
Kai Huang
Sun Yat-sen University, Guangzhou, China
Fuchun Sun
Tsinghua University, Beijing, China
Alois Knoll
Technische Universität München
Robotics · AI · Sensor Data Fusion · Autonomous Driving · Cyber Physical Systems