Should We Learn Contact-Rich Manipulation Policies from Sampling-Based Planners?

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Collecting high-quality, physically plausible demonstration trajectories for dexterous manipulation in contact-rich environments remains challenging, because current teleoperation interfaces make it difficult to acquire consistent, diverse, and kinematically feasible human demonstrations. Method: This paper proposes a model-driven trajectory-generation framework. It first shows that sampling-based planners (e.g., RRT) produce high-entropy, low-consistency demonstrations in contact-rich settings; it then introduces a three-stage pipeline (RRT initialization, MPC-based refinement, and diffusion-model-based resampling) that jointly ensures physical feasibility, consistency, and diversity. On this generated data, it trains a goal-conditioned diffusion behavioral cloning (DBC) policy. Results: The method achieves zero-shot hardware transfer on two challenging contact-rich manipulation tasks, outperforming conventional behavioral cloning and pure planning baselines in success rate, robustness, and generalization, without requiring any real-world demonstration data.
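The summary's central observation, that RRT-style planners yield high-entropy, low-consistency demonstrations, can be made concrete with a toy metric: how much the demonstrated actions disagree across demonstrations started from similar states. A minimal sketch in plain Python; the function name, the state-binning scheme, and the 1-D state/action data are all illustrative, not from the paper:

```python
import math
import random
from collections import defaultdict

def action_consistency(demos, bins=10):
    """Mean per-state-bin standard deviation of actions (lower = more consistent).

    demos: list of trajectories, each a list of (state, action) pairs with
    state in [0, 1). States are coarsely binned; within each bin we measure
    how much the demonstrated actions disagree across trajectories.
    """
    by_bin = defaultdict(list)
    for demo in demos:
        for state, action in demo:
            by_bin[min(int(state * bins), bins - 1)].append(action)
    stds = []
    for actions in by_bin.values():
        if len(actions) > 1:
            mean = sum(actions) / len(actions)
            stds.append(math.sqrt(sum((a - mean) ** 2 for a in actions) / len(actions)))
    return sum(stds) / len(stds) if stds else 0.0

# Consistent demos: every trajectory takes the same action from each state.
consistent = [[(s / 10, 0.5) for s in range(10)] for _ in range(5)]

# "RRT-like" demos: arbitrary actions from the same states (high entropy).
random.seed(0)
scattered = [[(s / 10, random.random()) for s in range(10)] for _ in range(5)]
```

On this toy data the scattered demonstrations score strictly higher than the consistent ones, mirroring the paper's argument that such data is harder for behavior cloning to fit, which motivates the MPC refinement and diffusion resampling stages.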

📝 Abstract
The tremendous success of behavior cloning (BC) in robotic manipulation has been largely confined to tasks where demonstrations can be effectively collected through human teleoperation. However, demonstrations for contact-rich manipulation tasks that require complex coordination of multiple contacts are difficult to collect due to the limitations of current teleoperation interfaces. We investigate how to leverage model-based planning and optimization to generate training data for contact-rich dexterous manipulation tasks. Our analysis reveals that popular sampling-based planners like the rapidly-exploring random tree (RRT), while efficient for motion planning, produce demonstrations with unfavorably high entropy. This motivates modifications to our data generation pipeline that prioritize demonstration consistency while maintaining solution diversity. Combined with a diffusion-based goal-conditioned BC approach, our method enables effective policy learning and zero-shot transfer to hardware for two challenging contact-rich manipulation tasks.
Problem

Research questions and friction points this paper is trying to address.

Generate training data for contact-rich dexterous manipulation tasks
Address high entropy in sampling-based planner demonstrations
Enable zero-shot transfer to hardware for complex tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage model-based planning for data generation
Modify pipeline to prioritize demonstration consistency
Use diffusion-based goal-conditioned BC approach