Learning Pivoting Manipulation with Force and Vision Feedback Using Optimization-based Demonstrations

📅 2025-08-01
🤖 AI Summary
Non-grasping rotational manipulation faces three challenges: modeling complex object–environment–robot contact interactions, the absence of privileged information (e.g., object mass and pose) at deployment, and inefficient simulation-to-reality (Sim2Real) transfer. Method: We propose a demonstration-guided deep reinforcement learning framework: (1) robust initial demonstrations are generated via Contact-Implicit Trajectory Optimization (CITO); (2) closed-loop control integrates proprioceptive, visual, and force-tactile modalities; and (3) privileged information is used exclusively during simulation training, coupled with a lightweight transfer strategy to improve real-world generalization. Contribution/Results: Our approach achieves efficient Sim2Real transfer for pivoting diverse unknown objects and significantly improves sample efficiency. It maintains operational stability and robustness without requiring precise physical modeling, demonstrating strong generalization across unseen objects and environments.
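The demonstration-guided RL step described above can be sketched as a combined objective: a standard policy-gradient term plus a behavior-cloning term that pulls the policy toward the CITO demonstrations. This is a minimal sketch of one common instantiation of the idea (a DAPG-style loss); the function names, the squared-error cloning term, and the weight `beta` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def bc_loss(policy_actions, demo_actions):
    # Behavior-cloning term: mean squared error between the policy's
    # actions and the CITO demonstration actions on demonstration states
    return np.mean((policy_actions - demo_actions) ** 2)

def combined_loss(policy_actions, demo_actions, advantages, log_probs, beta=0.1):
    # RL term: policy-gradient surrogate (advantage-weighted log-probability)
    rl_term = -np.mean(advantages * log_probs)
    # The demonstration term keeps early exploration near feasible contact
    # trajectories; beta is typically annealed toward zero during training
    return rl_term + beta * bc_loss(policy_actions, demo_actions)
```

Annealing `beta` lets the RL term dominate once the learned policy starts to outperform the optimization-based demonstrations.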

📝 Abstract
Non-prehensile manipulation is challenging due to complex contact interactions between objects, the environment, and robots. Model-based approaches can efficiently generate complex trajectories of robots and objects under contact constraints. However, they tend to be sensitive to model inaccuracies and require access to privileged information (e.g., object mass, size, pose), making them less suitable for novel objects. In contrast, learning-based approaches are typically more robust to modeling errors but require large amounts of data. In this paper, we bridge these two approaches to propose a framework for learning closed-loop pivoting manipulation. By leveraging computationally efficient Contact-Implicit Trajectory Optimization (CITO), we design demonstration-guided deep Reinforcement Learning (RL), leading to sample-efficient learning. We also present a sim-to-real transfer approach using a privileged training strategy, enabling the robot to perform pivoting manipulation using only proprioception, vision, and force sensing without access to privileged information. Our method is evaluated on several pivoting tasks, demonstrating that it can successfully perform sim-to-real transfer.
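The privileged training strategy in the abstract is, in spirit, a teacher–student setup: a teacher policy trained in simulation with access to privileged state (object mass, size, pose), and a student policy, restricted to proprioception, vision, and force sensing, fit to imitate it. Below is a minimal linear sketch of that distillation step; the dimensions, the linear policies, and the least-squares fit are all illustrative assumptions, not the paper's actual networks or training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: privileged state (mass, pose, ...) exists only in
# simulation; the student sees proprioception + vision + force features only.
PRIV_DIM, OBS_DIM, ACT_DIM = 8, 16, 4

# Teacher policy (linear here for illustration) conditions on privileged state
W_teacher = 0.1 * rng.normal(size=(ACT_DIM, OBS_DIM + PRIV_DIM))

def teacher_action(obs, priv):
    return W_teacher @ np.concatenate([obs, priv])

def distill_student(dataset):
    # Fit an observation-only student to the teacher's actions by least
    # squares; when the privileged inputs carry real information the student
    # can only approximate the teacher, which is what makes the resulting
    # policy deployable on hardware without privileged sensing
    X = np.stack([obs for obs, _ in dataset])   # (N, OBS_DIM)
    Y = np.stack([act for _, act in dataset])   # (N, ACT_DIM)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves min ||X W - Y||
    return W.T                                  # (ACT_DIM, OBS_DIM)
```

At deployment only the student runs, consuming exactly the sensing modalities the abstract lists: proprioception, vision, and force.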
Problem

Research questions and friction points this paper is trying to address.

Bridging model-based and learning-based approaches for pivoting manipulation
Overcoming sensitivity to model inaccuracies and lack of privileged information
Enabling robust sim-to-real transfer using vision and force feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines model-based and learning-based manipulation approaches
Uses Contact-Implicit Trajectory Optimization for demonstrations
Enables sim-to-real transfer with vision and force sensing