Object Pose Estimation through Dexterous Touch

📅 2025-09-16
🤖 AI Summary
To address the limited robustness of object pose estimation under visually degraded conditions (e.g., poor illumination, occlusion, or textureless surfaces), this paper proposes a bimanual robotic pose estimation method based on active tactile exploration. Using reinforcement learning, the dual-arm system collaboratively executes adaptive tactile interactions to acquire local contact point clouds in real time. Integrating motion priors into an iterative optimization framework, the method jointly estimates the object's shape and full 6-DoF pose without requiring prior geometric knowledge of the object. Key contributions are: (1) a sensor-motion-coupled active exploration strategy that autonomously identifies discriminative contact features; and (2) high-precision pose estimation from tactile input alone, with average translation error below 1.2 cm and rotation error below 3.5°. Experiments demonstrate the method's effectiveness and generalization in completing complex perception tasks in the complete absence of visual feedback.
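The paper does not spell out its iterative optimization here, but a standard building block for aligning a tactile contact point cloud to a model is the Kabsch/SVD least-squares rigid transform, the core step of ICP-style refinement. The sketch below is an illustration of that general idea, not the authors' method; all names and the toy data are hypothetical:

```python
import numpy as np

def estimate_rigid_transform(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto
    target points via the Kabsch/SVD method (one ICP-style alignment step)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical example: transform a toy "contact" cloud and recover the pose.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                  # stand-in for contact points
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
R_est, t_est = estimate_rigid_transform(pts, pts @ R_true.T + t_true)
```

With noise-free correspondences the recovered `R_est, t_est` match the applied transform; in practice, an iterative loop would re-estimate correspondences between touches and repeat.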

📝 Abstract
Robust object pose estimation is essential for manipulation and interaction tasks in robotics, particularly in scenarios where visual data is limited or sensitive to lighting, occlusions, and appearances. Tactile sensors often offer limited and local contact information, making it challenging to reconstruct the pose from partial data. Our approach uses sensorimotor exploration to actively control a robot hand to interact with the object. We train with Reinforcement Learning (RL) to explore and collect tactile data. The collected 3D point clouds are used to iteratively refine the object's shape and pose. In our setup, one hand holds the object steady while the other performs active exploration. We show that our method can actively explore an object's surface to identify critical pose features without prior knowledge of the object's geometry. Supplementary material and more demonstrations will be provided at https://amirshahid.github.io/BimanualTactilePose.
Problem

Research questions and friction points this paper is trying to address.

Estimating object pose using touch when vision is limited
Overcoming partial tactile data for accurate pose reconstruction
Exploring unknown object geometry through active tactile interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for tactile exploration
Refines object pose with 3D point clouds
Employs bimanual robot hand interaction
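The paper learns its exploration policy with RL. For intuition only, a much simpler hand-rolled heuristic, greedy farthest-point selection of the next contact location to maximize surface coverage, can stand in for the policy. Everything below is a hypothetical illustration, not the authors' algorithm:

```python
import numpy as np

def next_contact(candidates, visited):
    """Toy stand-in for a learned exploration policy: pick the candidate
    contact point farthest from all previously touched points, so each
    new touch maximally extends coverage of the (unknown) surface."""
    # pairwise distances: (num_candidates, num_visited)
    d = np.linalg.norm(candidates[:, None, :] - visited[None, :, :], axis=-1)
    nearest = d.min(axis=1)          # distance to closest visited point
    return candidates[np.argmax(nearest)]

# Hypothetical run: candidate contacts at a unit cube's corners, start at one corner.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
visited = corners[:1]                # start at (0, 0, 0)
picks = []
for _ in range(3):
    p = next_contact(corners, visited)
    visited = np.vstack([visited, p])
    picks.append(p)
```

The first pick is the opposite corner (1, 1, 1), the farthest untouched point. An RL policy, by contrast, can learn to target pose-discriminative features (edges, asymmetries) rather than raw coverage.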
👥 Authors

Amir-Hossein Shahidzadeh (University of Maryland, College Park)
Jiyue Zhu (University of California, San Diego)
Kezhou Chen (University of California, San Diego)
Sha Yi (University of California, San Diego)
Cornelia Fermüller (University of Maryland, College Park)
Yiannis Aloimonos (University of Maryland, College Park)
Xiaolong Wang (University of California, San Diego)