🤖 AI Summary
This work addresses the challenge of tactile-based object pose estimation for robotic grasping under visual occlusion by proposing a tactile localization framework that requires neither simulation data nor pretrained models. The method formulates tactile localization as a one-shot point cloud registration problem, leveraging dense tactile point clouds and their surface normals acquired through tactile sensing. By integrating normal-guided graph pruning with a hypothesis-and-verification mechanism, it achieves efficient and robust registration from partial tactile observations to complete object models. Experimental evaluations on the YCB dataset, together with real-world demonstrations using two visuo-tactile sensors, show that the proposed approach outperforms existing methods in generalization, computational efficiency, and pose estimation accuracy.
📝 Abstract
Pose estimation is essential for robotic manipulation, particularly when visual perception is occluded during gripper-object interactions. Existing tactile-based methods generally rely on tactile simulation or pre-trained models, which limits their generalizability and efficiency. In this study, we propose TacLoc, a novel tactile localization framework that formulates the problem as a one-shot point cloud registration task. TacLoc introduces a graph-theoretic partial-to-full registration method, leveraging dense point clouds and surface normals from tactile sensing for efficient and accurate pose estimation. Without requiring rendered data or pre-trained models, TacLoc achieves improved performance through normal-guided graph pruning and a hypothesis-and-verification pipeline. We evaluate TacLoc extensively on the YCB dataset and further demonstrate it on real-world objects with two different visual-tactile sensors.
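To make the registration idea concrete, the sketch below shows one generic way such a normal-guided, hypothesis-and-verification pipeline can be structured; it is not the authors' implementation, and all function names (`prune_and_register`, `kabsch`), thresholds, and the degree-based pruning heuristic are illustrative assumptions. Candidate correspondences between the partial tactile cloud and the full object model are assumed to be given (e.g., from descriptor matching); pairs are kept only if they are mutually consistent in both point-pair distance and normal-pair angle, and a rigid pose is then chosen by sampling transform hypotheses and scoring them by inlier count.

```python
import numpy as np


def prune_and_register(src_pts, src_nrm, tgt_pts, tgt_nrm, corrs,
                       dist_tol=0.005, angle_tol_deg=15.0,
                       n_hypotheses=500, inlier_tol=0.004, seed=0):
    """Illustrative partial-to-full registration (not TacLoc itself).

    corrs: (N, 2) array of candidate (source index, target index) pairs.
    Returns a 4x4 rigid transform mapping the source (tactile) cloud
    onto the target (object model) cloud.
    """
    rng = np.random.default_rng(seed)
    p, n = src_pts[corrs[:, 0]], src_nrm[corrs[:, 0]]
    q, m = tgt_pts[corrs[:, 1]], tgt_nrm[corrs[:, 1]]

    # Compatibility graph: two correspondences are consistent if one rigid
    # transform could explain both, i.e. their point-pair distances match
    # and the angles between their paired surface normals match.
    dp = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    dq = np.linalg.norm(q[:, None] - q[None, :], axis=-1)
    ang_p = np.arccos(np.clip(n @ n.T, -1.0, 1.0))
    ang_q = np.arccos(np.clip(m @ m.T, -1.0, 1.0))
    compat = (np.abs(dp - dq) < dist_tol) & \
             (np.abs(ang_p - ang_q) < np.deg2rad(angle_tol_deg))

    # Normal-guided pruning (simplified): keep correspondences compatible
    # with many others, a cheap proxy for extracting a dense subgraph.
    degree = compat.sum(axis=1)
    keep = degree >= np.percentile(degree, 75)
    p, q = p[keep], q[keep]

    # Hypothesis-and-verification: sample correspondence triplets, fit a
    # rigid transform (Kabsch), and keep the one with the most inliers.
    best_T, best_inliers = np.eye(4), -1
    for _ in range(n_hypotheses):
        idx = rng.choice(len(p), size=3, replace=False)
        R, t = kabsch(p[idx], q[idx])
        residual = np.linalg.norm((p @ R.T + t) - q, axis=1)
        inliers = int((residual < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers = inliers
            best_T = np.eye(4)
            best_T[:3, :3], best_T[:3, 3] = R, t
    return best_T


def kabsch(a, b):
    """Least-squares rigid transform (R, t) mapping points a onto b."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca
```

The key property being exploited is that rigid motions preserve pairwise distances and the angles between normals, so inconsistent candidate matches can be discarded before any pose is hypothesized; the actual graph construction, pruning strategy, and verification criterion used by TacLoc are described in the paper.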