🤖 AI Summary
This work addresses the challenge of jointly estimating object pose and external contact points during dexterous manipulation. We propose an object-centric neural implicit representation that fuses high-resolution tactile sensing with vision. Our method models object geometry as a signed distance field (SDF) and parameterizes distributed tactile shear forces as a neural shear field, enabling explicit 3D spatial registration of contact locations. A multimodal feature-fusion architecture supports zero-shot sim-to-real transfer. Experiments in simulation and on real robotic hardware demonstrate that our approach achieves high-accuracy pose estimation and external contact localization under partial observability and sensory noise, significantly improving robustness and cross-domain generalization for contact-rich dexterous manipulation tasks.
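To make the representation concrete, below is a minimal PyTorch sketch of the two implicit fields named above: an SDF for object geometry and a neural shear field for tactile shear. The architecture (layer widths, output dimensions, and the absence of any latent conditioning code) is an illustrative assumption, not the paper's actual network design.

```python
# Minimal sketch of the two object-centric implicit fields, assuming plain
# coordinate-MLPs. Layer sizes and output dims are illustrative assumptions.
import torch
import torch.nn as nn


class ImplicitField(nn.Module):
    """Small MLP mapping 3D query points to an output vector."""

    def __init__(self, out_dim: int, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Object geometry as a signed distance field: x -> signed distance to surface.
sdf = ImplicitField(out_dim=1)
# Distributed tactile feedback as a neural shear field: x -> shear vector.
# (A 3D output is an assumption; the paper's parameterization may differ.)
shear_field = ImplicitField(out_dim=3)

points = torch.randn(1024, 3)   # query points in the object frame
distances = sdf(points)         # (1024, 1) signed distances
shear = shear_field(points)     # (1024, 3) predicted shear vectors
```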
📝 Abstract
Mastering dexterous, contact-rich object manipulation demands precise estimation of both in-hand object poses and external contact locations, tasks that are particularly challenging due to partial and noisy observations. We present ViTaSCOPE: Visuo-Tactile Simultaneous Contact and Object Pose Estimation, an object-centric neural implicit representation that fuses vision and high-resolution tactile feedback. By representing objects as signed distance fields and distributed tactile feedback as neural shear fields, ViTaSCOPE accurately localizes objects and registers extrinsic contacts onto their 3D geometry as contact fields. Our method enables seamless reasoning over complementary visuo-tactile cues, leveraging simulation for scalable training and transferring zero-shot to the real world by bridging the sim-to-real gap. We evaluate our method through comprehensive simulated and real-world experiments, demonstrating its capabilities in dexterous manipulation scenarios.
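As a rough illustration of registering extrinsic contacts onto the geometry as a contact field, the sketch below scores query points as contacts when they lie near the SDF zero level set and carry a large predicted shear magnitude. The thresholds and the soft scoring rule are hypothetical stand-ins; the paper's actual contact-field formulation is not specified here.

```python
# Hedged sketch: derive a contact field from SDF and shear-field outputs by
# masking near-surface points with high shear. Thresholds are assumptions.
import torch


def contact_field(points: torch.Tensor,
                  sdf_values: torch.Tensor,
                  shear_vectors: torch.Tensor,
                  surface_eps: float = 5e-3,
                  shear_thresh: float = 0.1) -> torch.Tensor:
    """Per-point contact score in [0, 1]: near-surface AND high shear."""
    on_surface = sdf_values.abs().squeeze(-1) < surface_eps   # |SDF| small
    shear_mag = shear_vectors.norm(dim=-1)                    # shear magnitude
    # Soft score via a sigmoid around the shear threshold, masked to surface.
    score = torch.sigmoid((shear_mag - shear_thresh) / shear_thresh)
    return score * on_surface.float()


# Dummy tensors standing in for outputs of the implicit fields sketched above.
points = torch.randn(2048, 3)
sdf_values = 0.01 * torch.randn(2048, 1)    # mostly near the zero level set
shear_vectors = 0.2 * torch.randn(2048, 3)
scores = contact_field(points, sdf_values, shear_vectors)
contact_points = points[scores > 0.5]       # estimated extrinsic contact sites
```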