🤖 AI Summary
This work addresses the challenge of external contact estimation under conditions lacking prior knowledge and camera calibration. The authors propose UNIC, a novel framework that fuses visual, proprioceptive, and tactile modalities, directly encoding observations in the camera coordinate frame, and constructs a unified contact representation based on scene affordance maps. UNIC establishes the first end-to-end, data-driven paradigm for multimodal contact estimation without requiring predefined contact types, fixed grasps, or calibration information. To enhance robustness, the method incorporates random masking during training. Experimental results demonstrate that UNIC achieves a mean Chamfer distance error of 9.6 mm on unseen contact locations and generalizes to novel objects, missing modalities, and dynamic camera viewpoints.
📝 Abstract
Contact-rich manipulation requires reliable estimation of extrinsic contacts, the interactions between a grasped object and its environment, which provide essential contextual information for planning, control, and policy learning. However, existing approaches often rely on restrictive assumptions, such as predefined contact types, fixed grasp configurations, or camera calibration, that hinder generalization to novel objects and deployment in unstructured environments. In this paper, we present UNIC, a unified multimodal framework for extrinsic contact estimation that operates without any prior knowledge or camera calibration. UNIC directly encodes visual observations in the camera frame and integrates them with proprioceptive and tactile modalities in a fully data-driven manner. It introduces a unified contact representation based on scene affordance maps that captures diverse contact formations, and it employs a multimodal fusion mechanism with random masking, enabling robust multimodal representation learning. Extensive experiments demonstrate that UNIC performs reliably: it achieves a 9.6 mm average Chamfer distance error on unseen contact locations, performs well on unseen objects, remains robust under missing modalities, and adapts to dynamic camera viewpoints. These results establish extrinsic contact estimation as a practical and versatile capability for contact-rich manipulation. Overview and hardware experiment videos are available at https://youtu.be/xpMitkxN6Ls?si=7Vgj-aZ_P1wtnWZN
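The random-masking idea mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the per-modality embedding dictionary, and the masking probability are all illustrative assumptions. The core idea is to zero out entire modality embeddings at random during training so the fusion network learns representations that remain useful when a sensor stream is missing at test time.

```python
import random

def mask_modalities(embeddings, p_mask=0.3, rng=random):
    """Randomly drop whole modality embeddings during training.

    embeddings: dict mapping modality name (e.g. "vision", "tactile",
    "proprio") to its embedding vector (a list of floats here, for
    simplicity). Each modality is masked independently with
    probability p_mask, but at least one modality is always kept.
    All names and shapes are hypothetical, not taken from UNIC.
    """
    masked = {}
    for name, emb in embeddings.items():
        if rng.random() < p_mask:
            masked[name] = [0.0] * len(emb)  # simulate a missing sensor
        else:
            masked[name] = emb
    # Guarantee at least one modality survives the masking pass.
    if all(all(v == 0.0 for v in e) for e in masked.values()):
        keep = rng.choice(list(embeddings))
        masked[keep] = embeddings[keep]
    return masked
```

In practice this kind of masking is applied per training batch before fusion; at inference, a genuinely missing modality is simply fed in as the same zero vector the network saw during training.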