🤖 AI Summary
To address the challenge of unified perception across heterogeneous visuo-tactile sensors, whose poorly standardized designs produce distinct data characteristics, this paper proposes a static-dynamic joint modeling framework for cross-sensor unified representation, integrating pixel-level texture from tactile images with frame-level pressure dynamics from tactile videos to enable semantic-level, sensor-agnostic feature learning. Key contributions include: (1) the first paradigm for unified multi-sensor tactile representation learning; (2) TacQuad, an aligned multi-modal, multi-sensor tactile dataset collected from four different visuo-tactile sensors; and (3) AnyTouch, a multi-level architecture that combines masked modeling, multi-modal alignment, and cross-sensor matching to jointly encode tactile images and videos. Experiments demonstrate significant improvements over state-of-the-art methods on multiple benchmarks and a real-world pouring task, covering both static object recognition and dynamic motion understanding while enabling cross-sensor transfer.
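As a rough illustration of what cross-sensor transfer through a shared embedding space looks like in practice, the sketch below encodes data from one sensor with a sensor-agnostic encoder and reuses a classifier fitted on another sensor's embeddings. `UnifiedTactileEncoder`, the tensor shapes, and the classifier are placeholder assumptions for illustration, not code or APIs from the paper.

```python
import torch
import torch.nn.functional as F

class UnifiedTactileEncoder(torch.nn.Module):
    """Stand-in for a sensor-agnostic encoder mapping tactile frames to a shared embedding space."""
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.LazyLinear(dim),
            torch.nn.ReLU(),
            torch.nn.Linear(dim, dim),
        )

    def forward(self, frames):
        # frames: (B, C, H, W) tactile images from any supported sensor
        return F.normalize(self.backbone(frames), dim=-1)

encoder = UnifiedTactileEncoder().eval()
# Assume this linear head was trained on embeddings of tactile data from sensor A.
classifier = torch.nn.Linear(256, 10)

with torch.no_grad():
    # Tactile frames captured by a different sensor (sensor B), never seen by the head.
    sensor_b_frames = torch.rand(8, 3, 224, 224)
    emb = encoder(sensor_b_frames)            # same embedding space as sensor A
    preds = classifier(emb).argmax(dim=-1)    # cross-sensor inference without retraining
```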
📝 Abstract
Visuo-tactile sensors aim to emulate human tactile perception, enabling robots to precisely understand and manipulate objects. Over time, numerous meticulously designed visuo-tactile sensors have been integrated into robotic systems, helping them complete various tasks. However, the distinct data characteristics of these poorly standardized visuo-tactile sensors hinder the establishment of a powerful tactile perception system. We argue that the key to addressing this issue lies in learning unified multi-sensor representations, thereby integrating the sensors and promoting tactile knowledge transfer between them. To achieve unified representations of this kind, we introduce TacQuad, an aligned multi-modal, multi-sensor tactile dataset collected from four different visuo-tactile sensors, which enables the explicit integration of these sensors. Recognizing that humans perceive the physical environment by acquiring diverse tactile information, such as texture and pressure changes, we further propose learning unified multi-sensor representations from both static and dynamic perspectives. By integrating tactile images and videos, we present AnyTouch, a unified static-dynamic multi-sensor representation learning framework with a multi-level structure, aimed at both enhancing comprehensive perceptual abilities and enabling effective cross-sensor transfer. This multi-level architecture captures pixel-level details from tactile data via masked modeling, and enhances perception and transferability by learning semantic-level sensor-agnostic features through multi-modal alignment and cross-sensor matching. We provide a comprehensive analysis of multi-sensor transferability and validate our method on various datasets and in a real-world pouring task. Experimental results show that our method outperforms existing methods and exhibits outstanding static and dynamic perception capabilities across various sensors.
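To make the multi-level structure described above more concrete, the sketch below combines the three kinds of objectives the abstract mentions: pixel-level masked reconstruction, semantic-level multi-modal alignment, and cross-sensor matching. The loss weights, temperature, and exact formulations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def multi_level_loss(decoder_out, target_pixels, mask,
                     tactile_emb, text_emb,
                     match_logits, match_labels,
                     w_rec=1.0, w_align=1.0, w_match=1.0, temperature=0.07):
    """Hypothetical combination of pixel-level and semantic-level objectives."""
    # 1) Pixel level: masked modeling reconstructs the masked tactile patches.
    rec_loss = F.mse_loss(decoder_out[mask], target_pixels[mask])

    # 2) Semantic level: CLIP-style contrastive alignment between tactile
    #    embeddings and paired text (or visual) embeddings over the batch.
    t = F.normalize(tactile_emb, dim=-1)
    x = F.normalize(text_emb, dim=-1)
    logits = t @ x.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    align_loss = (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels)) / 2

    # 3) Cross-sensor matching: binary prediction of whether two tactile inputs
    #    captured by different sensors correspond to the same contact.
    match_loss = F.binary_cross_entropy_with_logits(match_logits, match_labels.float())

    return w_rec * rec_loss + w_align * align_loss + w_match * match_loss
```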