MuxGel: Simultaneous Dual-Modal Visuo-Tactile Sensing via Spatially Multiplexing and Deep Reconstruction

📅 2026-03-10

🤖 AI Summary
This work addresses a critical limitation of existing visuo-tactile sensors, whose opaque coatings obstruct external visual perception prior to contact. To overcome this, the authors propose a spatially multiplexed coating design based on a checkerboard pattern, enabling a single camera to simultaneously capture unobstructed visual data and high-fidelity tactile signals. A U-Net architecture, combined with simulation-to-reality transfer learning, is employed to reconstruct high-resolution bimodal information. Notably, this approach achieves plug-and-play, synchronized visuo-tactile sensing within a standard form factor without requiring modifications to existing GelSight systems. Experimental results demonstrate high-accuracy bimodal reconstruction on previously unseen objects, significantly enhancing pre-contact alignment and contact interaction performance in robotic grasping tasks.

📝 Abstract
High-fidelity visuo-tactile sensing is important for precise robotic manipulation. However, most vision-based tactile sensors face a fundamental trade-off: opaque coatings enable tactile sensing but block pre-contact vision. To address this, we propose MuxGel, a spatially multiplexed sensor that captures both external visual information and contact-induced tactile signals through a single camera. By using a checkerboard coating pattern, MuxGel interleaves tactile-sensitive regions with transparent windows for external vision. This design maintains standard form factors, allowing for plug-and-play integration into GelSight-style sensors by simply replacing the gel pad. To recover full-resolution vision and tactile signals from the multiplexed inputs, we develop a U-Net-based reconstruction framework. Leveraging a sim-to-real pipeline, our model effectively decouples and restores high-fidelity tactile and visual fields simultaneously. Experiments on unseen objects demonstrate the framework's generalization and accuracy. Furthermore, we demonstrate MuxGel's utility in grasping tasks, where dual-modality feedback facilitates both pre-contact alignment and post-contact interaction. Results show that MuxGel enhances the perceptual capabilities of existing vision-based tactile sensors while maintaining compatibility with their hardware stacks. Project webpage: https://zhixianhu.github.io/muxgel/.
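The core idea above, interleaving opaque tactile patches with transparent vision windows in a checkerboard layout so one camera sees both modalities, can be illustrated with a minimal NumPy sketch. This is not the authors' code: the block size and the zero-filled holes are assumptions for illustration; in MuxGel the missing pixels of each modality are filled in by the learned U-Net reconstruction, which is not reproduced here.

```python
import numpy as np

def checkerboard_mask(h, w, block=8):
    """Binary mask: True where the (hypothetical) opaque tactile
    coating sits, False at the transparent vision windows."""
    ys, xs = np.indices((h, w))
    return ((ys // block + xs // block) % 2) == 0

def demultiplex(frame, block=8):
    """Split one camera frame into sparse tactile / vision channels.
    Pixels belonging to the other modality are zeroed out; in the
    paper these holes are restored by a U-Net, not modeled here."""
    h, w = frame.shape[:2]
    mask = checkerboard_mask(h, w, block)
    tactile = np.where(mask[..., None], frame, 0)
    vision = np.where(mask[..., None], 0, frame)
    return tactile, vision

frame = np.random.rand(64, 64, 3)
tactile, vision = demultiplex(frame)
# The two sparse channels partition the original frame exactly.
assert np.allclose(tactile + vision, frame)
```

Because the checkerboard pattern is fixed by the gel pad's coating, the mask is known at design time, which is what makes the demultiplexing (and the sim-to-real training of the reconstruction network) well posed.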
Problem

Research questions and friction points this paper is trying to address.

visuo-tactile sensing
vision-based tactile sensors
pre-contact vision
tactile sensing
dual-modal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial multiplexing
visuo-tactile sensing
dual-modal reconstruction
U-Net
sim-to-real
Zhixian Hu
Purdue University

Zhengtong Xu
PhD candidate at Purdue University
Robot Learning

Sheeraz Athar
Edwardson School of Industrial Engineering, Purdue University, West Lafayette, IN, USA

Juan Wachs
Edwardson School of Industrial Engineering, Purdue University, West Lafayette, IN, USA

Yu She
Assistant Professor, Purdue University
Robotic Manipulation · Mechanism Design · Tactile Sensing · Robot Learning