MagicSkin: Balancing Marker and Markerless Modes in Vision-Based Tactile Sensors with a Translucent Skin

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual-tactile sensors face a fundamental trade-off between marker configurations, which enable high-precision normal force and displacement measurement but obscure surface geometry, and markerless configurations, which preserve surface texture but track tangential displacement poorly. This work introduces MagicSkin, a novel tactile skin with semi-transparent chromatic markers that, for the first time, incorporates controllable optical transmittance into marker design. Without additional hardware or computational overhead, MagicSkin simultaneously achieves high-fidelity perception of normal pressure, in-plane (tangential) displacement, and surface geometric detail. Compatible with GelSight-style architectures, it relies solely on standard optical imaging and classical image processing. Experiments demonstrate substantial improvements over conventional approaches: 99.17% object classification accuracy, 93.51% texture recognition rate, 97% point retention in displacement tracking, and a 66% reduction in total force prediction error, validating the feasibility and superiority of multimodal tactile sensing.

📝 Abstract
Vision-based tactile sensors (VBTS) face a fundamental trade-off between marker and markerless designs of the tactile skin: opaque ink markers enable measurement of force and tangential displacement but completely occlude the geometric features needed for object and texture classification, while a markerless skin preserves surface details but struggles to measure tangential displacements effectively. Current attempts to resolve this problem via UV lighting or virtual marker transfer with learning-based models introduce hardware complexity or computational burden. This paper introduces MagicSkin, a novel tactile skin with translucent, tinted markers that balances the marker and markerless modes for VBTS. It enables simultaneous tangential displacement tracking, force prediction, and surface detail preservation, and it plugs into GelSight-family sensors without requiring additional hardware or software tools. We comprehensively evaluate MagicSkin in downstream tasks. The translucent markers enhance rather than degrade sensing performance compared with traditional markerless and inked-marker designs, achieving the best performance in object classification (99.17%), texture classification (93.51%), tangential displacement tracking (97% point retention), and force prediction (66% improvement in total force error). These experimental results demonstrate that the translucent skin eliminates the traditional performance trade-off between marker and markerless modes, paving the way for the multimodal tactile sensing essential to tactile robotics. Videos: https://zhuochenn.github.io/MagicSkin_project/
Problem

Research questions and friction points this paper is trying to address.

Marker and markerless skins involve a trade-off: markers aid displacement and force measurement but occlude surface geometry
Markerless skins preserve surface detail but track tangential displacement poorly
Existing workarounds (UV lighting, learned virtual marker transfer) add hardware or computational complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Translucent tinted markers balance marker and markerless modes
Enables simultaneous displacement tracking and surface detail preservation
Plug-and-play design without extra hardware or software tools
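The paper states that displacement tracking uses only standard optical imaging and classical image processing. The authors do not publish their pipeline here; as a hedged illustration of the kind of classical marker tracking used in GelSight-style sensors, the sketch below matches each reference marker centroid to its nearest neighbour in the deformed frame and reports per-marker tangential displacement. The function name and threshold are hypothetical, and centroid extraction (e.g. thresholding plus blob detection) is assumed to have happened upstream.

```python
# Minimal sketch of classical marker-displacement tracking (not the authors'
# implementation). Inputs are marker centroids (x, y) extracted from a
# reference frame and a deformed frame; output is the (dx, dy) flow per
# marker, or None where no match falls within max_dist pixels.

def track_displacements(ref, cur, max_dist=10.0):
    """Match each reference centroid to the nearest current centroid."""
    flows = []
    for rx, ry in ref:
        best, best_d2 = None, max_dist ** 2
        for cx, cy in cur:
            d2 = (cx - rx) ** 2 + (cy - ry) ** 2
            if d2 <= best_d2:
                best, best_d2 = (cx - rx, cy - ry), d2
        flows.append(best)
    return flows

# Example: three markers, the middle one sheared 2 px to the right.
ref = [(10.0, 10.0), (20.0, 10.0), (30.0, 10.0)]
cur = [(10.0, 10.0), (22.0, 10.0), (30.0, 10.0)]
print(track_displacements(ref, cur))
# → [(0.0, 0.0), (2.0, 0.0), (0.0, 0.0)]
```

The paper's "97% point retention" metric corresponds to the fraction of markers that remain matchable under contact; a translucent marker that stays visible under deformation keeps this fraction high while leaving the underlying texture observable.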
Oluwatimilehin Tijani
Robot Perception Lab, Centre for Robotics Research, Department of Engineering, King’s College London, London WC2R 2LS, United Kingdom
Zhuo Chen
Robot Perception Lab, Centre for Robotics Research, Department of Engineering, King’s College London, London WC2R 2LS, United Kingdom
Jiankang Deng
Imperial College London
Computer Vision · Machine Learning
Shan Luo
Reader (Associate Professor), King's College London
Robotics · Robot Perception · Tactile Sensing · Computer Vision · Machine Learning