🤖 AI Summary
Existing vision-based tactile sensors face bottlenecks in sensitivity, fine-grained force resolution, and computational efficiency. This paper introduces a novel high-sensitivity vision-tactile sensing paradigm: micro/nano-fabricated trench structures modulate light propagation to convert mechanical deformation into high-contrast optical features—namely, changes in brightness, wire width, and cross-pattern location—eliminating reliance on conventional marker-based tracking. A lightweight CNN with a single convolutional layer enables real-time, decoupled estimation of contact location, displacement, and applied force. The co-designed "microstructure–optics–algorithm" framework achieves detection of forces below 10 mN, millimetre-level single-point spatial resolution, and displacement estimation with MAE below 0.05 mm. It significantly reduces computational demand while offering inherent immunity to electrical crosstalk and interference, making it well-suited for soft robotics and wearable systems.
📝 Abstract
Tactile sensing is critical to advanced interactive systems, emulating the human sense of touch to detect stimuli. Vision-based tactile sensors (VBTSs) are promising for their ability to provide rich information, robustness, adaptability, low cost, and multimodal capabilities. However, current technologies still have limitations in sensitivity, spatial resolution, and the high computational demands of deep learning-based image processing. This paper presents a comprehensive approach combining a novel sensor structure with micromachined features and an efficient image processing method, and demonstrates that carefully engineered microstructures within the sensor hardware can significantly enhance sensitivity while reducing computational load. Unlike traditional designs with tracking markers, our sensor incorporates an interface surface with micromachined trenches, as an example of such microstructures, which modulate light transmission and amplify the optical variation in response to applied force. By capturing variations in brightness, wire width, and cross-pattern locations with a camera, the sensor accurately infers the contact location, the magnitude of displacement, and the applied force with a lightweight convolutional neural network (CNN). Theoretical and experimental results demonstrated that the microstructures significantly enhance sensitivity by amplifying the visual effects of shape distortion. The sensor system effectively detected forces below 10 mN and achieved millimetre-level single-point spatial resolution. Using a model with only one convolutional layer, a mean absolute error (MAE) below 0.05 mm was achieved. Its soft sensor body ensures compatibility with soft robots and wearable electronics, while its immunity to electrical crosstalk and interference guarantees reliability in complex human-machine environments.
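The abstract describes a single-convolutional-layer CNN that maps camera frames of the trench pattern to contact location, displacement, and force. The paper does not give the architecture details, so the sketch below is a hypothetical minimal version in plain NumPy: one convolution, ReLU, global average pooling, and a linear head with four outputs (contact x, contact y, displacement, force). Kernel count, sizes, and the random weights are illustrative assumptions, not the authors' model.

```python
import numpy as np

def conv2d(img, kernels, stride=1):
    """Valid 2-D convolution of a grayscale frame with a bank of kernels.

    img:     (H, W) array, e.g. a camera frame of the trench pattern
    kernels: (K, kh, kw) filter bank
    returns: (K, H', W') feature maps
    """
    K, kh, kw = kernels.shape
    H, W = img.shape
    Ho = (H - kh) // stride + 1
    Wo = (W - kw) // stride + 1
    out = np.zeros((K, Ho, Wo))
    for k in range(K):
        for i in range(Ho):
            for j in range(Wo):
                patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[k, i, j] = np.sum(patch * kernels[k])
    return out

rng = np.random.default_rng(0)
frame = rng.random((32, 32))                  # toy grayscale frame (real frames are larger)
kernels = rng.standard_normal((8, 5, 5)) * 0.1   # assumed: 8 learned 5x5 filters
W_head = rng.standard_normal((4, 8)) * 0.1       # linear head: 4 outputs from 8 pooled features

feat = np.maximum(conv2d(frame, kernels), 0.0)   # single conv layer + ReLU
pooled = feat.mean(axis=(1, 2))                  # global average pooling -> (8,)
pred = W_head @ pooled                           # (contact_x, contact_y, displacement, force)
print(pred.shape)
```

In a trained version, the filters would respond to the high-contrast cues the paper names (brightness shifts, wire-width changes, cross-pattern motion), which is what keeps a one-layer model sufficient and the computational load low.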