🤖 AI Summary
Existing vision-based tactile sensors suffer from sparse marker arrays, low spatial resolution, and ambiguous force-to-image mappings. This paper introduces MoiréTac, a dual-mode tactile sensor that leverages micro-grating superposition to generate moiré interference patterns; the moiré amplification effect transduces microscopic deformations into high-density, resolvable visual signals, establishing a clear physical force-to-image mapping. We integrate interpretable physical features (intensity, phase gradient, orientation, and period) with deep spatial representations to enable explainable end-to-end regression of six-axis contact forces/torques, sub-millimeter contact localization, and robust object recognition. Experiments demonstrate R² > 0.98 for force regression, geometrically tunable sensitivity (≥3× gain adjustment), and sustained classification accuracy despite strong moiré interference. MoiréTac also enables a robotic arm to perform a screw-cap manipulation task.
📝 Abstract
Visuotactile sensors typically employ sparse marker arrays that limit spatial resolution and lack clear analytical force-to-image relationships. To address this, we present **MoiréTac**, a dual-mode sensor that generates dense interference patterns via overlapping micro-gratings within a transparent architecture. When two gratings overlap with slight misalignment, they create moiré patterns that amplify microscopic deformations. The design preserves optical clarity for vision tasks while producing continuous moiré fields for tactile sensing, enabling simultaneous 6-axis force/torque measurement, contact localization, and visual perception. We combine physics-based features of the moiré patterns (brightness, phase gradient, orientation, and period) with deep spatial features, and map the fused representation to 6-axis force/torque estimates through end-to-end learning, yielding interpretable regression. Experimental results demonstrate three capabilities: force/torque measurement with R² > 0.98 across tested axes; sensitivity tuning through geometric parameters (threefold gain adjustment); and retained vision functionality for object classification despite the moiré overlay. Finally, we integrate the sensor into a robotic arm for cap removal with coordinated force and torque control, validating its potential for dexterous manipulation.
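The amplification the abstract describes follows from classical moiré geometry: two line gratings of pitches p₁ and p₂ rotated by a small angle θ produce fringes with period p₁p₂ / √(p₁² + p₂² − 2p₁p₂cos θ), which for identical gratings reduces to p / (2 sin(θ/2)) ≈ p/θ. A minimal sketch of this standard relationship (the function names are ours, not the paper's; the paper's actual sensitivity model may differ):

```python
import math

def moire_period(p1: float, p2: float, theta_rad: float) -> float:
    """Classical moire fringe period for two line gratings with pitches
    p1, p2 and relative rotation theta (radians)."""
    return (p1 * p2) / math.sqrt(
        p1**2 + p2**2 - 2.0 * p1 * p2 * math.cos(theta_rad)
    )

def amplification(p: float, theta_rad: float) -> float:
    """Geometric gain for two identical gratings: fringe period over
    grating pitch, equal to 1 / (2 sin(theta/2)) ~ 1/theta for small angles."""
    return moire_period(p, p, theta_rad) / p

# Example: identical 10-micron gratings misaligned by 2 degrees.
# The fringes are roughly 29x coarser than the grating pitch, so a
# sub-micron grating shift appears as a tens-of-microns fringe shift.
print(round(amplification(10e-6, math.radians(2.0)), 1))
```

This 1/θ scaling is one plausible reading of the paper's "geometrically tunable sensitivity": shrinking the misalignment angle (or the pitch mismatch) raises the optical gain of the deformation-to-fringe mapping.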