🤖 AI Summary
Existing robots lack a low-cost, easily manufacturable, and highly customizable general-purpose tactile sensor for unstructured environments, leaving manipulation systems with inadequate force perception and a reliance on fragmented, sensor-specific, or force-unaware solutions. Method: We propose a modular magnetic tactile sensor built from tiled, parameterized microstructure arrays, enabling concurrent tuning of sensor geometry and mechanical response, 3D-printed encapsulation of arbitrary convex shapes, and task-adaptive sensitivity configuration. We open-source CAD-to-eFlesh, a tool that automatically converts CAD models into 3D-printable soft tactile skins, and we integrate a learned slip detection model with visuotactile control policies. Results: Experiments demonstrate a contact localization RMSE of 0.5 mm, normal and shear force prediction RMSEs of 0.27 N and 0.12 N, slip detection accuracy of 95% on unseen objects, and a 91% visuotactile manipulation success rate -- a 40% improvement over vision-only baselines.
📝 Abstract
If human experience is any guide, operating effectively in unstructured environments -- like homes and offices -- requires robots to sense the forces during physical interaction. Yet, the lack of a versatile, accessible, and easily customizable tactile sensor has led to fragmented, sensor-specific solutions in robotic manipulation -- and in many cases, to force-unaware, sensorless approaches. With eFlesh, we bridge this gap by introducing a magnetic tactile sensor that is low-cost, easy to fabricate, and highly customizable. Building an eFlesh sensor requires only four components: a hobbyist 3D printer, off-the-shelf magnets (<$5), a CAD model of the desired shape, and a magnetometer circuit board. The sensor is constructed from tiled, parameterized microstructures, which allow for tuning the sensor's geometry and its mechanical response. We provide an open-source design tool that converts convex OBJ/STL files into 3D-printable STLs for fabrication. This modular design framework enables users to create application-specific sensors, and to adjust sensitivity depending on the task. Our sensor characterization experiments demonstrate the capabilities of eFlesh: contact localization RMSE of 0.5 mm, and force prediction RMSE of 0.27 N for normal force and 0.12 N for shear force. We also present a learned slip detection model that generalizes to unseen objects with 95% accuracy, and visuotactile control policies that improve manipulation performance by 40% over vision-only baselines -- achieving a 91% average success rate on four precise tasks that require sub-mm accuracy for successful completion. All design files, code, and the CAD-to-eFlesh STL conversion tool are open-sourced and available at https://e-flesh.com.
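For readers unfamiliar with the error metric quoted throughout the results, the RMSE figures above follow the standard definition: the square root of the mean squared difference between predicted and ground-truth values. A minimal sketch (illustrative only, not the authors' evaluation code; the force readings below are made-up numbers):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predictions and ground truth."""
    assert len(predicted) == len(actual)
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical normal-force predictions vs. ground truth, in newtons.
pred = [1.0, 2.5, 0.8, 3.1]
true = [1.2, 2.3, 1.0, 3.0]
print(round(rmse(pred, true), 3))  # → 0.18
```

By this metric, the reported 0.27 N normal-force RMSE means predictions deviate from ground truth by roughly a quarter newton on average, with larger errors penalized quadratically.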