🤖 AI Summary
Current dexterous multi-fingered robotic hands generally lack thermal and dynamic torque sensing, limiting environmental understanding and safe physical interaction in complex manipulation tasks. This paper introduces MOTIF, a multimodal robotic hand built on the LEAP platform that uniquely integrates a thermal imager, an IMU, a high-density tactile array, a depth camera, and an RGB visual sensor. We further propose a multimodal data fusion algorithm and a temperature-guided grasp planning method. Our approach achieves synchronized five-dimensional perception (thermal, force, inertial, geometric, and visual) on a cost-effective platform. This enables temperature-enhanced 3D reconstruction, robust identification of visually similar yet physically distinct objects (e.g., differing in mass or material), and adaptive, safety-aware grasping of thermally sensitive regions. Experimental results demonstrate significant improvements in perceptual robustness and manipulation intelligence across challenging real-world scenarios.
📝 Abstract
Advancing dexterous manipulation with multi-fingered robotic hands requires rich sensory capabilities, yet existing designs lack onboard thermal and torque sensing. In this work, we propose the MOTIF hand, a novel multimodal and versatile robotic hand that extends the LEAP hand by integrating: (i) dense tactile sensing across the fingers, (ii) a depth sensor, (iii) a thermal camera, (iv) IMU sensors, and (v) a visual sensor. The MOTIF hand is designed to be relatively low-cost (under 4,000 USD) and easily reproducible. We validate our hand design through experiments that leverage its multimodal sensing in two representative tasks. First, we integrate thermal sensing into 3D reconstruction to guide temperature-aware, safe grasping. Second, we show that our hand can distinguish objects with identical appearance but different masses, a capability beyond vision-only methods.
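To make the temperature-aware grasping idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm): given a point cloud with per-point temperatures (e.g., from projecting the thermal image onto the 3D reconstruction), candidate grasp points are rejected if they lie within a safety margin of any point hotter than a threshold. The function name, data layout, and default values are all assumptions for illustration.

```python
import numpy as np

def filter_grasps_by_temperature(points, temps, candidate_idx,
                                 t_safe=45.0, margin=0.02):
    """Reject grasp candidates near thermally unsafe regions.

    points        : (N, 3) array, point cloud in meters
    temps         : (N,) array, per-point temperatures in degrees C
    candidate_idx : indices of candidate grasp points
    t_safe        : temperature above which a point is considered unsafe
    margin        : minimum allowed distance (m) to any unsafe point
    """
    points = np.asarray(points, dtype=float)
    temps = np.asarray(temps, dtype=float)
    hot = points[temps > t_safe]  # thermally unsafe points
    safe = []
    for i in candidate_idx:
        # Keep the candidate if there are no hot points, or if the
        # nearest hot point is farther away than the safety margin.
        if hot.size == 0 or np.min(np.linalg.norm(hot - points[i], axis=1)) > margin:
            safe.append(i)
    return safe
```

For example, with three collinear points where only the first is hot, only the two cooler points a safe distance away survive as grasp candidates.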