🤖 AI Summary
This work addresses the challenge of accurately and reliably recognizing human hand activity intentions in real time during human-robot collaboration. To this end, the authors propose a distributed multimodal perception architecture that fuses data from a wearable inertial measurement unit (IMU)-based data glove and vision-based tactile sensors to enable high-precision, real-time identification of dynamic hand activities during physical human-robot interaction. The system employs a modular design combined with a real-time sequential classification algorithm, achieving consistently high recognition accuracy across offline evaluation, static real-time testing, and realistic collaborative scenarios. Experimental results validate the effectiveness and practicality of the proposed approach, demonstrating its potential to serve as a robust perceptual foundation for dynamic human-robot collaboration.
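To make the described fusion-and-classification pipeline concrete, below is a minimal sketch, not the authors' implementation: it assumes feature-level (late) fusion of windowed IMU data-glove signals and tactile-sensor signals, with an off-the-shelf classifier standing in for the paper's real-time sequential classification algorithm. The window length, channel counts, features, and classifier choice are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): feature-level fusion of
# IMU data-glove windows and vision-based tactile-sensor windows for
# hand-activity classification. Window length, channel counts, and the
# classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 50  # samples per sliding window (assumed)

def window_features(stream: np.ndarray) -> np.ndarray:
    """Per-channel mean and std over one window of shape (WINDOW, channels)."""
    return np.concatenate([stream.mean(axis=0), stream.std(axis=0)])

def fuse(imu_win: np.ndarray, tactile_win: np.ndarray) -> np.ndarray:
    """Fuse modalities by concatenating their per-window feature vectors."""
    return np.concatenate([window_features(imu_win), window_features(tactile_win)])

# Synthetic stand-in data: 200 windows, 3 activity classes,
# 6 IMU channels and 16 tactile channels per sample (assumed sizes).
rng = np.random.default_rng(0)
X = np.stack([
    fuse(rng.normal(size=(WINDOW, 6)), rng.normal(size=(WINDOW, 16)))
    for _ in range(200)
])
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Real-time use would slide the window over the live glove and tactile
# streams and call clf.predict on each newly fused feature vector.
new_window = fuse(rng.normal(size=(WINDOW, 6)), rng.normal(size=(WINDOW, 16)))
print("predicted activity:", clf.predict(new_window[None, :])[0])
```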
📝 Abstract
Human activity recognition (HAR) is fundamental in human-robot collaboration (HRC), enabling robots to respond and dynamically adapt to human intentions. This paper introduces a HAR system combining a modular data glove equipped with inertial measurement units (IMUs) and a vision-based tactile sensor to capture hand activities in contact with a robot. We tested our activity recognition approach under different conditions, including offline classification of segmented sequences, real-time classification under static conditions, and a realistic HRC scenario. The experimental results show high accuracy across all tasks, suggesting that multiple collaborative settings could benefit from this multimodal approach.