🤖 AI Summary
To address the limited naturalness and robustness of remote teleoperation for mobile manipulators in smart-home assistive scenarios for elderly and disabled users, this paper proposes a remote collaborative control system based on multimodal wearable sensing. We design a lightweight forearm-worn device integrating MEMS capacitive microphones, IMUs, vibrotactile actuators, and pressure sensors to enable six-class gesture-force co-recognition. A CNN-LSTM temporal model, combined with synchronous cross-modal (tactile/inertial/acoustic) fusion, enables real-time closed-loop motion control. Experimental results demonstrate offline and online gesture recognition accuracies of 88.33% and 83.33%, respectively; a 98% navigation-and-grasping success rate with an average trajectory deviation of 3.6 cm; and a 91.1% end-to-end object-transport success rate. The system substantially improves the intuitiveness and reliability of human-robot interaction in assistive applications.
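For intuition, here is a minimal PyTorch sketch of the kind of CNN-LSTM fusion classifier described above: 1-D convolutions extract short-time features from synchronized acoustic/inertial/pressure channels stacked along the channel axis (early fusion), and an LSTM models how those features evolve over the gesture window. The `GestureFusionNet` name, channel count, and all layer sizes are illustrative assumptions; the paper's exact architecture is not given here.

```python
import torch
import torch.nn as nn

class GestureFusionNet(nn.Module):
    """CNN-LSTM classifier over synchronized multimodal windows.

    Hypothetical sketch: channel counts and layer sizes are
    illustrative, not the authors' published architecture.
    """
    def __init__(self, in_channels=10, num_classes=6, hidden=64):
        super().__init__()
        # 1-D convolutions extract short-time features from the stacked
        # acoustic/inertial/pressure channels of each sensing window.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models how those features evolve across the gesture.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, channels, time) -- all modalities resampled to a
        # common rate and stacked along the channel axis (early fusion).
        feats = self.cnn(x)                # (batch, 64, time/4)
        feats = feats.transpose(1, 2)      # (batch, time/4, 64)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])          # logits over the 6 classes

# Example: a batch of 2-second windows at an assumed 100 Hz rate
# with 10 fused channels.
logits = GestureFusionNet()(torch.randn(8, 10, 200))
print(logits.shape)  # torch.Size([8, 6])
```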
📝 Abstract
This paper proposes a wearable-controlled mobile manipulator system for smart-home assistance, integrating MEMS capacitive microphones, IMU sensors, vibration motors, and pressure feedback to enhance human-robot interaction. The wearable device captures forearm muscle activity and converts it into real-time control signals for mobile manipulation. Using a CNN-LSTM model, the device achieves an offline classification accuracy of 88.33% across six distinct movement-force hand-gesture classes, while real-world experiments with five participants yield a practical accuracy of 83.33% and an average system response time of 1.2 seconds. In human-robot collaborative navigation and grasping tasks, the robot achieves a 98% task success rate with an average trajectory deviation of only 3.6 cm. Finally, in object grasping and transfer tests covering nine object-texture combinations, the system achieves a 93.3% gripping success rate, a 95.6% transfer success rate, and a 91.1% full-task success rate. The results of these three experiments validate the effectiveness of MEMS-based wearable sensing combined with multi-sensor fusion for reliable and intuitive control of assistive robots in smart-home scenarios.
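To illustrate the closed control loop, the sketch below shows one plausible way a six-class softmax output could be mapped to mobile-manipulator commands. The command table, the confidence gate, and the `gesture_to_command` helper are all hypothetical: the abstract does not enumerate the gesture classes or describe any rejection logic.

```python
from dataclasses import dataclass

# Hypothetical command table: the paper reports six movement-force
# classes but does not enumerate them, so these labels are assumptions.
GESTURE_COMMANDS = {
    0: ("base", "forward"),
    1: ("base", "backward"),
    2: ("base", "turn_left"),
    3: ("base", "turn_right"),
    4: ("gripper", "close"),
    5: ("gripper", "open"),
}

CONFIDENCE_THRESHOLD = 0.8  # assumed gate; rejects uncertain windows

@dataclass
class RobotCommand:
    subsystem: str  # "base" or "gripper"
    action: str

def gesture_to_command(probs):
    """Map a 6-class softmax output to a robot command, or None to hold.

    Rejecting low-confidence windows trades a little latency for fewer
    spurious motions, one plausible factor behind a response time on
    the order of the reported 1.2 s.
    """
    cls = max(range(len(probs)), key=probs.__getitem__)
    if probs[cls] < CONFIDENCE_THRESHOLD:
        return None  # hold the last state and wait for a clearer window
    subsystem, action = GESTURE_COMMANDS[cls]
    return RobotCommand(subsystem, action)

# Example: a confident "close gripper" prediction.
print(gesture_to_command([0.02, 0.03, 0.02, 0.02, 0.88, 0.03]))
```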