🤖 AI Summary
This work addresses the performance degradation of vision-based hand gesture recognition under occlusion, illumination variation, and complex backgrounds by proposing a log-likelihood-ratio-based multimodal late fusion approach that integrates inertial data from Apple Watches worn on both wrists with signals from custom capacitive-sensing gloves. The resulting system enables robust and interpretable hands-free teleoperation. A synchronized dataset comprising IMU, capacitive, and RGB video streams was collected for 20 gestures inspired by aircraft marshalling signals. The proposed method achieves accuracy comparable to state-of-the-art visual models while substantially reducing computational overhead, model size, and training time, making it well-suited for real-time, reliable control of drones and mobile robots in hazardous environments.
📝 Abstract
Human operators are still frequently exposed to hazardous environments such as disaster zones and industrial facilities, where intuitive and reliable teleoperation of mobile robots and Unmanned Aerial Vehicles (UAVs) is essential. In this context, hands-free teleoperation enhances operator mobility and situational awareness, thereby improving safety in such settings. While vision-based gesture recognition has been explored as one method for hands-free teleoperation, its performance often deteriorates under occlusions, lighting variations, and cluttered backgrounds, limiting its applicability in real-world operations. To overcome these limitations, we propose a multimodal gesture recognition framework that integrates inertial data (accelerometer, gyroscope, and orientation) from Apple Watches on both wrists with capacitive sensing signals from custom gloves. We design a late fusion strategy based on the log-likelihood ratio (LLR), which not only enhances recognition performance but also provides interpretability by quantifying modality-specific contributions. To support this research, we introduce a new dataset of 20 distinct gestures inspired by aircraft marshalling signals, comprising synchronized RGB video, IMU, and capacitive sensor data. Experimental results demonstrate that our framework achieves performance comparable to a state-of-the-art vision-based baseline while significantly reducing computational cost, model size, and training time, making it well suited for real-time robot control. We therefore underscore the potential of sensor-based multimodal fusion as a robust and interpretable solution for gesture-driven mobile robot and drone teleoperation.
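The abstract does not spell out the fusion rule itself. As a rough illustration only, the sketch below shows one common way LLR-based late fusion can be realized when each modality (IMU and capacitive glove) has its own classifier emitting per-class probabilities: per-modality one-vs-rest log-likelihood ratios are summed, and each modality's term remains an inspectable contribution to the decision. The function names, the one-vs-rest LLR form, and the shapes are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of LLR-based late fusion for two modalities (IMU + capacitive glove).
# Assumes each modality-specific classifier outputs per-class probabilities over 20 gestures;
# names, shapes, and the one-vs-rest LLR form are illustrative, not taken from the paper.
import numpy as np

def llr_scores(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Log-likelihood ratio of each class against all competing classes (one-vs-rest)."""
    p = np.clip(probs, eps, 1.0)
    return np.log(p) - np.log(np.clip(1.0 - p, eps, 1.0))

def fuse(imu_probs: np.ndarray, cap_probs: np.ndarray):
    """Late fusion: sum per-modality LLRs, then pick the highest-scoring gesture."""
    imu_llr = llr_scores(imu_probs)   # evidence contributed by the IMU modality
    cap_llr = llr_scores(cap_probs)   # evidence contributed by the capacitive glove
    fused = imu_llr + cap_llr         # combined evidence per gesture class
    return int(np.argmax(fused)), imu_llr, cap_llr  # prediction + interpretable terms

# Usage example with random stand-in probabilities for 20 gesture classes.
rng = np.random.default_rng(0)
imu_p = rng.dirichlet(np.ones(20))
cap_p = rng.dirichlet(np.ones(20))
pred, imu_llr, cap_llr = fuse(imu_p, cap_p)
print(pred, imu_llr[pred], cap_llr[pred])
```

Summing LLRs rather than raw probabilities places both modalities' evidence on a common additive log scale, which is what makes the per-modality terms directly comparable and hence interpretable.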