🤖 AI Summary
To address privacy leakage, illumination sensitivity, and poor occlusion robustness in existing camera-based gesture recognition systems, this paper proposes an end-to-end, non-contact human–robot interaction system leveraging millimeter-wave (mmWave) radar. The system uniquely integrates radar-based gesture recognition with robot control via a behavior tree framework, enabling real-time mapping of nine distinct gestures to robotic arm commands. It achieves high-accuracy recognition and low-latency response under challenging conditions, including varying illumination and partial occlusion. Key technical contributions include real-time radar signal processing, a lightweight deep classification model, and a behavior-tree-driven dynamic command scheduling mechanism. Experimental evaluation demonstrates an average gesture recognition accuracy of 98.2% and an end-to-end system latency below 120 ms, significantly improving interaction continuity, environmental adaptability, and overall system robustness.
📝 Abstract
As robots become increasingly prevalent in both homes and industrial settings, the demand for intuitive and efficient human–machine interaction continues to rise. Gesture recognition offers an intuitive control method that requires no physical contact with devices and can be implemented using various sensing technologies. Wireless solutions are particularly flexible and minimally invasive. While camera-based vision systems are commonly used, they often raise privacy concerns and can struggle in complex or poorly lit environments. In contrast, radar sensing preserves privacy, is robust to occlusion and lighting changes, and provides rich spatial data such as distance, relative velocity, and angle. We present a gesture-controlled robotic arm using millimeter-wave (mmWave) radar for reliable, contactless motion recognition. Nine gestures are recognized and mapped to robotic arm commands in real time. Case studies demonstrate the system's practicality, performance, and reliability for gesture-based robotic manipulation. Unlike prior work that treats gesture recognition and robotic control separately, our system unifies both into a real-time pipeline for seamless, contactless human–robot interaction.
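The gesture-to-command mapping via a behavior tree, as described above, can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the gesture labels, the `Selector`/`ActionNode` node types, and the `make_dispatcher` helper are all hypothetical, showing only the general pattern of a selector ticking guarded actions until the recognized gesture matches one.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ActionNode:
    """Leaf node: runs a guarded robot command; returns True on success."""
    name: str
    action: Callable[[], bool]
    def tick(self) -> bool:
        return self.action()

@dataclass
class Selector:
    """Composite node: ticks children in order until one succeeds."""
    children: List[ActionNode] = field(default_factory=list)
    def tick(self) -> bool:
        return any(child.tick() for child in self.children)

def make_dispatcher(current_gesture: Callable[[], str]) -> Selector:
    """Build a selector with one guarded action per gesture.

    Only a subset of the paper's nine gestures is shown, with
    invented labels; each action fires only when its label matches
    the latest classifier output.
    """
    def guarded(label: str, command: Callable[[], bool]) -> ActionNode:
        return ActionNode(label, lambda: current_gesture() == label and command())
    return Selector([
        guarded("swipe_left",  lambda: print("arm: move left")  or True),
        guarded("swipe_right", lambda: print("arm: move right") or True),
        guarded("push",        lambda: print("arm: extend")     or True),
    ])
```

On each control cycle, the dispatcher would be ticked with the most recent classifier output; unrecognized gestures fall through every guard, so the tree simply returns failure and the arm holds its state.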