🤖 AI Summary
Real-time intent detection for service robots faces three key challenges: reliance on RGB-D sensors or GPU acceleration, severely long-tailed data distributions, and poor cross-environment generalization. This paper proposes MINT-RVAE, a lightweight model that extracts 2D pose and facial affect features solely from monocular RGB video, enabling frame-level real-time inference on a Raspberry Pi 5 (CPU-only). Leveraging a compact RNN-VAE architecture, it generates temporally coherent multimodal intent sequences, effectively mitigating class imbalance. To our knowledge, this is the first work to systematically validate strong generalization across heterogeneous cameras and diverse physical environments. Offline evaluation achieves an AUROC of 0.95. When deployed on the real-world MIRA robot with a different onboard RGB sensor, MINT-RVAE attains 91% accuracy and 100% recall, with zero missed detections across 32 independent field tests.
📝 Abstract
Service robots in public spaces require real-time understanding of human behavioral intentions for natural interaction. We present a practical multimodal framework for frame-accurate human-robot interaction intent detection that fuses camera-invariant 2D skeletal pose and facial emotion features extracted from monocular RGB video. Unlike prior methods that require RGB-D sensors or GPU acceleration, our approach runs entirely on resource-constrained embedded hardware (Raspberry Pi 5, CPU-only). To address the severe class imbalance in natural human-robot interaction datasets, we introduce MINT-RVAE (Multimodal Recurrent Variational Autoencoder for Intent Sequence Generation), a novel approach that synthesizes temporally coherent pose-emotion-label sequences for data re-balancing. Comprehensive offline evaluations under cross-subject and cross-scene protocols demonstrate strong generalization, achieving frame- and sequence-level AUROC of 0.95. Crucially, we validate real-world generalization through cross-camera evaluation on the MIRA robot head, which employs a different onboard RGB sensor and operates in uncontrolled environments not represented in the training data. Despite this domain shift, the deployed system achieves 91% accuracy and 100% recall across 32 live interaction trials. The close correspondence between offline and deployed performance confirms the cross-sensor and cross-environment robustness of the proposed multimodal approach, highlighting its suitability for ubiquitous multimedia-enabled social robots.
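The core generative idea behind an RNN-VAE sequence synthesizer like MINT-RVAE can be illustrated with a minimal NumPy sketch: an RNN encoder compresses a pose-emotion-label sequence into a latent Gaussian, the reparameterization trick samples from it, and an RNN decoder unrolls a new temporally coherent sequence. All dimensions, the vanilla-tanh RNN cell, and the random weight initializations below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): each frame concatenates
# 2D pose keypoints, emotion logits, and an intent label.
FRAME_DIM, HIDDEN_DIM, LATENT_DIM, SEQ_LEN = 40, 64, 16, 30

def init(shape):
    return rng.normal(0.0, 0.1, shape)

def rnn_step(x, h, W_xh, W_hh, b):
    """One vanilla RNN step with a tanh cell."""
    return np.tanh(x @ W_xh + h @ W_hh + b)

# Encoder parameters: frames -> hidden state -> Gaussian latent
W_xh_e, W_hh_e, b_e = init((FRAME_DIM, HIDDEN_DIM)), init((HIDDEN_DIM, HIDDEN_DIM)), np.zeros(HIDDEN_DIM)
W_mu, W_lv = init((HIDDEN_DIM, LATENT_DIM)), init((HIDDEN_DIM, LATENT_DIM))
# Decoder parameters: latent -> hidden state -> reconstructed frames
W_zh, W_hh_d, b_d = init((LATENT_DIM, HIDDEN_DIM)), init((HIDDEN_DIM, HIDDEN_DIM)), np.zeros(HIDDEN_DIM)
W_out = init((HIDDEN_DIM, FRAME_DIM))

def encode(seq):
    """Run the encoder RNN over a (T, FRAME_DIM) sequence; return (mu, logvar)."""
    h = np.zeros(HIDDEN_DIM)
    for frame in seq:
        h = rnn_step(frame, h, W_xh_e, W_hh_e, b_e)
    return h @ W_mu, h @ W_lv

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so the sampling step stays differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, length):
    """Unroll the decoder RNN conditioned on z to emit `length` frames."""
    h = np.zeros(HIDDEN_DIM)
    frames = []
    for _ in range(length):
        h = rnn_step(z, h, W_zh, W_hh_d, b_d)
        frames.append(h @ W_out)
    return np.stack(frames)

# Re-balancing use case: sample z from the prior and generate a
# synthetic sequence for an under-represented intent class.
z = rng.standard_normal(LATENT_DIM)
synthetic = decode(z, SEQ_LEN)
print(synthetic.shape)  # (30, 40)
```

In a real implementation the weights would of course be trained end-to-end with the usual VAE objective (reconstruction loss plus a KL term), and generated sequences would then augment the minority classes before training the intent classifier.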