🤖 AI Summary
To address the need for precise multimodal gesture understanding in medical sign language recognition, this paper proposes a novel multimodal framework integrating RGB video and radar range-Doppler maps. Methodologically, it introduces an attention-based dynamic feature fusion strategy that orchestrates four heterogeneous spatiotemporal neural networks to jointly model visual appearance, motion dynamics, and micro-motion spectral features, enabling cross-modal complementary learning. Its key innovation lies in the first incorporation of millimeter-wave radar micro-motion information into medical sign language recognition, coupled with attention-driven adaptive weighted fusion, significantly enhancing model robustness and fine-grained discriminative capability. Evaluated on a large-scale Italian Sign Language dataset, the framework achieves 99.44% accuracy, outperforming the state of the art by 2.1%. This work establishes a new paradigm for high-accuracy, low-latency clinical sign language interaction.
📝 Abstract
Accurate recognition of sign language in healthcare communication poses a significant challenge, requiring frameworks that can accurately interpret complex multimodal gestures. To address this, we propose FusionEnsemble-Net, a novel attention-based ensemble of spatiotemporal networks that dynamically fuses visual and motion data to enhance recognition accuracy. The proposed approach processes the RGB video and range-Doppler map radar modalities synchronously through four different spatiotemporal networks. Within each network, features from both modalities are continuously fused using an attention-based fusion module before being passed to a classifier. Finally, the outputs of the four fused channels are combined in an ensemble classification head, further enhancing the model's robustness. Experiments demonstrate that FusionEnsemble-Net outperforms state-of-the-art approaches with a test accuracy of 99.44% on the large-scale MultiMeDaLIS dataset for Italian Sign Language. Our findings indicate that an ensemble of diverse spatiotemporal networks, unified by attention-based fusion, yields a robust and accurate framework for complex, multimodal isolated gesture recognition tasks. The source code is available at: https://github.com/rezwanh001/Multimodal-Isolated-Italian-Sign-Language-Recognition.
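The fusion-then-ensemble pipeline described above can be illustrated schematically. This is not the paper's implementation (which uses trained spatiotemporal networks); it is a minimal NumPy sketch, under the assumption that each branch fuses its RGB and radar feature vectors via softmax attention weights and that the ensemble head averages class probabilities across the four branches. All function names and the scoring vector `w` are hypothetical placeholders for learned components.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(rgb_feat, radar_feat, w):
    """Fuse two modality feature vectors with attention weights.

    `w` stands in for a learned scoring projection; each modality
    gets a scalar score, softmaxed into weights that sum to 1.
    """
    feats = np.stack([rgb_feat, radar_feat])   # (2, d)
    scores = feats @ w                         # (2,) one score per modality
    alpha = softmax(scores, axis=0)            # attention weights over modalities
    return (alpha[:, None] * feats).sum(axis=0)  # weighted sum -> (d,)

def ensemble_predict(branch_logits):
    """Average class probabilities over the four fused branches."""
    probs = np.stack([softmax(l) for l in branch_logits])  # (4, n_classes)
    return probs.mean(axis=0)
```

In the actual framework, each of the four branches would produce its own fused representation and classifier logits; the averaging head is one common ensemble choice, shown here purely to make the data flow concrete.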