🤖 AI Summary
Traditional text-only intent recognition captures limited context, while modern human–computer interaction increasingly demands robust integration of heterogeneous signals. This paper systematically surveys deep learning-based multimodal intent recognition, focusing on the joint modeling of textual, audio, visual, and physiological modalities. It traces the field's evolution from unimodal baselines to cross-modal fusion, highlighting the role of Transformer architectures in cross-modal alignment, feature fusion, and representation learning. The survey catalogs 12 mainstream multimodal datasets, standardizes evaluation metrics, and identifies representative application scenarios. It proposes a three-dimensional taxonomy spanning modality combinations, fusion levels (early, late, and hybrid), and learning paradigms (supervised, self-supervised, and few-shot), and critically analyzes key challenges, including modality asynchrony, few-shot generalization, and model interpretability. Future directions include improved cross-modal alignment, neuro-symbolic integration, and lightweight models for edge deployment, offering a structured reference for advancing multimodal intent understanding.
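To make the fusion-level dimension of the taxonomy concrete, the sketch below contrasts early (feature-level) and late (decision-level) fusion for a three-modality intent classifier. It is a minimal illustration under assumed conditions, not the paper's method: the PyTorch modules, hidden sizes, and feature dimensions are hypothetical placeholders for pooled per-modality features.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Early (feature-level) fusion: concatenate modality features,
    then classify with a single shared network."""
    def __init__(self, text_dim, audio_dim, video_dim, hidden, n_intents):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + audio_dim + video_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_intents),
        )

    def forward(self, t, a, v):
        return self.net(torch.cat([t, a, v], dim=-1))

class LateFusion(nn.Module):
    """Late (decision-level) fusion: score each modality separately,
    then average the per-modality intent logits."""
    def __init__(self, text_dim, audio_dim, video_dim, n_intents):
        super().__init__()
        self.text_head = nn.Linear(text_dim, n_intents)
        self.audio_head = nn.Linear(audio_dim, n_intents)
        self.video_head = nn.Linear(video_dim, n_intents)

    def forward(self, t, a, v):
        return (self.text_head(t) + self.audio_head(a) + self.video_head(v)) / 3

# Toy usage with hypothetical per-utterance feature sizes.
t, a, v = torch.randn(8, 768), torch.randn(8, 74), torch.randn(8, 35)
print(EarlyFusion(768, 74, 35, 256, 20)(t, a, v).shape)  # torch.Size([8, 20])
print(LateFusion(768, 74, 35, 20)(t, a, v).shape)        # torch.Size([8, 20])
```

Hybrid fusion, the third level in the taxonomy, combines both ideas, e.g. fusing features across some modalities while merging decisions from others.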
📝 Abstract
Intent recognition aims to identify users' underlying intentions and has traditionally focused on text within natural language processing. With growing demand for natural human–computer interaction, the field has evolved through deep learning and multimodal approaches that incorporate audio, visual, and physiological signals. More recently, Transformer-based models have brought notable breakthroughs to the domain. This article surveys deep learning methods for intent recognition, covering the shift from unimodal to multimodal techniques, relevant datasets, methodologies, applications, and current challenges. It provides researchers with insights into the latest developments in multimodal intent recognition (MIR) and directions for future research.
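As one illustration of how Transformer components are commonly applied to cross-modal alignment in MIR, the sketch below lets text tokens attend to audio frames via standard multi-head attention. This is a generic, hedged example rather than a method from the surveyed paper; the feature dimensions and projection layers are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; a real MIR system would take encoder outputs
# (e.g., contextual text embeddings and frame-level acoustic features).
TEXT_DIM, AUDIO_DIM, MODEL_DIM, NUM_HEADS = 768, 74, 256, 4

class CrossModalAttention(nn.Module):
    """Text tokens attend to audio frames: queries come from text,
    keys/values from audio, so each word gathers acoustic context."""
    def __init__(self):
        super().__init__()
        self.text_proj = nn.Linear(TEXT_DIM, MODEL_DIM)
        self.audio_proj = nn.Linear(AUDIO_DIM, MODEL_DIM)
        self.attn = nn.MultiheadAttention(MODEL_DIM, NUM_HEADS, batch_first=True)

    def forward(self, text_feats, audio_feats):
        q = self.text_proj(text_feats)           # (batch, n_words, MODEL_DIM)
        kv = self.audio_proj(audio_feats)        # (batch, n_frames, MODEL_DIM)
        aligned, weights = self.attn(q, kv, kv)  # audio-aware word representations
        return aligned, weights                  # weights expose the soft alignment

# Toy usage: one utterance with 12 words and 50 audio frames.
text = torch.randn(1, 12, TEXT_DIM)
audio = torch.randn(1, 50, AUDIO_DIM)
aligned, weights = CrossModalAttention()(text, audio)
print(aligned.shape, weights.shape)  # torch.Size([1, 12, 256]) torch.Size([1, 12, 50])
```

The attention weights give a soft word-to-frame alignment, which is one reason such blocks are popular for handling the modality asynchrony discussed above.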