🤖 AI Summary
In multimodal emotion and intent recognition, sensor failures or data corruption often leave modalities missing, yet existing reconstruction methods suffer from excessive inter-modal coupling and distortion when generating the missing features. To address this, we propose an Attention-based Diffusion model for Missing Modalities feature Completion (ADMC) that decouples modality-specific representation learning via independently trained modality encoders and guides the diffusion process with cross-modal attention to achieve high-fidelity, distribution-consistent imputation of missing modality features. The framework enables end-to-end joint optimization and cross-modal enhancement, improving robustness under both complete- and missing-modality scenarios. Evaluated on the IEMOCAP and MIntRec benchmarks, ADMC outperforms current state-of-the-art methods, achieving average accuracy improvements of 3.2% for emotion recognition and 4.1% for intent recognition.
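For context, attention-guided diffusion imputation of this kind is typically trained with the standard conditional denoising objective shown below. The conditioning variable c, denoting the cross-modal attention summary of the observed modalities, is our illustrative notation and not taken from the paper:

```latex
% Standard conditional DDPM objective (illustrative notation):
% x_0 is the clean feature of the missing modality, \bar{\alpha}_t the noise
% schedule, and c the cross-modal attention context from observed modalities.
\mathcal{L}(\theta) = \mathbb{E}_{x_0,\, t,\, \epsilon \sim \mathcal{N}(0, I)}
\Big[ \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0
+ \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\ t,\ c\big) \big\|_2^2 \Big]
```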
📝 Abstract
Multimodal emotion and intent recognition is essential for automated human-computer interaction; it aims to analyze users' speech, text, and visual information to predict their emotions or intent. A significant challenge is missing modalities caused by sensor malfunctions or incomplete data. Traditional methods that attempt to reconstruct missing information often suffer from over-coupling and imprecise generation, leading to suboptimal outcomes. To address these issues, we introduce an Attention-based Diffusion model for Missing Modalities feature Completion (ADMC). Our framework independently trains a feature extraction network for each modality, preserving its unique characteristics and avoiding over-coupling. The Attention-based Diffusion Network (ADN) then generates missing modality features that closely align with the authentic multimodal distribution, enhancing performance across all missing-modality scenarios. Moreover, ADN's cross-modal generation improves recognition even in full-modality settings. Our approach achieves state-of-the-art results on the IEMOCAP and MIntRec benchmarks, demonstrating its effectiveness in both missing- and complete-modality scenarios.
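To make the architecture concrete, below is a minimal PyTorch sketch of an attention-conditioned denoiser of the kind the abstract describes, together with a standard DDPM-style training step. All class names, dimensions, and the specific objective are illustrative assumptions; the paper's actual ADN implementation may differ.

```python
# Hypothetical sketch of an attention-conditioned denoiser for missing-modality
# feature completion. Names, shapes, and the DDPM-style objective are
# illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn


class AttentionDenoiser(nn.Module):
    """Predicts the noise added to a missing modality's feature vector,
    conditioned on the observed modalities via cross-modal attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        # The noisy target feature queries features from the observed modalities.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x_t, t, context):
        # x_t: (B, 1, D) noisy missing-modality feature; context: (B, S, D)
        # features from independently trained encoders of observed modalities.
        h = x_t + self.time_mlp(t.float().unsqueeze(-1)).unsqueeze(1)
        attn_out, _ = self.cross_attn(h, context, context)
        return self.out(h + attn_out)  # predicted noise, shape (B, 1, D)


def diffusion_loss(model, x0, context, alphas_cumprod):
    """One standard DDPM training step: noise a clean feature x0 at a random
    timestep and regress the denoiser's output onto the injected noise."""
    B = x0.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (B,))
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return nn.functional.mse_loss(model(x_t, t, context), noise)
```

At inference, one would run the usual reverse-diffusion loop from Gaussian noise, passing the observed-modality context at every step, so the generated feature stays anchored to the authentic multimodal distribution the abstract refers to.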