🤖 AI Summary
To address missing modalities in incomplete multimodal emotion recognition (IMER), which arise from sensor failures or noisy inputs, this paper proposes RoHyDR, a Robust Hybrid Diffusion Recovery framework. RoHyDR integrates diffusion-based generative modeling with adversarial learning to recover missing information at four levels (unimodal representation, multimodal fusion, feature, and semantic) in a hierarchical, multi-stage manner. Specifically, it employs a conditional diffusion generator, conditioned on the available modalities, to produce distribution-consistent and semantically aligned unimodal representations; adversarial learning to recover a realistic fused multimodal representation together with its missing semantic content; and a multi-stage optimization strategy that improves training stability and efficiency. Extensive experiments on two widely used multimodal emotion recognition benchmarks demonstrate that RoHyDR consistently outperforms state-of-the-art IMER methods, achieving robust recognition performance and strong generalization under various missing-modality scenarios.
📝 Abstract
Multimodal emotion recognition analyzes emotions by combining data from multiple sources. However, real-world noise or sensor failures often cause missing or corrupted data, creating the Incomplete Multimodal Emotion Recognition (IMER) challenge. In this paper, we propose Robust Hybrid Diffusion Recovery (RoHyDR), a novel framework that performs missing-modality recovery at the unimodal, multimodal, feature, and semantic levels. For unimodal representation recovery of missing modalities, RoHyDR exploits a diffusion-based generator to synthesize distribution-consistent and semantically aligned representations from Gaussian noise, using the available modalities as conditioning. For multimodal fusion recovery, we introduce adversarial learning to produce a realistic fused multimodal representation and recover missing semantic content. We further propose a multi-stage optimization strategy that enhances training stability and efficiency. In contrast to previous work, the hybrid diffusion- and adversarial-learning-based recovery mechanism in RoHyDR recovers missing information in both unimodal representations and the multimodal fusion, at both the feature and semantic levels, effectively mitigating performance degradation caused by suboptimal optimization. Comprehensive experiments on two widely used multimodal emotion recognition benchmarks demonstrate that our proposed method outperforms state-of-the-art IMER methods, achieving robust recognition performance under various missing-modality scenarios. Our code will be made publicly available upon acceptance.
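The core recovery idea, generating a missing modality's representation from Gaussian noise via reverse diffusion conditioned on the available modalities, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the dimensions, the linear noise schedule, and the stand-in noise predictor `eps_theta` (a fixed random linear map, where RoHyDR would use a trained conditional network) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's actual sizes are not given).
D_MISS, D_COND, T = 16, 16, 50

# Standard DDPM-style linear noise schedule.
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Stand-in noise predictor eps_theta(x_t, t, cond): a fixed random linear
# map over [x_t, cond, t/T]. In the actual method this is a trained network.
W = rng.normal(0.0, 0.05, size=(D_MISS, D_MISS + D_COND + 1))

def eps_theta(x_t, t, cond):
    inp = np.concatenate([x_t, cond, [t / T]])
    return W @ inp

def recover_missing_modality(cond):
    """Ancestral sampling: start from Gaussian noise and iteratively
    denoise, conditioning each step on the available-modality features."""
    x = rng.standard_normal(D_MISS)
    for t in reversed(range(T)):
        eps = eps_theta(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(D_MISS) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

audio_feat = rng.standard_normal(D_COND)          # an available modality
recovered_text = recover_missing_modality(audio_feat)  # recovered representation
print(recovered_text.shape)  # (16,)
```

The recovered representation would then feed into multimodal fusion, where the paper's adversarial objective pushes the fused result toward what a complete-modality input would produce.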