🤖 AI Summary
Low-quality or missing modalities in social media image–text pairs undermine model robustness in multimodal sentiment analysis.
Method: This paper proposes a unified recovery-and-fusion framework based on feature distribution modeling. It jointly addresses modality quality assessment, cross-modal feature completion, and quality-aware fusion within a single model by (i) implicitly estimating modality-specific feature distributions via feature queues; (ii) designing a distribution-supervised cross-modal mapping to recover missing modalities; and (iii) employing modality corruption and stochastic dropout during training for robustness.
Contribution/Results: Evaluated on three public benchmarks under two realistic interference scenarios—modality corruption and modality dropout—the method consistently outperforms state-of-the-art approaches. It demonstrates superior effectiveness and generalizability under multimodal degradation, establishing a new paradigm for robust multimodal sentiment analysis with heterogeneous and unreliable inputs.
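To make the feature-queue idea concrete, here is a minimal sketch of how per-modality queues could approximate a feature distribution and yield a quality weight for fusion. All names (`FeatureQueue`, `quality_weight`) and the z-score-based scoring are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

class FeatureQueue:
    """FIFO buffer approximating one modality's feature distribution (hypothetical sketch)."""

    def __init__(self, dim, capacity=1024):
        self.buf = np.zeros((capacity, dim))
        self.capacity = capacity
        self.size = 0
        self.ptr = 0

    def push(self, feats):
        # Enqueue a batch of features, overwriting the oldest entries when full.
        for f in feats:
            self.buf[self.ptr] = f
            self.ptr = (self.ptr + 1) % self.capacity
            self.size = min(self.size + 1, self.capacity)

    def stats(self):
        # Running mean/std of the queued features (a simple distribution estimate).
        data = self.buf[: self.size]
        return data.mean(axis=0), data.std(axis=0) + 1e-6

def quality_weight(feat, queue):
    """Down-weight features that lie far from the queue's estimated distribution."""
    mu, sigma = queue.stats()
    z = np.abs((feat - mu) / sigma).mean()  # mean absolute z-score
    return 1.0 / (1.0 + z)                  # lower quality -> smaller fusion weight
```

A quality-aware fusion step could then scale each modality's contribution by its weight before combining, so corrupted inputs influence the prediction less.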
📝 Abstract
As posts on social media increase rapidly, analyzing the sentiments embedded in image-text pairs has become a popular research topic in recent years. Although existing works achieve impressive results in jointly harnessing image and text information, they overlook the possibility of low-quality and missing modalities. In real-world applications, these issues occur frequently, creating an urgent need for models that predict sentiment robustly. Therefore, we propose a Distribution-based feature Recovery and Fusion (DRF) method for robust multimodal sentiment analysis of image-text pairs. Specifically, we maintain a feature queue for each modality to approximate its feature distribution, through which we can handle low-quality and missing modalities in a unified framework. For low-quality modalities, we reduce their contributions to the fusion by quantitatively estimating modality quality from the distributions. For missing modalities, we build inter-modal mapping relationships supervised by both samples and distributions, thereby recovering the missing modalities from the available ones. In experiments, two disruption strategies that corrupt or discard modalities in samples are adopted to mimic low-quality and missing modalities in various real-world scenarios. Through comprehensive experiments on three publicly available image-text datasets, we demonstrate consistent improvements of DRF over state-of-the-art methods under both strategies, validating its effectiveness for robust multimodal sentiment analysis.
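The two disruption strategies described above can be sketched as simple data transforms. The function names, noise model, and probabilities below are illustrative assumptions chosen to mirror the idea of corrupting or discarding modalities, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(feat, noise_std=1.0, p=0.5):
    """Modality corruption: with probability p, add Gaussian noise to a feature
    vector to simulate a low-quality modality (assumed noise model)."""
    if rng.random() < p:
        return feat + rng.normal(0.0, noise_std, size=feat.shape)
    return feat

def drop(feats, p=0.3):
    """Modality dropout: discard (zero out) each modality with probability p,
    while always keeping at least one modality available."""
    mask = rng.random(len(feats)) >= p
    if not mask.any():
        mask[rng.integers(len(feats))] = True  # force one surviving modality
    return [f if keep else np.zeros_like(f) for f, keep in zip(feats, mask)]
```

Applying these transforms to evaluation samples mimics the heterogeneous, unreliable inputs the method is designed to handle.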