🤖 AI Summary
Multimodal sarcasm detection faces two key challenges: (1) models often exploit spurious dataset shortcuts—such as idiosyncratic visual backgrounds or prosodic cues—leading to poor generalization; and (2) existing fusion strategies fail to effectively capture cross-modal sarcastic semantics. To address these, we propose the Multimodal Conditional Information Bottleneck (MCIB) model, which implements a conditional feature compression mechanism grounded in information bottleneck theory. MCIB preserves sarcasm-discriminative information while suppressing inter-modal redundancy and shortcut correlations. To rigorously evaluate robustness, we introduce MUStARD++$^{R}$, a de-biased benchmark dataset explicitly designed to eliminate shortcut biases. Experiments demonstrate that MCIB achieves state-of-the-art performance under shortcut-free evaluation, exhibits superior cross-domain generalization, and attains the highest accuracy in real-world test scenarios. These results validate the efficacy of conditional information bottleneck principles for complex affective understanding in multimodal settings.
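As background, the compression principle MCIB builds on can be sketched with the classical information bottleneck objective (this is the standard Tishby-style formulation, not the paper's exact conditional loss, which is not given here): compress the input $X$ into a representation $Z$ while retaining information about the label $Y$.

```latex
% Classical information bottleneck objective:
% minimize mutual information between input X and code Z
% while keeping Z informative about the target Y,
% with beta trading off compression against prediction.
\min_{p(z \mid x)} \; I(X; Z) \; - \; \beta \, I(Z; Y)
```

A conditional variant, as the model name suggests, would apply this compression per modality conditioned on the other modalities, so that redundant or shortcut-correlated features are discarded rather than fused.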
📝 Abstract
Multimodal sarcasm detection is a complex task that requires distinguishing subtle complementary signals across modalities while filtering out irrelevant information. Many advanced methods rely on learning shortcuts from datasets rather than extracting the intended sarcasm-related features, and our experiments show that such shortcut learning impairs a model's generalization in real-world scenarios. Furthermore, we reveal the weaknesses of current modality fusion strategies for multimodal sarcasm detection through systematic experiments, highlighting the necessity of focusing on effective modality fusion for complex emotion recognition. To address these challenges, we construct MUStARD++$^{R}$ by removing shortcut signals from MUStARD++. We then introduce a Multimodal Conditional Information Bottleneck (MCIB) model to enable efficient multimodal fusion for sarcasm detection. Experimental results show that MCIB achieves the best performance without relying on shortcut learning.