🤖 AI Summary
This paper addresses the problem that multimodal language understanding (MLU) models often mistake spurious statistical correlations for causal features, leading to poor out-of-distribution (OOD) generalization. To tackle this, we propose the Causal Multimodal Information Bottleneck (CaMIB) framework. CaMIB innovatively integrates a parametric mask generator, instrumental variable constraints, and backdoor adjustment within the information bottleneck principle to explicitly decouple causal features from non-causal shortcuts. By jointly modeling vision–language inputs while suppressing task-irrelevant noise, CaMIB enhances both OOD robustness and interpretability. Extensive experiments on multimodal sentiment analysis, humor detection, and sarcasm detection demonstrate consistent and significant improvements over existing state-of-the-art methods. These results validate the critical role of causal representation learning in improving generalization for multimodal understanding.
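The information-bottleneck filtering step described above can be illustrated with a minimal variational-IB-style sketch. This is not the paper's implementation; the encoder outputs, dimensions, and the weight `beta` are illustrative assumptions, and the learned encoders are replaced by random values for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over features,
    averaged over the batch -- the standard variational IB compression term."""
    return 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

# Hypothetical unimodal features encoded as a Gaussian posterior
# (in practice mu and log_var come from a learned encoder).
mu = rng.normal(size=(4, 8))
log_var = rng.normal(scale=0.1, size=(4, 8))

# Reparameterized bottleneck sample z: injected noise plus the KL penalty
# below pressure the code to drop task-irrelevant detail.
z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

beta = 1e-3  # illustrative trade-off weight between prediction and compression
compression_penalty = beta * vib_kl(mu, log_var)
```

In training, this penalty would be added to the task loss, so each unimodal stream is compressed before fusion.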
📄 Abstract
Human Multimodal Language Understanding (MLU) aims to infer human intentions by integrating related cues from heterogeneous modalities. Existing works predominantly follow a "learning to attend" paradigm, which maximizes mutual information between data and labels to enhance predictive performance. However, such methods are vulnerable to unintended dataset biases, causing models to conflate statistical shortcuts with genuine causal features and resulting in degraded out-of-distribution (OOD) generalization. To alleviate this issue, we introduce a Causal Multimodal Information Bottleneck (CaMIB) model that leverages causal principles rather than conventional likelihood maximization. Concretely, we first apply the information bottleneck to filter unimodal inputs, removing task-irrelevant noise. A parameterized mask generator then disentangles the fused multimodal representation into causal and shortcut subrepresentations. To ensure global consistency of causal features, we incorporate an instrumental variable constraint, and we further adopt backdoor adjustment by randomly recombining causal and shortcut features to stabilize causal estimation. Extensive experiments on multimodal sentiment analysis, humor detection, and sarcasm detection, along with OOD test sets, demonstrate the effectiveness of CaMIB. Theoretical and empirical analyses further highlight its interpretability and soundness.
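The mask-based disentanglement and the recombination used for backdoor adjustment can be sketched as follows. This is a minimal illustration, not the authors' code: the fused representation `H`, the single-linear-layer mask generator, and all dimensions are assumptions, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical fused multimodal representations: batch of 4 samples, dim 8.
H = rng.normal(size=(4, 8))

# Parameterized mask generator (illustrated as one linear layer with random
# weights; in the framework it would be trained end-to-end).
W = rng.normal(size=(8, 8))
mask = sigmoid(H @ W)  # soft mask in (0, 1), one value per feature

causal = mask * H            # causal sub-representation
shortcut = (1.0 - mask) * H  # shortcut sub-representation

# Backdoor-style adjustment: pair each sample's causal part with a randomly
# chosen sample's shortcut part, breaking their statistical association so
# the predictor cannot rely on the shortcut.
perm = rng.permutation(len(H))
recombined = causal + shortcut[perm]
```

A task head trained on `recombined` (alongside the original features) is then encouraged to predict from the causal component alone.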