🤖 AI Summary
Multimodal large language models (MLLMs) suffer from spurious visual-textual correlations, undermining their robustness and generalization. To address this, we propose the first debiasing framework grounded in causal mediation analysis: it employs counterfactual reasoning to disentangle core semantic content from noisy contextual cues and introduces a modality-aware, dynamic-routing mixture-of-experts (MoE) architecture that enables collaborative, adaptive debiasing across the vision and language modalities. The method integrates causal inference with gated expert routing to actively suppress spurious signals during training. Evaluated on multimodal sarcasm detection and sentiment analysis, the framework consistently outperforms state-of-the-art debiasing methods and mainstream MLLMs, demonstrating both the efficacy of its causally motivated debiasing mechanism and its strong cross-task generalizability.
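The summary does not spell out how the counterfactual disentanglement is computed. A common instantiation in the causal-mediation debiasing literature subtracts the natural direct effect of a context-only counterfactual from the total effect of the full input (a total-indirect-effect formulation). The sketch below assumes that formulation; `model`, `core_mask`, and the zeroing-based counterfactual are illustrative placeholders, not the paper's actual implementation:

```python
import torch

def debiased_logits(model, image_feats, text_embeds, core_mask):
    """Counterfactual-mediation sketch (assumed TIE-style formulation).

    TE  : logits from the full multimodal input.
    NDE : logits from a counterfactual input in which the core semantic
          tokens are zeroed out, so only the (potentially spurious)
          context can drive the prediction.
    TIE = TE - NDE retains the causal contribution of the core semantics
    while suppressing the contextual shortcut.
    """
    te = model(image_feats, text_embeds)                               # total effect
    ctx_only = text_embeds * (~core_mask).unsqueeze(-1).float()        # remove core tokens
    nde = model(image_feats, ctx_only)                                 # natural direct effect
    return te - nde                                                    # total indirect effect
```

Training on a cross-entropy loss over these debiased logits would penalize predictions that the context-only branch can already produce on its own, which is one way to "activate training-stage debiasing" as described.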
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown substantial capabilities in integrating visual and textual information, yet they frequently rely on spurious correlations, undermining their robustness and generalization in complex multimodal reasoning tasks. This paper addresses the critical challenge of superficial correlation bias in MLLMs through a novel causal mediation-based debiasing framework. Specifically, we distinguish core semantics from spurious textual and visual contexts via counterfactual examples to drive training-stage debiasing, and we employ a Mixture-of-Experts (MoE) architecture with dynamic routing to selectively engage modality-specific debiasing experts. Empirical evaluation on multimodal sarcasm detection and sentiment analysis tasks demonstrates that our framework significantly surpasses unimodal debiasing strategies and existing state-of-the-art models.
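The abstract's dynamically routed, modality-specific debiasing experts could be realized as a small gated layer on top of the fused multimodal representation. The expert count, gate design, and soft-weighted fusion below are assumptions for illustration, not the paper's reported architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DebiasMoE(nn.Module):
    """Sketch of a modality-aware mixture of debiasing experts (illustrative).

    A gating network reads the fused representation and emits soft routing
    weights over modality-specific experts (e.g., one handling visual context
    bias, one handling textual context bias), so the degree of debiasing per
    modality is chosen per example.
    """

    def __init__(self, dim, num_experts=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # dynamic routing weights

    def forward(self, fused):  # fused: (batch, dim) joint vision-language feature
        weights = F.softmax(self.gate(fused), dim=-1)                       # (batch, E)
        expert_out = torch.stack([e(fused) for e in self.experts], dim=1)   # (batch, E, dim)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)              # weighted fusion
```

In a full pipeline, `fused` would come from the MLLM's joint representation, and the routed output would feed the classification head used for the sarcasm or sentiment prediction.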