🤖 AI Summary
Multimodal large language models (MLLMs) frequently suffer from cross-modal hallucinations due to visual and linguistic prior biases. Existing decoding-stage methods merely model statistical correlations, neglecting the causal relationships between attention mechanisms and outputs. To address this, we introduce structural causal modeling into MLLMs for the first time, formalizing modality priors as confounders affecting both attention and output. We propose a causality-driven attention disentanglement framework that (i) corrects bias via backdoor adjustment, (ii) performs dual-level (visual and linguistic) attention intervention, and (iii) incorporates counterfactual reasoning to suppress hallucinations. Our method is plug-and-play and requires no fine-tuning. It achieves up to 65.3% improvement across six metrics on VLind-Bench and a 164-point gain on MME, significantly outperforming existing debiasing paradigms.
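The counterfactual step can be pictured as contrastive decoding: run the model once with its natural attention, once under a counterfactual intervention (e.g., uniform attention that "looks at nothing in particular"), and penalize tokens that remain likely even without grounded attention, since those are driven by modality priors. The sketch below is a minimal illustration under that assumption; the function names, the uniform intervention, and the `(1+α)·z − α·z_cf` contrast formula are illustrative choices, not the paper's exact implementation.

```python
def uniform_attention(n_keys: int) -> list[float]:
    """One simple counterfactual intervention: attend equally to every key,
    erasing any content-driven focus (a hypothetical stand-in for the
    paper's attention-level intervention)."""
    return [1.0 / n_keys] * n_keys

def counterfactual_contrast(logits_factual: list[float],
                            logits_cf: list[float],
                            alpha: float = 1.0) -> list[float]:
    """Contrast factual logits against counterfactual-attention logits.

    (1 + alpha) * z - alpha * z_cf amplifies tokens that depend on the real
    attention pattern and suppresses tokens the model would predict anyway
    under 'blind' attention, i.e., prior-driven hallucinations."""
    return [(1 + alpha) * zf - alpha * zc
            for zf, zc in zip(logits_factual, logits_cf)]

# Toy example: token 0 is prior-driven (same logit with or without grounded
# attention), token 1 is visually grounded (logit collapses under the
# counterfactual). The contrast reranks token 1 above token 0.
adjusted = counterfactual_contrast([2.0, 1.0], [2.0, -1.0], alpha=1.0)
```

Here `adjusted` becomes `[2.0, 3.0]`: the visually grounded token overtakes the prior-driven one after the contrast, which is the intended debiasing effect.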
📝 Abstract
Multimodal Large Language Models (MLLMs) have emerged as a central focus in both industry and academia, but they often suffer from biases introduced by visual and language priors, which can lead to multimodal hallucination. These biases originate in the visual encoder and the Large Language Model (LLM) backbone, affecting the attention mechanism responsible for aligning multimodal inputs. Existing decoding-based mitigation methods focus on statistical correlations and overlook the causal relationships between attention mechanisms and model output, limiting their effectiveness against these biases. To tackle this issue, we propose a causal inference framework, termed CausalMM, that applies structural causal modeling to MLLMs, treating modality priors as a confounder between attention mechanisms and output. Specifically, by employing backdoor adjustment and counterfactual reasoning at both the visual and language attention levels, our method mitigates the negative effects of modality priors and enhances the alignment between MLLMs' inputs and outputs, yielding a maximum score improvement of 65.3% on six VLind-Bench indicators and 164 points on the MME benchmark compared with conventional methods. Extensive experiments validate the effectiveness of our approach, which is a plug-and-play solution. Our code is available at: https://github.com/The-Martyr/CausalMM