Mitigating Modal Imbalance in Multimodal Reasoning

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a critical failure mode in foundation models (FMs) during multimodal joint reasoning—specifically, reasoning breakdowns induced by cross-modal attention imbalance, which manifests as pronounced bias under vision-language conflict scenarios. To address this, we propose an explicit multimodal fusion training paradigm: leveraging cross-modal conflict analysis, attention visualization, and targeted data augmentation to construct informative multimodal training samples. We systematically evaluate our approach across multiple vision-language benchmarks. Experiments demonstrate a substantial increase in cross-modal conflict detection rate—from 3% to 52%—significant mitigation of attention distribution imbalance, and consistent downstream performance gains: +2.1% on average across VQA and image captioning tasks. To our knowledge, this is the first study to formally identify attention imbalance as a fundamental bottleneck in multimodal reasoning and to introduce a scalable, balance-aware training framework for foundation models.

Technology Category

Application Category

📝 Abstract
Foundation models (FMs) deployed in real-world tasks such as computer-use agents must integrate diverse modalities. How good are FMs at performing joint reasoning, simultaneously reasoning over multiple modalities, especially when the modalities interact and relate to each other to form cross-modal context? To better understand this problem, we study FMs on cross-modal conflicts: scenarios where conflicting evidence is presented across modalities. This allows us to examine whether FMs prioritize one modality over another or reason jointly to reconcile the conflict. Our experiments reveal that FMs can recognize conflicts in unimodal contexts, composed of a single modality, 90% of the time, but the ratio falls as low as 3% when evidence is split across modalities -- similar observations hold in cross-lingual contexts, composed of multiple languages. We trace this failure to cross-modal attention imbalance, showing that FMs exhibit extreme asymmetry in attention scores, disproportionately prioritizing certain modalities. We show that cross-modal attention imbalance does not go away by simply scaling up multimodal or multilingual datasets blindly, since they lack training examples that explicitly require cross-modal reasoning. We demonstrate that even a simple and scalable method of explicitly combining multiple modalities within each training instance significantly reduces attention imbalance. Reduced attention imbalance directly translates to improved downstream performance on several vision-language benchmarks. Our findings underscore the importance of systematically addressing cross-modal contexts to build reliable foundation models.
Problem

Research questions and friction points this paper is trying to address.

Foundation models struggle with multimodal reasoning conflicts across modalities
Models exhibit attention imbalance by disproportionately prioritizing certain modalities
Current training lacks explicit cross-modal reasoning examples causing performance drops
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicitly combining multiple modalities in training instances
Reducing cross-modal attention imbalance in foundation models
Improving downstream performance on vision-language benchmarks
🔎 Similar Papers
No similar papers found.