MLLMs are Deeply Affected by Modality Bias

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from severe modality bias—over-reliance on linguistic inputs while underutilizing visual and other modalities—undermining robustness and generalization. This work systematically identifies three root causes: (1) modality imbalance in training data, (2) architectural bias in backbone networks favoring text processing, and (3) objective-induced “short-circuiting” that bypasses cross-modal integration. We propose a cross-modal alignment optimization framework integrating diagnostic analysis, collaborative attribution experiments, and quantifiable evaluation metrics. Empirical results demonstrate that language dominance significantly impairs effective visual representation encoding. Our study establishes the first comprehensive modality bias attribution framework for MLLMs and provides a reproducible, balanced training methodology. By bridging theoretical analysis with practical intervention, this work lays foundational groundwork—both conceptual and methodological—for developing truly semantically unified multimodal foundation models.

Technology Category

Application Category

📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have shown promising results in integrating diverse modalities such as texts and images. MLLMs are heavily influenced by modality bias, often relying on language while under-utilizing other modalities like visual inputs. This position paper argues that MLLMs are deeply affected by modality bias. Firstly, we diagnose the current state of modality bias, highlighting its manifestations across various tasks. Secondly, we propose a systematic research road-map related to modality bias in MLLMs. Thirdly, we identify key factors of modality bias in MLLMs and offer actionable suggestions for future research to mitigate it. To substantiate these findings, we conduct experiments that demonstrate the influence of each factor: 1. Data Characteristics: Language data is compact and abstract, while visual data is redundant and complex, creating an inherent imbalance in learning dynamics. 2. Imbalanced Backbone Capabilities: The dominance of pretrained language models in MLLMs leads to overreliance on language and neglect of visual information. 3. Training Objectives: Current objectives often fail to promote balanced cross-modal alignment, resulting in shortcut learning biased toward language. These findings highlight the need for balanced training strategies and model architectures to better integrate multiple modalities in MLLMs. We call for interdisciplinary efforts to tackle these challenges and drive innovation in MLLM research. Our work provides a fresh perspective on modality bias in MLLMs and offers insights for developing more robust and generalizable multimodal systems-advancing progress toward Artificial General Intelligence.
Problem

Research questions and friction points this paper is trying to address.

MLLMs over-rely on language, neglecting visual inputs.
Modality bias stems from data and training imbalances.
Current MLLMs lack balanced cross-modal alignment strategies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diagnose modality bias in MLLMs
Propose research road-map for bias
Identify key factors affecting bias