🤖 AI Summary
This work presents the first systematic study of a pervasive text-dominance problem across modalities (image, video, audio, time-series, and graph) in multimodal large language models (MLLMs). Moving beyond prior studies confined to vision-language tasks, we propose two quantitative metrics, the Modality Dominance Index (MDI) and the Attention Efficiency Index (AEI), and diagnose three root causes: attention dilution, fusion-architecture design, and task-design bias. To mitigate text dominance, we introduce a token-compression-based multimodal attention rebalancing method. Evaluated on LLaVA-7B, our approach reduces MDI from 10.23 to 0.86, substantially improving the utilization of non-textual modalities. This work establishes a theoretical framework and a reproducible technical pathway for modality-fair MLLM modeling.
📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across a diverse range of multimodal tasks. However, these models suffer from a core problem known as text dominance: they rely heavily on textual input during inference while underutilizing other modalities. Prior work has acknowledged this phenomenon in vision-language tasks, often attributing it to data biases or model architectures. In this paper, we conduct the first systematic investigation of text dominance across diverse data modalities, including images, videos, audio, time-series, and graphs. To quantify this imbalance, we propose two evaluation metrics: the Modality Dominance Index (MDI) and the Attention Efficiency Index (AEI). Our comprehensive analysis reveals that text dominance is both significant and pervasive across all tested modalities, and identifies three underlying causes: attention dilution arising from severe token redundancy in non-textual modalities, the design of fusion architectures, and task formulations that implicitly favor textual inputs. Furthermore, we propose a simple token compression method that effectively rebalances model attention; applied to LLaVA-7B, for instance, it drastically reduces MDI from 10.23 to a well-balanced value of 0.86. Our analysis and methodological framework offer a foundation for developing more equitable and comprehensive multimodal language models.