🤖 AI Summary
This paper identifies a previously overlooked issue in multimodal large language models (MLLMs): *language prior conflict*, a misalignment between the inherent linguistic priors of the base LLM and the language distribution observed in multimodal training data, which biases vision-language alignment. To address this, the authors propose *Decoupled Proxy Alignment (DPA)*: a lightweight proxy language model is introduced to explicitly decouple vision-language alignment from the influence of the backbone's dominant linguistic priors, and a vision-relevance-aware dynamic loss weighting mechanism amplifies gradient signals for visually relevant tokens. DPA requires no modification to the backbone architecture and is compatible with diverse MLLM pretraining paradigms. Experiments across multiple datasets, model scales, and architectures demonstrate that DPA significantly mitigates language prior conflict, yielding consistent improvements in cross-modal alignment accuracy and generalization performance.
📝 Abstract
Multimodal large language models (MLLMs) have gained significant attention due to their impressive ability to integrate vision and language modalities. Recent advancements in MLLMs have primarily focused on improving performance through high-quality datasets, novel architectures, and optimized training strategies. However, in this paper, we identify a previously overlooked issue, which we term *language prior conflict*: a mismatch between the inherent language priors of large language models (LLMs) and the language priors in training datasets. This conflict leads to suboptimal vision-language alignment, as MLLMs are prone to adapting to the language style of training samples rather than learning the cross-modal mapping itself. To address this issue, we propose a novel training method called Decoupled Proxy Alignment (DPA). DPA introduces two key innovations: (1) the use of a proxy LLM during pretraining to decouple the vision-language alignment process from language prior interference, and (2) dynamic loss adjustment based on visual relevance to strengthen optimization signals for visually relevant tokens. Extensive experiments demonstrate that DPA significantly mitigates the language prior conflict, achieving superior alignment performance across diverse datasets, model families, and scales. Our method not only improves the effectiveness of MLLM training but also shows exceptional generalization capabilities, making it a robust approach for vision-language alignment. Our code is available at https://github.com/fnlp-vision/DPA.
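To make the second ingredient concrete, here is a minimal sketch of what a vision-relevance-aware loss weighting could look like. This is an illustrative assumption, not the paper's actual implementation: the function name `weighted_alignment_loss`, the `1 + alpha * relevance` weighting form, and the normalization by the weight sum are all hypothetical choices; the paper's precise weighting scheme is defined in its method section.

```python
import numpy as np

def weighted_alignment_loss(token_log_probs, relevance_scores, alpha=1.0):
    """Hypothetical sketch: per-token cross-entropy reweighted by visual relevance.

    token_log_probs: log-probabilities of the gold tokens, shape [T]
    relevance_scores: visual-relevance scores in [0, 1], shape [T]
    alpha: how strongly relevance amplifies a token's loss weight (assumed form)
    """
    # Visually relevant tokens get a larger weight, so their gradient
    # contribution is amplified relative to style-only tokens.
    weights = 1.0 + alpha * relevance_scores
    per_token_loss = -token_log_probs  # standard cross-entropy terms
    # Normalize by the total weight so alpha rescales emphasis, not magnitude.
    return float(np.sum(weights * per_token_loss) / np.sum(weights))
```

With all relevance scores at zero this reduces to the ordinary mean token loss; raising the relevance of a poorly predicted, visually grounded token increases its share of the objective, which is the qualitative effect the abstract describes.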