Breaking Language Barriers in Visual Language Models via Multilingual Textual Regularization

📅 2025-03-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses image-induced fidelity loss (IFL) in vision-language models (VLMs), a phenomenon in which models generate English responses even to non-English inputs, largely because multilingual multimodal training data is scarce. The paper proposes a continuous multilingual textual regularization strategy that injects text-only multilingual data during visual instruction tuning, jointly preserving language fidelity and visual understanding and eliminating the usual trade-off between them. Experiments on multilingual visual question answering and image captioning show significant gains in response fidelity across languages, with no degradation in visual performance. Compared to alternatives such as model merging, which improves fidelity at the cost of visual ability, the proposed method achieves superior accuracy and robustness. To the authors' knowledge, this is the first systematic approach to resolving multilingual fidelity issues in VLMs without compromising visual capabilities.

📝 Abstract
Rapid advancements in Visual Language Models (VLMs) have transformed multimodal understanding, yet these models are often constrained to generating English responses regardless of the input language. This phenomenon has been termed Image-induced Fidelity Loss (IFL) and stems from limited multimodal multilingual training data. To address this, we propose a continuous multilingual integration strategy that injects text-only multilingual data during visual instruction tuning, preserving the language model's original multilingual capabilities. Extensive evaluations demonstrate that our approach significantly improves linguistic fidelity across languages without degradation in visual performance. We also explore model merging, which improves language fidelity but comes at the cost of visual performance. In contrast, our core method achieves robust multilingual alignment without trade-offs, offering a scalable and effective path to mitigating IFL for global VLM adoption.
Problem

Research questions and friction points this paper is trying to address.

Overcoming English-only responses in Visual Language Models
Addressing Image-induced Fidelity Loss from limited multilingual data
Improving multilingual alignment without sacrificing visual performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual textual regularization for VLMs
Continuous multilingual integration strategy
Robust multilingual alignment without trade-offs
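The core idea above — injecting text-only multilingual data into the visual instruction-tuning stream — can be sketched as a simple batch-interleaving schedule. This is an illustrative assumption of how such mixing might be implemented; the mixing ratio, sampling scheme, and function names are hypothetical and not taken from the paper.

```python
import random

def mix_batches(multimodal_batches, text_only_batches, text_ratio=0.2, seed=0):
    """Interleave text-only multilingual batches into a multimodal
    instruction-tuning stream, inserting a text-only batch after a
    multimodal one with probability `text_ratio`.

    The 0.2 ratio and the Bernoulli sampling scheme are illustrative
    assumptions, not values reported in the paper.
    """
    rng = random.Random(seed)
    text_iter = iter(text_only_batches)
    schedule = []
    for batch in multimodal_batches:
        schedule.append(("multimodal", batch))
        if rng.random() < text_ratio:
            try:
                # Text-only batch acts as a regularizer that keeps the
                # LM's multilingual generation ability from drifting.
                schedule.append(("text_only", next(text_iter)))
            except StopIteration:
                break  # ran out of text-only data; continue multimodal-only
    return schedule

# Hypothetical usage: 100 multimodal batches, multilingual text batches mixed in.
schedule = mix_batches(
    [f"mm_{i}" for i in range(100)],
    [f"txt_{i}" for i in range(100)],
    text_ratio=0.2,
)
text_count = sum(1 for kind, _ in schedule if kind == "text_only")
```

In practice such a schedule would feed a single optimizer: multimodal batches update the full VLM, while text-only batches exercise only the language pathway, which is what lets fidelity improve without sacrificing visual performance.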