uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multilingual vision-language models exhibit limited cross-modal retrieval performance for low-resource languages such as Czech, Finnish, Croatian, Hungarian, and Romanian, owing to the scarcity of high-quality image–text pairs in these languages. To address this, we propose a lightweight, data-efficient language-extension method: leveraging frozen English representations as semantic anchors, we train only a 1.7M-parameter cross-lingual projection module, enabling, for the first time, alignment without any paired multilingual image–text data. The approach operates within a contrastive learning framework while keeping both the multilingual text encoder and the image encoder frozen. Extensive evaluation on multiple multilingual retrieval benchmarks demonstrates substantial improvements in cross-modal retrieval across all five target languages. The method is effective, generalizes across diverse low-resource settings, and is deployment-friendly thanks to its minimal parameter overhead and reliance on frozen pretrained components.
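To make the parameter budget concrete, below is a minimal PyTorch sketch of what a roughly 1.7M-parameter cross-lingual projection could look like. The architecture, layer sizes, and names (`CrossLingualProjection`, the 1280-wide hidden layer) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class CrossLingualProjection(nn.Module):
    """Maps frozen multilingual text features into the frozen CLIP
    embedding space. Dimensions are illustrative guesses: a 768-d
    multilingual encoder projected to CLIP's 512-d space through
    one hidden layer."""

    def __init__(self, in_dim: int = 768, hidden_dim: int = 1280, out_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),   # 768*1280 + 1280  ~ 0.98M params
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),  # 1280*512 + 512   ~ 0.66M params
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.net(x)
        return z / z.norm(dim=-1, keepdim=True)  # unit norm for cosine similarity


proj = CrossLingualProjection()
print(sum(p.numel() for p in proj.parameters()))  # 1,640,192, near the paper's 1.7M
```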

📝 Abstract
Contrastive Language-Image Pre-training (CLIP) has demonstrated strong generalization across a wide range of visual tasks by leveraging large-scale English image-text pairs. However, its extension to low-resource languages remains limited due to the scarcity of high-quality multilingual image-text data. Existing multilingual vision-language models exhibit consistently low retrieval performance in underrepresented languages, including Czech, Finnish, Croatian, Hungarian, and Romanian, on the Crossmodal-3600 (XM3600) benchmark. To address this, we propose a lightweight and data-efficient framework for multilingual vision-language alignment. Our approach requires neither image-text nor text-text pairs and freezes both the pretrained image encoder and the multilingual text encoder during training. Only a compact 1.7M-parameter projection module is trained, using a contrastive loss with English representations as semantic anchors. This minimal training setup enables robust multilingual alignment even for languages with limited supervision. Extensive evaluation across multiple multilingual retrieval benchmarks confirms the effectiveness of our method, showing significant gains in five underrepresented languages where existing models typically underperform. These findings highlight the value of our pivot-based, parameter-efficient alignment strategy for inclusive multimodal learning.
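As a sketch of the training objective, the snippet below implements a symmetric InfoNCE loss between projected multilingual embeddings and frozen English anchor embeddings. It assumes, purely for illustration, that each row of the two batches is semantically matched; how uCLIP derives this anchor supervision without image-text or text-text pairs is the paper's contribution and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def anchor_contrastive_loss(projected: torch.Tensor,
                            anchors: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between projected multilingual embeddings and
    frozen English anchor embeddings. Both inputs are (batch, dim) and
    assumed unit-normalized; row i of each is treated as a positive pair."""
    logits = projected @ anchors.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Contrast in both directions; the anchors carry no gradient since
    # the English encoder stays frozen and only the projection trains.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))


# Toy check with random stand-ins for frozen encoder outputs.
projected = F.normalize(torch.randn(32, 512), dim=-1)
anchors = F.normalize(torch.randn(32, 512), dim=-1).detach()
print(anchor_contrastive_loss(projected, anchors))
```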
Problem

Research questions and friction points this paper is trying to address.

Extending vision-language models to low-resource languages without paired data
Addressing poor multilingual retrieval performance in underrepresented languages
Developing parameter-efficient alignment with minimal training requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient multilingual extension of vision-language models
Uses unpaired data without image-text or text-text pairs
Trains a compact projection module with a contrastive loss (see the retrieval sketch below)
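Because only the projection sits between a frozen multilingual text encoder and a frozen CLIP image encoder, deployment reduces to one extra projection at query time. Below is a hypothetical retrieval helper reusing the CrossLingualProjection sketch above; the embedding dimensions and the random features are placeholders, not the authors' pipeline.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve_top_k(image_embs: torch.Tensor,
                   query_emb: torch.Tensor,
                   proj: torch.nn.Module,
                   k: int = 5) -> torch.Tensor:
    """Rank frozen CLIP image embeddings against one non-English query.
    image_embs: (N, 512) unit-normalized CLIP image features.
    query_emb:  (768,) frozen multilingual text feature for the query."""
    q = proj(query_emb.unsqueeze(0))          # project into CLIP space, unit norm
    scores = (image_embs @ q.T).squeeze(-1)   # cosine similarity per image
    return scores.topk(k).indices             # indices of the k best images


# Toy call with random features standing in for real encoder outputs.
images = F.normalize(torch.randn(100, 512), dim=-1)
query = torch.randn(768)
print(retrieve_top_k(images, query, CrossLingualProjection()))
```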
Authors
Dahyun Chung
KAIST
Donghyun Shin
Korea University
Yujin Sung
Korea University
Seunggi Moon
Korea University
Jinwoo Jeon
Korea University
Byung-Jun Lee
Korea University
Machine Learning