ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
CLIP’s text encoder is constrained by a 77-token input limit and lacks multilingual support, hindering fine-grained semantic understanding and cross-lingual cross-modal alignment. To address this, ProCLIP replaces CLIP’s original text encoder with an LLM-based text embedder, trained within a progressive alignment framework that combines curriculum learning, knowledge distillation, and self-distillation regularization to preserve the pretrained image encoder’s knowledge. Instance-level semantic alignment and embedding-structure alignment losses further enhance consistency between image and text representations. Rather than freezing the image encoder, the framework tunes it under self-distillation regularization so that its pretrained knowledge is retained. Extensive experiments demonstrate substantial improvements in long-text and multilingual image–text retrieval across Flickr30K, MS-COCO Multilingual, and long-caption benchmarks, achieving new state-of-the-art performance. The approach exhibits strong generalization and deeper semantic comprehension while maintaining architectural compatibility with existing vision-language models.

📝 Abstract
The original CLIP text encoder is limited to a maximum input length of 77 tokens, which hampers its ability to process long texts and perform fine-grained semantic understanding. In addition, the CLIP text encoder lacks support for multilingual inputs. These limitations significantly restrict its applicability across a broader range of tasks. Recent studies have attempted to replace the CLIP text encoder with an LLM-based embedder to improve long-text processing, multilingual understanding, and fine-grained semantic comprehension. However, because the representation space of LLMs and the vision-language space of CLIP are pretrained independently, without alignment priors, direct alignment using contrastive learning can disrupt the intrinsic vision-language alignment in the CLIP image encoder, leading to underutilization of the knowledge acquired during pre-training. To address this challenge, we propose ProCLIP, a curriculum-learning-based progressive vision-language alignment framework that effectively aligns the CLIP image encoder with an LLM-based embedder. Specifically, ProCLIP first distills knowledge from CLIP's text encoder into the LLM-based embedder to leverage CLIP's rich pretrained knowledge while establishing an initial alignment between the LLM embedder and the CLIP image encoder. Subsequently, ProCLIP further aligns the CLIP image encoder with the LLM-based embedder through image-text contrastive tuning, employing self-distillation regularization to avoid overfitting. To achieve a more effective alignment, an instance semantic alignment loss and an embedding structure alignment loss are employed during representation inheritance and contrastive tuning. Code is available at https://github.com/VisionXLab/ProCLIP
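The two auxiliary objectives named in the abstract can be illustrated with a minimal NumPy sketch. This is an assumed formulation, not the paper's exact losses: instance semantic alignment is written here as a mean cosine distance between paired student and teacher embeddings, and embedding structure alignment as a penalty on the difference between the two batches' pairwise cosine-similarity matrices.

```python
import numpy as np

def instance_semantic_alignment_loss(student, teacher):
    """Mean cosine distance between paired embeddings (assumed form).

    Pulls each student embedding toward its teacher counterpart,
    one instance at a time."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))

def embedding_structure_alignment_loss(student, teacher):
    """Match pairwise similarity matrices (assumed form).

    Penalizes differences between the student's and teacher's
    within-batch cosine-similarity structure, so relational
    geometry is inherited along with individual embeddings."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    return float(np.mean((s @ s.T - t @ t.T) ** 2))
```

Both losses vanish when the student reproduces the teacher's embeddings exactly; the structural term can remain low even when individual embeddings drift, since it only constrains relative geometry.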
Problem

Research questions and friction points this paper is trying to address.

CLIP's text encoder is capped at 77 input tokens, limiting long-text processing
CLIP's text encoder cannot process multilingual input
Directly aligning an LLM embedder with CLIP via contrastive learning disrupts CLIP's pretrained vision-language knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive alignment framework based on curriculum learning
Knowledge distillation from CLIP's text encoder into the LLM-based embedder
Self-distillation regularization to prevent overfitting during contrastive tuning
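The contrastive-tuning stage behind these contributions uses the standard symmetric image–text contrastive (InfoNCE) objective. A minimal NumPy sketch follows; the temperature value and function name are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; each
    row (and column) is treated as a classification over the batch."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (B, B) similarity logits
    labels = np.arange(logits.shape[0])          # i-th image matches i-th text

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(log_probs[labels, labels])

    # Average the image-to-text and text-to-image directions.
    return float(0.5 * (cross_entropy(logits) + cross_entropy(logits.T)))
```

Correctly paired embeddings yield a much lower loss than shuffled pairs; the self-distillation regularizer described above would add a separate term keeping the tuned image encoder close to its frozen pretrained copy.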