🤖 AI Summary
This work addresses the tendency of CLIP models to over-rely on the initial sentence of the captions in image–text pairs during training, which leads to insufficient attention to semantic content in the latter portions of long texts and consequently impairs fine-grained cross-modal alignment. To mitigate this first-sentence bias without introducing additional parameters or altering the model architecture, the authors propose DeBias-CLIP, which removes the leading summary sentence and combines sentence subsampling with text token padding to encourage uniform attention across the entire text. This approach yields a more balanced cross-modal attention distribution. Experimental results demonstrate that DeBias-CLIP achieves state-of-the-art performance on long-text retrieval benchmarks, improves short-text retrieval accuracy, and exhibits enhanced robustness to sentence-order perturbations.
📝 Abstract
CLIP models learn transferable multi-modal features via image–text contrastive learning on internet-scale data. They are widely used in zero-shot classification, multi-modal retrieval, text-to-image diffusion, and as image encoders in large vision-language models. However, CLIP's pretraining is dominated by images paired with short captions, biasing the model toward encoding simple descriptions of salient objects and leading to coarse alignment on complex scenes and dense descriptions. While recent work mitigates this by fine-tuning on small-scale long-caption datasets, we identify an important common bias: both human- and LLM-generated long captions typically begin with a one-sentence summary followed by a detailed description. We show that this acts as a shortcut during training, concentrating attention on the opening sentence and early tokens and weakening alignment over the rest of the caption. To resolve this, we introduce DeBias-CLIP, which removes the summary sentence during training and applies sentence subsampling and text token padding to distribute supervision across all token positions. DeBias-CLIP achieves state-of-the-art long-text retrieval, improves short-text retrieval, and is less sensitive to sentence order permutations. It is a drop-in replacement for Long-CLIP with no additional trainable parameters.
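The three preprocessing steps the abstract names (dropping the leading summary sentence, sentence subsampling, and text token padding) can be sketched roughly as the caption-side augmentation below. This is a minimal illustration, not the authors' implementation: the keep probability, the whitespace "tokenizer", the pad token, and the target length are all placeholder assumptions.

```python
import random

def debias_caption(caption, max_tokens=64, pad_token="[PAD]", keep_prob=0.75, rng=random):
    """Hypothetical sketch of DeBias-CLIP-style caption preprocessing.

    1. Drop the leading summary sentence so it cannot act as a shortcut.
    2. Subsample the remaining sentences (order preserved) so supervision
       is not tied to fixed sentence positions.
    3. Pad the token sequence to a fixed length so the contrastive loss
       sees all token positions uniformly.
    """
    sentences = [s.strip() for s in caption.split(".") if s.strip()]
    # Step 1: remove the one-sentence summary (keep it if it is all we have).
    body = sentences[1:] if len(sentences) > 1 else sentences
    # Step 2: sentence subsampling; fall back to the full body if all dropped.
    kept = [s for s in body if rng.random() < keep_prob] or body
    # Step 3: tokenize (whitespace split as a stand-in), truncate, and pad.
    tokens = " ".join(kept).split()[:max_tokens]
    tokens += [pad_token] * (max_tokens - len(tokens))
    return tokens
```

A real pipeline would use the model's own tokenizer and pad token id, and would apply this augmentation independently per training step so different subsets of sentences supervise different iterations.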