🤖 AI Summary
To address a key limitation of vision-language models such as CLIP, whose fixed absolute positional encoding caps text inputs at 77 tokens and hinders the modeling of long descriptions, this paper proposes TULIP, a scalable text encoder that supports inputs of arbitrary length. The core method combines relative positional encoding with knowledge distillation: a student text encoder equipped with relative position encodings is trained to mimic the semantic representations of the original CLIP text encoder (the teacher), followed by cross-modal alignment fine-tuning to strengthen image–text matching. This design makes the encoder length-agnostic and broadly compatible with CLIP-like models. Extensive experiments show significant improvements over the original CLIP and several long-text baselines on both image–text retrieval and text-to-image generation, with the model effectively handling descriptions exceeding one hundred words. The implementation is publicly available.
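The two-stage recipe summarized above can be sketched in code: a student encoder that uses a relative attention bias (so no fixed maximum length is baked in) is first distilled against frozen teacher embeddings. This is a toy illustration under explicit assumptions: the tiny encoder, the clipped relative-bias scheme, the dimensions, and the MSE distillation loss are placeholders, not TULIP's actual architecture or objectives.

```python
import torch
import torch.nn as nn

class TinyTextEncoder(nn.Module):
    """Toy student encoder with NO absolute position embeddings: a learned
    bias indexed by clipped relative distance is added to attention scores,
    so the module accepts sequences of any length."""
    def __init__(self, vocab=1000, dim=64, heads=4, max_rel=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.rel_bias = nn.Embedding(2 * max_rel + 1, 1)  # one bias per relative offset
        self.max_rel = max_rel
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):
        x = self.embed(tokens)
        n = tokens.size(1)
        pos = torch.arange(n)
        # clipped relative distances -> (n, n) additive attention bias
        dist = (pos[None, :] - pos[:, None]).clamp(-self.max_rel, self.max_rel)
        bias = self.rel_bias(dist + self.max_rel).squeeze(-1)
        x, _ = self.attn(x, x, x, attn_mask=bias)
        return self.proj(x.mean(dim=1))  # pooled sentence embedding

student = TinyTextEncoder(dim=64)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

tokens = torch.randint(0, 1000, (8, 77))       # a batch of tokenized captions
with torch.no_grad():
    teacher_emb = torch.randn(8, 64)           # stand-in for frozen CLIP teacher outputs

# Stage (i): distillation -- pull student embeddings toward the teacher's.
# (Stage (ii), cross-modal alignment fine-tuning, would follow analogously
# with an image-text contrastive loss.)
student_emb = student(tokens)
loss = nn.functional.mse_loss(student_emb, teacher_emb)
loss.backward()
opt.step()
```

Because the bias depends only on (clipped) token-to-token offsets, the same trained module can later be run on captions far longer than the 77 tokens seen here.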
📝 Abstract
We address the challenge of representing long captions in vision-language models such as CLIP. By design, these models are limited by fixed absolute positional encodings, which restrict inputs to a maximum of 77 tokens and hinder performance on tasks requiring longer descriptions. Although recent work has attempted to overcome this limit, the proposed approaches struggle to model token relationships over longer distances and simply extend to a new, fixed token length. Instead, we propose a generalizable method, named TULIP, that can extend the supported token length of CLIP-like models to any length. We do so by improving the architecture with relative position encodings, followed by a training procedure that (i) distills the original CLIP text encoder into an encoder with relative position encodings and (ii) enhances the model for aligning longer captions with images. By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation. The code repository is available at https://github.com/ivonajdenkoska/tulip.
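The abstract's central architectural point is that relative position encodings have no hard-wired maximum sequence length. One concrete scheme with this property is the rotary embedding (RoPE); whether TULIP uses this exact variant is an assumption made purely for illustration. The minimal sketch below shows why it is length-agnostic: the dot product between a rotated query and a rotated key depends only on their relative offset, so the same function handles 77 or 300 tokens without resizing anything.

```python
import torch

def rotary_embed(x, base=10000.0):
    """Apply rotary position embeddings to a (batch, seq, dim) tensor.
    Each feature pair is rotated by a position-dependent angle; attention
    scores between rotated vectors depend only on the position offset,
    so no maximum sequence length is baked into the model."""
    b, n, d = x.shape
    half = d // 2
    # per-pair rotation frequencies, geometrically spaced
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(n, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# The same function encodes the CLIP default length and a much longer caption:
q_short = rotary_embed(torch.randn(1, 77, 64))
q_long = rotary_embed(torch.randn(1, 300, 64))
```

The relative-only dependence can be checked directly: rotating a fixed query at position 2 and a fixed key at position 7 gives the same score as positions 5 and 10, since both pairs are 5 tokens apart.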