🤖 AI Summary
Existing image-text contrastive models (e.g., CLIP, SigLIP) prioritize high-level semantic alignment at the expense of fine-grained visual fidelity, limiting their performance on vision-centric tasks such as counting and depth estimation; conversely, purely visual models lack robust language grounding for language-driven applications. To address this trade-off, we propose a unified multi-granularity pretraining framework that jointly optimizes generative data augmentation, cross-modal reconstruction regularization, and intra-modal (image-image and text-text) contrastive learning. Implemented within a scalable billion-parameter architecture, our method simultaneously enhances visual understanding precision and cross-modal semantic alignment. Experiments demonstrate new state-of-the-art zero-shot classification accuracy on ImageNet-1K; a twofold improvement in linear probe accuracy on RxRx1 over SigLIP; and more than a threefold gain in cross-modal evaluation scores on MMVP.
📝 Abstract
Despite the recent success of image-text contrastive models like CLIP and SigLIP, these models often struggle with vision-centric tasks that demand high-fidelity image understanding, such as counting, depth estimation, and fine-grained object recognition. By optimizing for language alignment, these models tend to prioritize high-level semantics over fine-grained visual understanding. On the other hand, vision-focused models excel at processing visual information but struggle to understand language, limiting their flexibility for language-driven tasks. In this work, we introduce TULIP, an open-source, drop-in replacement for existing CLIP-like models. Our method leverages generative data augmentation, enhanced image-image and text-text contrastive learning, and image/text reconstruction regularization to learn fine-grained visual features while preserving global semantic alignment. Our approach, scaling to over 1B parameters, outperforms existing state-of-the-art (SOTA) models across multiple benchmarks, establishing a new SOTA zero-shot performance on ImageNet-1K, delivering up to a $2\times$ improvement over SigLIP on RxRx1 in linear probing for few-shot classification, and improving vision-language models, achieving over $3\times$ higher scores than SigLIP on MMVP. Our code/checkpoints are available at https://tulip-berkeley.github.io
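To make the combination of objectives concrete, here is a minimal NumPy sketch of how the losses named in the abstract (image-text alignment, intra-modal image-image and text-text contrastive terms, and a reconstruction regularizer) could be combined. The function names, weights, and the use of a symmetric InfoNCE loss are illustrative assumptions, not the paper's actual implementation; see the linked code release for the real objective.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between matched rows of a and b (both N x D).

    Rows are L2-normalized; the i-th row of `a` is treated as the positive
    for the i-th row of `b`, all other rows as negatives.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    # cross-entropy with targets on the diagonal, in both directions
    log_sm_ab = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_ba = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -0.5 * (np.mean(np.diag(log_sm_ab)) + np.mean(np.diag(log_sm_ba)))

def multi_objective_loss(img, txt, img_aug, txt_para, recon, target,
                         w_ii=1.0, w_tt=1.0, w_rec=0.1):
    """Hypothetical weighted sum of the objectives the abstract lists.

    img, txt       : embeddings of paired images and captions (N x D)
    img_aug        : embeddings of augmented/generated image views (N x D)
    txt_para       : embeddings of paraphrased captions (N x D)
    recon, target  : reconstructed and original inputs for the
                     reconstruction regularizer (any matching shape)
    The weights w_ii, w_tt, w_rec are made-up placeholders.
    """
    l_it = info_nce(img, txt)        # cross-modal image-text alignment
    l_ii = info_nce(img, img_aug)    # intra-modal image-image contrast
    l_tt = info_nce(txt, txt_para)   # intra-modal text-text contrast
    l_rec = np.mean((recon - target) ** 2)  # reconstruction (MSE) term
    return l_it + w_ii * l_ii + w_tt * l_tt + w_rec * l_rec
```

Each InfoNCE term is non-negative (it is a cross-entropy), so the combined loss is bounded below by zero and can be minimized jointly with standard gradient descent.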