🤖 AI Summary
CLIP excels at global image-text alignment but underperforms on fine-grained region-phrase matching. To address this, we propose a multi-granularity text-guided contrastive learning framework that enables hierarchical alignment, from whole-image-to-full-caption down to local-region-to-phrase. Our core contribution is the β-Contextualized Contrastive Alignment Loss (β-CAL), which dynamically balances strict query-level matching against context-aware relaxed alignment, and supports both soft cross-entropy and hard binary cross-entropy formulations to mitigate semantic overlap across granularity levels. The method integrates cross-attention-driven dynamic image patch pooling with multi-granularity text encoding (captions, sentences, phrases), enabling end-to-end training without hard negative sampling. On Urban1K, our approach achieves 91.8% (T2I) and 92.3% (I2T) Recall@1; on FG-OVD (Hard), it attains 30.9%, a new state of the art among methods that avoid hard negatives, significantly advancing dense vision-language alignment.
📝 Abstract
CLIP achieves strong zero-shot image-text retrieval by aligning global vision and text representations, yet it falls short on fine-grained tasks even when fine-tuned on long, detailed captions. In this work, we propose $β$-CLIP, a multi-granular text-conditioned contrastive learning framework designed to achieve hierarchical alignment between multiple textual granularities (from full captions to sentences and phrases) and their corresponding visual regions. For each level of granularity, $β$-CLIP uses cross-attention to dynamically pool image patches, producing contextualized visual embeddings. To address the semantic overlap inherent in this hierarchy, we introduce the $β$-Contextualized Contrastive Alignment Loss ($β$-CAL). This objective parameterizes the trade-off between strict query-specific matching and relaxed intra-image contextualization, supporting both soft Cross-Entropy and hard Binary Cross-Entropy formulations. Through extensive experiments, we demonstrate that $β$-CLIP significantly improves dense alignment, achieving 91.8% T2I and 92.3% I2T Recall@1 on Urban1K and 30.9% on FG-OVD (Hard), setting a new state of the art among methods trained without hard negatives. $β$-CLIP establishes a robust, adaptive baseline for dense vision-language correspondence. The code and models are released at https://github.com/fzohra/B-CLIP.