🤖 AI Summary
Existing vision-language models struggle to align long texts with images at a fine-grained level and lack hierarchical modeling that captures both global context and local details. To address this, the work proposes CAFT, a framework that jointly learns hierarchical cross-modal alignments between image regions and text sentences, without pixel- or region-level annotations, by coupling a fine-to-coarse visual encoder with a hierarchical text Transformer. A hierarchical alignment loss matches whole images with whole captions while encouraging region-sentence correspondences, preserving overall semantic consistency. By coupling linguistic structure with visual organization, CAFT, trained on 30 million image–text pairs, achieves state-of-the-art performance on six long-text retrieval benchmarks and demonstrates strong scalability.
📝 Abstract
Large vision-language models such as CLIP struggle with long captions because they align images and texts as undifferentiated wholes. Fine-grained vision-language understanding requires hierarchical semantics capturing both global context and localized details across visual and textual domains. Yet linguistic hierarchies from syntax or semantics rarely match visual organization, and purely visual hierarchies tend to fragment scenes into appearance-driven parts without semantic focus. We propose CAFT (Cross-domain Alignment of Forests and Trees), a hierarchical image-text representation learning framework that aligns global and local semantics across images and long captions without pixel-level supervision. Coupling a fine-to-coarse visual encoder with a hierarchical text transformer, it uses a hierarchical alignment loss that matches whole images with whole captions while biasing region-sentence correspondences, so that coarse semantics are built from fine-grained evidence rather than from aggregation untethered to part-level grounding. Trained on 30M image-text pairs, CAFT achieves state-of-the-art performance on six long-text retrieval benchmarks and exhibits strong scaling behavior. Experiments show that hierarchical cross-domain alignment enables fine-grained, visually grounded image-text representations to emerge without explicit region-level supervision.
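The abstract does not give the exact form of the hierarchical alignment loss, but its description — a global image-caption match plus a bias toward region-sentence correspondences, with no region annotations — suggests a two-level objective. The sketch below is a hypothetical implementation under that reading: a CLIP-style InfoNCE term at the global level, plus a soft max-over-regions term that pulls each sentence toward its best-matching image region. All function and parameter names (`hierarchical_alignment_loss`, `region_weight`, `temperature`) are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn.functional as F


def hierarchical_alignment_loss(img_global, txt_global, region_feats, sent_feats,
                                temperature=0.07, region_weight=0.5):
    """Hypothetical two-level alignment loss (not the paper's exact objective).

    img_global:   (B, D)    whole-image embeddings
    txt_global:   (B, D)    whole-caption embeddings
    region_feats: (B, R, D) region embeddings per image
    sent_feats:   (B, S, D) sentence embeddings per caption
    """
    # Global level: symmetric CLIP-style InfoNCE over the batch,
    # matching whole images with whole captions.
    img_g = F.normalize(img_global, dim=-1)
    txt_g = F.normalize(txt_global, dim=-1)
    logits = img_g @ txt_g.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    global_loss = 0.5 * (F.cross_entropy(logits, labels)
                         + F.cross_entropy(logits.t(), labels))

    # Local level: without region annotations, softly bias each sentence
    # toward its best-matching region (max over regions), so coarse
    # semantics stay tethered to fine-grained evidence.
    regions = F.normalize(region_feats, dim=-1)           # (B, R, D)
    sents = F.normalize(sent_feats, dim=-1)               # (B, S, D)
    sim = torch.einsum('brd,bsd->brs', regions, sents)    # (B, R, S)
    best = sim.max(dim=1).values                          # (B, S)
    local_loss = (1.0 - best).mean()                      # in [0, 2]

    return global_loss + region_weight * local_loss
```

Because the local term uses a max over regions rather than hard region labels, gradients flow only through whichever region currently best explains each sentence — one plausible way to obtain region-level grounding without pixel- or region-level supervision.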