🤖 AI Summary
Existing contrastive vision-language models (e.g., CLIP) treat text as flat token sequences, failing to capture semantic hierarchy and monotonicity, which limits cross-modal alignment for long or compositional descriptions. To address this, we propose HiDe, a hierarchical decomposition module, and MoLo, a monotonicity-aware contrastive loss. Without modifying encoder architectures, our approach is the first to enable batch-aware, multi-granularity semantic alignment and to model alignment strength as a function of textual completeness. Built upon CLIP, HiDe implicitly extracts hierarchical semantics via in-batch PCA, while MoLo enforces monotonic alignment constraints. We further introduce a global–component joint alignment strategy to enhance fine-grained correspondence. Extensive experiments on multiple image–text retrieval benchmarks demonstrate significant improvements over strong baselines, especially under long-text and complex-description scenarios. Ablations confirm that explicitly modeling semantic hierarchy and monotonicity substantially enhances vision–language alignment.
📝 Abstract
Contrastive vision-language models like CLIP have achieved impressive results in image-text retrieval by aligning image and text representations in a shared embedding space. However, these models often treat text as flat sequences, limiting their ability to handle complex, compositional, and long-form descriptions. In particular, they fail to capture two essential properties of language: semantic hierarchy, which reflects the multi-level compositional structure of text, and semantic monotonicity, where richer descriptions should result in stronger alignment with visual content. To address these limitations, we propose HiMo-CLIP, a representation-level framework that enhances CLIP-style models without modifying the encoder architecture. HiMo-CLIP introduces two key components: a hierarchical decomposition (HiDe) module that extracts latent semantic components from long-form text via in-batch PCA, enabling flexible, batch-aware alignment across different semantic granularities, and a monotonicity-aware contrastive loss (MoLo) that jointly aligns global and component-level representations, encouraging the model to internalize semantic ordering and alignment strength as a function of textual completeness. These components work in concert to produce structured, cognitively aligned cross-modal representations. Experiments on multiple image-text retrieval benchmarks show that HiMo-CLIP consistently outperforms strong baselines, particularly under long or compositional descriptions. The code is available at https://github.com/UnicomAI/HiMo-CLIP.
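To make the two ingredients more concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: an in-batch PCA decomposition of text embeddings (the HiDe idea) and a contrastive loss with a monotonicity hinge that discourages any single component from aligning with the image more strongly than the full caption (the MoLo idea). This is an illustrative sketch only; the function names, the number of components `k`, the temperature, and the hinge margin are all assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hide_components(text_emb: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Toy HiDe: project each text embedding onto the top-k principal
    directions of the current batch, giving k component-level vectors
    per caption. (Hypothetical sketch, not the paper's exact module.)"""
    centered = text_emb - text_emb.mean(dim=0, keepdim=True)
    # SVD of the centered batch matrix; rows of vh are principal axes.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    dirs = vh[:k]                                # (k, d) principal axes
    coeffs = text_emb @ dirs.T                   # (B, k) projections
    # Component c of caption i = its projection onto direction c.
    comps = coeffs.unsqueeze(-1) * dirs.unsqueeze(0)   # (B, k, d)
    return F.normalize(comps, dim=-1)

def molo_loss(img_emb, text_emb, comps, tau=0.07, margin=0.05):
    """Toy MoLo: symmetric InfoNCE on the global image-text pair, plus
    a hinge term encouraging the full caption to align with its image
    at least as strongly as any single component (monotonicity).
    The margin and equal weighting are assumptions."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.T / tau
    labels = torch.arange(img.size(0), device=img.device)
    nce = 0.5 * (F.cross_entropy(logits, labels) +
                 F.cross_entropy(logits.T, labels))
    global_sim = (img * txt).sum(-1)                   # (B,)
    comp_sim = torch.einsum('bd,bkd->bk', img, comps)  # (B, k)
    mono = F.relu(comp_sim - global_sim.unsqueeze(1) + margin).mean()
    return nce + mono
```

In this reading, richer (more complete) captions raise `global_sim`, which slackens the hinge, so alignment strength grows monotonically with textual completeness; the InfoNCE term is unchanged from standard CLIP training.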