AI Summary
This work addresses the weak cross-modal alignment that vision-language models exhibit on fine-grained, long-text descriptions. To mitigate this limitation, the authors propose a structure-aware alignment paradigm that uses image edge maps as structural proxies and filters captions into structure-centric textual representations. Within a contrastive learning framework, the method explicitly models structural cues in both modalities and jointly optimizes visual-linguistic structural consistency through multi-granularity structure alignment losses and mutual information maximization. Extensive experiments show that the proposed approach significantly outperforms existing methods on both general and domain-specific cross-modal retrieval benchmarks. The authors also release their pretrained models and source code to facilitate further research.
Abstract
Edge-based representations are fundamental cues for visual understanding, a principle rooted in early vision research and still central today. We extend this principle to vision-language alignment, showing that isolating and aligning structural cues across modalities can greatly benefit fine-tuning on long, detail-rich captions, with a specific focus on improving cross-modal retrieval. We introduce StructXLIP, a fine-tuning alignment paradigm that extracts edge maps (e.g., Canny), treats them as proxies for the visual structure of an image, and filters the corresponding captions to emphasize structural cues, making them "structure-centric". Fine-tuning augments the standard alignment loss with three structure-centric losses: (i) aligning edge maps with structural text, (ii) matching local edge regions to textual chunks, and (iii) connecting edge maps to color images to prevent representation drift. From a theoretical standpoint, while standard CLIP maximizes the mutual information between visual and textual embeddings, StructXLIP additionally maximizes the mutual information between multimodal structural representations. This auxiliary objective is intrinsically harder to optimize, guiding the model toward more robust and semantically stable minima and thereby enhancing vision-language alignment. Beyond outperforming current competitors on cross-modal retrieval in both general and specialized domains, our method serves as a general boosting recipe that can be integrated into future approaches in a plug-and-play manner. Code and pretrained models are publicly available at: https://github.com/intelligolabs/StructXLIP.
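To make the objective described above concrete, the sketch below shows one way the three structure-centric terms could be combined with the standard CLIP alignment loss during fine-tuning. It is a minimal PyTorch-style illustration under our own naming assumptions: the encoders, the `clip_contrastive_loss` helper, the loss weights, and the batch fields are hypothetical and do not reflect the released implementation (see the repository above for the actual code).

```python
# Minimal sketch of a StructXLIP-like fine-tuning objective.
# All names (encoders, weights, batch fields) are illustrative assumptions,
# not the authors' released code.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def structxlip_loss(image_enc, text_enc, batch, w=(1.0, 0.5, 0.5, 0.5)):
    # Standard image-caption alignment, as in ordinary CLIP fine-tuning.
    img_emb = image_enc(batch["images"])                  # RGB images
    txt_emb = text_enc(batch["captions"])                 # full long captions
    loss_align = clip_contrastive_loss(img_emb, txt_emb)

    # (i) Edge maps (e.g., Canny) aligned with structure-centric text.
    edge_emb = image_enc(batch["edge_maps"])
    struct_txt_emb = text_enc(batch["structural_captions"])
    loss_struct = clip_contrastive_loss(edge_emb, struct_txt_emb)

    # (ii) Local edge regions matched to textual chunks.
    region_emb = image_enc(batch["edge_regions"])         # cropped edge patches
    chunk_emb = text_enc(batch["caption_chunks"])
    loss_local = clip_contrastive_loss(region_emb, chunk_emb)

    # (iii) Edge maps tied to their color images to limit representation drift.
    loss_drift = clip_contrastive_loss(edge_emb, img_emb)

    w0, w1, w2, w3 = w
    return (w0 * loss_align + w1 * loss_struct +
            w2 * loss_local + w3 * loss_drift)
```

Since each term is an InfoNCE-style contrastive loss, each is a lower-bound surrogate for the mutual information between its two input streams; adding the structural terms is what the abstract refers to as additionally maximizing mutual information between multimodal structural representations.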