AI Summary
In cross-modal image-text retrieval, single-vector embeddings struggle to capture the diversity of semantic associations, while existing multi-embedding set-based methods suffer from supervision sparsity and set collapse. This paper proposes a set-based multi-vector representation framework. Its core contributions are threefold: (1) a Maximal Pair Assignment Similarity mechanism, implemented via Sinkhorn iterations, that enforces one-to-one optimal matching between embedding sets; (2) two complementary objectives, a Global Discriminative Loss and an Intra-Set Divergence Loss, that jointly mitigate supervision sparsity and set collapse; and (3) integration of contrastive learning with multi-granularity losses. Evaluated on MS-COCO and Flickr30K, the method achieves state-of-the-art performance without relying on external data or pre-trained models.
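The Sinkhorn iterations mentioned above can be illustrated with a minimal sketch. The idea is to turn a pairwise cost matrix between an image embedding set and a text embedding set into a (soft) doubly stochastic assignment by alternating row and column normalizations. All names, dimensions, and the toy data below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Sinkhorn iterations with uniform marginals: convert a cost matrix
    into an approximately doubly stochastic soft-assignment matrix."""
    K = np.exp(-cost / eps)            # Gibbs kernel; eps controls sharpness
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):
        u = 1.0 / (K @ v)              # row scaling
        v = 1.0 / (K.T @ u)            # column scaling
    return np.diag(u) @ K @ np.diag(v)  # transport plan

# Toy example: cost = 1 - cosine similarity between two 3-vector sets
rng = np.random.default_rng(0)
img = rng.normal(size=(3, 8))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(3, 8))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
P = sinkhorn(1.0 - img @ txt.T)        # rows and columns each sum to ~1
```

As `eps` shrinks, the plan `P` concentrates toward a hard one-to-one permutation, which is the sense in which the iterations approximate optimal matching between the two sets.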
Abstract
Cross-modal image-text retrieval is challenging because of the diverse possible associations between content from different modalities. Traditional methods learn a single-vector embedding to represent the semantics of each sample, but struggle to capture the nuanced and diverse relationships that can exist across modalities. Set-based approaches, which represent each sample with multiple embeddings, offer a promising alternative, as they can capture richer and more diverse relationships. In this paper, we show that, despite their promise, these set-based representations continue to face issues including sparse supervision and set collapse, which limit their effectiveness. To address these challenges, we propose Maximal Pair Assignment Similarity to optimize one-to-one matching between embedding sets while preserving semantic diversity within each set. We also introduce two loss functions to further enhance the representations: a Global Discriminative Loss to enhance distinction among embeddings, and an Intra-Set Divergence Loss to prevent collapse within each set. Our method achieves state-of-the-art performance on MS-COCO and Flickr30K without relying on external data.
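The abstract does not define the Intra-Set Divergence Loss, but its stated goal (preventing collapse within a set) suggests a penalty on how similar the embeddings of one set are to each other. The sketch below is a plausible hinge-style formulation under that assumption; the function name, margin value, and formulation are hypothetical, not taken from the paper:

```python
import numpy as np

def intra_set_divergence_loss(embeds, margin=0.5):
    """Hypothetical sketch: penalize pairwise cosine similarity above a
    margin between distinct embeddings of one set, discouraging collapse."""
    e = embeds / np.linalg.norm(embeds, axis=1, keepdims=True)
    sim = e @ e.T                              # pairwise cosine similarities
    off_diag = sim[~np.eye(len(e), dtype=bool)]  # drop self-similarities
    return np.maximum(off_diag - margin, 0.0).mean()

collapsed = np.ones((4, 8))   # four identical embeddings: fully collapsed set
diverse = np.eye(4, 8)        # four orthogonal embeddings: maximally diverse
```

A collapsed set (all pairwise similarities equal to 1) incurs a positive loss, while an orthogonal set incurs none, which matches the intended role of the objective.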