Towards Identity-Aware Cross-Modal Retrieval: a Dataset and a Baseline

๐Ÿ“… 2024-12-30
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the challenge of recognizing long-tailed and fine-grained identities in cross-modal retrieval, this paper proposes Id-CLIP, an identity-aware cross-modal retrieval framework. Methodologically, we introduce COCO-PFSโ€”the first large-scale identity-enhanced datasetโ€”by fusing COCO scene images with deepfake faces from VGGFace2. Building upon CLIP, we design three core components: identity-aware feature alignment, context-aware fine-tuning, and cross-modal contrastive learning, enabling end-to-end optimization for individual identity recognition. Experiments demonstrate that Id-CLIP significantly improves image recall for long-tailed identities and achieves state-of-the-art performance on COCO-PFS. Both code and dataset are publicly released, establishing a reproducible benchmark for personalized audiovisual archive retrieval.

Technology Category

Application Category

๐Ÿ“ Abstract
Recent advancements in deep learning have significantly enhanced content-based retrieval methods, notably through models like CLIP that map images and texts into a shared embedding space. However, these methods often struggle with domain-specific entities and long-tail concepts absent from their training data, particularly in identifying specific individuals. In this paper, we explore the task of identity-aware cross-modal retrieval, which aims to retrieve images of persons in specific contexts based on natural language queries. This task is critical in various scenarios, such as for searching and browsing personalized video collections or large audio-visual archives maintained by national broadcasters. We introduce a novel dataset, COCO Person FaceSwap (COCO-PFS), derived from the widely used COCO dataset and enriched with deepfake-generated faces from VGGFace2. This dataset addresses the lack of large-scale datasets needed for training and evaluating models for this task. Our experiments assess the performance of different CLIP variations repurposed for this task, including our architecture, Identity-aware CLIP (Id-CLIP), which achieves competitive retrieval performance through targeted fine-tuning. Our contributions lay the groundwork for more robust cross-modal retrieval systems capable of recognizing long-tail identities and contextual nuances. Data and code are available at https://github.com/mesnico/IdCLIP.
Problem

Research questions and friction points this paper is trying to address.

Personal Identity Recognition
Natural Language Description
Rare or Domain-Specific Figures
Innovation

Methods, ideas, or system contributions that make the work stand out.

COCO-PFS dataset
Id-CLIP model
cross-modal search
๐Ÿ”Ž Similar Papers
No similar papers found.