Improving Text-based Person Search via Part-level Cross-modal Correspondence

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the large semantic gap between the text and image modalities and the difficulty of fine-grained part discrimination in text-based person search, this paper proposes an encoder-decoder framework that extracts coarse-to-fine embeddings which are semantically aligned across the two modalities without part-level supervision. The core contribution is the commonality-based margin ranking loss, which quantifies how common each body part is across identities and uses that commonality to modulate the ranking margin, allowing fine-grained part details to be learned with only person IDs as supervision. The method couples cross-modal embedding alignment with part-level representation learning, improving part discriminability without manual part annotations. Extensive experiments show state-of-the-art performance on three standard benchmarks: CUHK-PEDES, RSTPReID, and ICFG-PEDES.
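The summary mentions matching text and image embeddings at multiple granularities, from a coarse global vector down to fine part-level vectors. A minimal sketch of how such a coarse-to-fine matching score could be computed is below; the function names, cosine similarity choice, and uniform level weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def coarse_to_fine_score(text_embs, image_embs, weights=None):
    """Hypothetical matching score between a text query and a person image.

    text_embs, image_embs: lists of K embedding vectors, one per
    granularity level (level 0 = coarse/global, later levels = finer
    part-level embeddings). The score is a weighted sum of per-level
    cosine similarities; uniform weights are an assumption here.
    """
    k = len(text_embs)
    if weights is None:
        weights = [1.0 / k] * k
    return sum(w * cosine(t, v) for w, t, v in zip(weights, text_embs, image_embs))
```

At retrieval time, gallery images would be ranked by this score for a given text query; perfectly aligned embeddings at every level yield a score of 1.0.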

📝 Abstract
Text-based person search is the task of finding the person images most relevant to a natural language description given as the query. The main challenge of this task is the large gap between the target images and text queries, which makes it difficult to establish correspondence and to distinguish subtle differences across people. To address this challenge, we introduce an efficient encoder-decoder model that extracts coarse-to-fine embedding vectors which are semantically aligned across the two modalities, without supervision for the alignment. Another challenge is learning to capture fine-grained information with only person IDs as supervision: similar body parts of different individuals are treated as different due to the lack of part-level supervision. To tackle this, we propose a novel ranking loss, dubbed the commonality-based margin ranking loss, which quantifies the degree of commonality of each body part and reflects it while learning fine-grained body part details. As a consequence, our method achieves the best results on three public benchmarks.
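The abstract describes a ranking loss whose margin is modulated by how common each body part is across identities, so that parts that genuinely look alike in different people are not forced apart. A minimal sketch of one way such a commonality-weighted margin could enter a standard hinge-style ranking loss is below; the function signature, the linear margin scaling, and the per-part averaging are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def commonality_margin_ranking_loss(pos_sim, neg_sim, commonality, base_margin=0.2):
    """Illustrative commonality-weighted margin ranking loss.

    pos_sim, neg_sim: per-part similarities for a matching (text, image)
    pair and a non-matching pair, respectively.
    commonality: per-part values in [0, 1]; 1 means the part looks the
    same across identities (e.g. a plain black shirt), 0 means it is
    highly discriminative.
    """
    pos_sim = np.asarray(pos_sim, dtype=float)
    neg_sim = np.asarray(neg_sim, dtype=float)
    commonality = np.asarray(commonality, dtype=float)
    # Common parts get a smaller margin, so the loss does not penalize
    # cross-identity similarity of body parts that genuinely look alike.
    margin = base_margin * (1.0 - commonality)
    # Standard hinge: penalize when the positive pair fails to beat the
    # negative pair by at least the part-specific margin.
    per_part = np.maximum(0.0, margin - (pos_sim - neg_sim))
    return float(per_part.mean())
```

With a small similarity gap of 0.05 between positive and negative pairs, a discriminative part (commonality 0) is penalized, while a highly common part (commonality 0.9) incurs no loss, which is the intended behavior of commonality-aware margins.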
Problem

Research questions and friction points this paper is trying to address.

Visual Search
Person Re-identification
Text-to-Image Retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal Comparison
Ranking Loss Based on Commonality
Body Part Similarity Judgment