Unsupervised Collaborative Metric Learning with Mixed-Scale Groups for General Object Retrieval

📅 2024-03-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses open-vocabulary general object retrieval, proposing MS-UGCML, an unsupervised collaborative metric learning framework for efficient multi-scale object matching and accurate localization. Methodologically, it introduces a mixed-scale grouping mechanism to model cross-scale semantic consistency, leverages SAM to extract object spatial context, and combines contrastive learning, multi-scale feature alignment, and collaborative prototype clustering for robust, annotation-free object-level embedding learning. Key contributions include: (1) a benchmark assembled specifically for general object retrieval, including an open-vocabulary evaluation set; and (2) consistent improvements over existing unsupervised approaches on BelgaLogos, Visual Genome, LVIS, and the newly constructed test set, with object-level and image-level mAP gains of up to 6.69% and 10.03%, respectively.
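
The summary notes that SAM is used to extract class-agnostic object regions before embedding learning. The snippet below is a minimal sketch of that step using the public segment_anything package; the checkpoint file, the minimum-area filter, and the crop-based use of the boxes are illustrative assumptions rather than details taken from the paper.

import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM backbone (the ViT-H checkpoint released with segment_anything).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# The automatic mask generator expects an RGB uint8 array of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
proposals = mask_generator.generate(image)  # list of region dicts

object_crops = []
for p in proposals:
    x, y, w, h = map(int, p["bbox"])  # "bbox" is given in XYWH format
    if w * h < 32 * 32:               # illustrative filter for tiny fragments
        continue
    object_crops.append(image[y:y + h, x:x + w])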

📝 Abstract
The task of searching for visual objects in a large image dataset is difficult because it requires efficient matching and accurate localization of objects that can vary in size. Although the segment anything model (SAM) offers a potential solution for extracting object spatial context, learning embeddings for local objects remains a challenging problem. This paper presents a novel unsupervised deep metric learning approach, termed unsupervised collaborative metric learning with mixed-scale groups (MS-UGCML), devised to learn embeddings for objects of varying scales. Building on this, a challenging benchmark is assembled from the COCO 2017 and VOC 2007 datasets to facilitate the training and evaluation of general object retrieval models. Finally, we conduct comprehensive ablation studies and discuss the difficulties that arise in general object retrieval. Our object retrieval evaluations span a range of datasets, including BelgaLogos, Visual Genome, and LVIS, as well as a challenging evaluation set that we assembled ourselves for open-vocabulary evaluation. These comprehensive evaluations highlight the robustness of our unsupervised MS-UGCML approach, with object-level and image-level mAP improvements of up to 6.69% and 10.03%, respectively. The code is publicly available at https://github.com/dengyuhai/MS-UGCML.
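
The abstract centers on learning embeddings for objects of varying scales via mixed-scale groups. The sketch below shows one plausible way to bucket object crops by scale, interleave the buckets into mixed-scale groups, and train an encoder with a contrastive objective over two augmented views; the ResNet-50 encoder, the area thresholds, and the InfoNCE-style loss are assumptions for illustration, not the exact MS-UGCML objective.

import torch
import torch.nn.functional as F
import torchvision

def scale_bucket(height: int, width: int) -> str:
    """Assign a crop to a COCO-style scale bucket by its pixel area."""
    area = height * width
    if area < 32 * 32:
        return "small"
    if area < 96 * 96:
        return "medium"
    return "large"

def build_mixed_scale_groups(crops, group_size=8):
    """Interleave scale buckets so every group mixes small, medium, and large objects."""
    buckets = {"small": [], "medium": [], "large": []}
    for crop in crops:                       # each crop: tensor of shape (3, H, W)
        buckets[scale_bucket(crop.shape[-2], crop.shape[-1])].append(crop)
    interleaved = []
    while any(buckets.values()):
        for name in ("small", "medium", "large"):
            if buckets[name]:
                interleaved.append(buckets[name].pop(0))
    return [interleaved[i:i + group_size] for i in range(0, len(interleaved), group_size)]

# Encoder: ResNet-50 with the classification head removed (2048-d embeddings).
encoder = torchvision.models.resnet50(weights=None)
encoder.fc = torch.nn.Identity()

def info_nce(view_a, view_b, temperature=0.1):
    """Contrastive loss: two augmented views of the same crop are positives."""
    za = F.normalize(encoder(view_a), dim=1)
    zb = F.normalize(encoder(view_b), dim=1)
    logits = za @ zb.t() / temperature
    targets = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, targets)

In a training loop, each mixed-scale group would form a mini-batch whose two augmented views feed info_nce; keeping small and large objects in the same batch is one way the embedding space could be shaped jointly across scales.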
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual question answering with object-level knowledge retrieval
Learning embeddings for diverse, long-tailed objects at multiple scales
Developing benchmarks for general object retrieval and OK-VQA evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised deep feature embedding for objects
Multi-scale group collaborative embedding learning
Object retrieval benchmark with diverse datasets
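
The benchmark reports object-level and image-level mAP. Below is a small sketch of how average precision could be computed for a single query over a gallery of object embeddings ranked by cosine similarity; the variable names and the 0/1 relevance labels are hypothetical placeholders rather than the benchmark's actual protocol.

import numpy as np

def average_precision(query_emb, gallery_embs, relevant):
    """AP for one query: query_emb (D,), gallery_embs (N, D), relevant (N,) in {0, 1}."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    order = np.argsort(-(g @ q))             # rank gallery by cosine similarity
    hits = relevant[order].astype(float)
    if hits.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / hits.sum())

# Object-level mAP averages this AP over all query objects; an image-level score
# could keep, for each gallery image, only its best-ranked object before scoring.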
Shichao Kan
Central South University
Large Vision Language Model, Deep Metric Learning, Image Retrieval, Object Retrieval
Yuhai Deng
School of Automation, Central South University, 410083 Changsha, Hunan, China
Yixiong Liang
School of Computer Science and Engineering, Central South University, 410083 Changsha, Hunan, China
Lihui Cen
School of Automation, Central South University, 410083 Changsha, Hunan, China
Zhe Qu
California Institute of Technology
Yigang Cen
Institute of Information Science, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China, and also with the Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China
Zhihai He
Southern University of Science and Technology
Deep learning, computer vision, machine learning, smart cyber-physical systems