🤖 AI Summary
This work addresses the challenges of interactive object retrieval in unlabeled multi-object images, where small target regions and high category diversity hinder performance. The authors formulate the task as an iterative binary classification problem, integrating an active learning loop with user feedback to progressively refine the model. Building upon a pretrained Vision Transformer, they systematically investigate strategies for fusing local and global features and conduct an in-depth analysis of key design choices—including target instance selection, annotation format, active sampling criteria, and feature representation. Extensive experiments across multiple multi-object datasets demonstrate the effectiveness of the proposed approach, revealing the trade-off between fine-grained local details and holistic global context, and offering practical guidelines for designing efficient interactive object retrieval systems.
📝 Abstract
Building on existing approaches, we revisit Human-in-the-Loop Object Retrieval, a task that consists of iteratively retrieving images containing objects of a class of interest, specified by a user-provided query. Starting from a large unlabeled image collection, the aim is to rapidly identify diverse instances of an object category relying solely on the initial query and the user's Relevance Feedback, with no prior labels. The retrieval process is formulated as a binary classification task, in which the system learns, through iterative user interaction, to distinguish images that are relevant to the query from those that are not. This interaction is guided by an Active Learning loop: at each iteration, the system selects informative samples for user annotation, thereby refining retrieval performance. The task is particularly challenging in multi-object datasets, where the object of interest may occupy only a small region of the image within a complex, cluttered scene. Unlike object-centered settings, where global descriptors often suffice, multi-object images require better-adapted, localized descriptors. In this work, we formulate the Human-in-the-Loop Object Retrieval task by leveraging pre-trained ViT representations and addressing key design questions: which object instances to consider in an image, what form the annotations should take, how Active Selection should be applied, and which representation strategies best capture the object's features. We compare several representation strategies across multi-object datasets, highlighting trade-offs between capturing the global context and focusing on fine-grained local object details. Our results offer practical insights for the design of effective interactive retrieval pipelines based on Active Learning for object class retrieval.
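The interactive loop described in the abstract (binary classifier over frozen ViT features, active sample selection, user relevance feedback) can be sketched minimally as follows. This is an illustrative toy, not the authors' implementation: the random feature matrix stands in for pre-trained ViT descriptors, the `true_relevant` mask plays the role of the user's feedback, and uncertainty sampling is just one plausible Active Selection criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen ViT image descriptors (hypothetical dimensions).
n, d = 500, 32
features = rng.normal(size=(n, d))
true_relevant = features[:, 0] > 0.8          # hidden ground truth; acts as the "user" oracle

# Seed the loop: one positive (the user's initial query) plus a few random negatives.
labeled = {int(np.flatnonzero(true_relevant)[0]): 1}
for i in rng.choice(np.flatnonzero(~true_relevant), size=5, replace=False):
    labeled[int(i)] = 0

clf = LogisticRegression(max_iter=1000)
for _ in range(10):                            # interaction rounds
    idx = np.array(list(labeled))
    clf.fit(features[idx], np.array([labeled[i] for i in idx]))
    probs = clf.predict_proba(features)[:, 1]
    # Active Selection (uncertainty sampling): ask about the most ambiguous unlabeled image.
    unlabeled = np.setdiff1d(np.arange(n), idx)
    query = unlabeled[np.argmin(np.abs(probs[unlabeled] - 0.5))]
    labeled[int(query)] = int(true_relevant[query])   # simulated relevance feedback

ranking = np.argsort(-probs)                   # final retrieval ranking by relevance score
precision_at_20 = true_relevant[ranking[:20]].mean()
```

In the paper's setting, `features` would instead come from a pre-trained ViT (global CLS tokens or localized patch/instance descriptors, one of the design choices the work compares), and the oracle answers would come from real user annotations.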