🤖 AI Summary
This work addresses vision-based referring instance detection (InsDet) in the open-world setting: precisely localizing semantic-matching instances of a given reference object in previously unseen test images. Existing methods neglect the inherent distributional shift and generalize poorly to novel categories and scenes. To bridge this gap, the authors propose IDOW, the first framework that explicitly formulates InsDet as an open-world problem. IDOW fine-tunes foundation models (FMs) on open-world data, combining metric learning with novel data augmentation strategies, namely distractor sampling and novel-view synthesis, to jointly optimize proposal detection and instance matching in a two-stage pipeline. Evaluated on two recently released benchmarks, IDOW substantially outperforms state-of-the-art methods in both conventional and novel instance detection settings, with gains of more than 10 AP on average.
📝 Abstract
Instance detection (InsDet) aims to localize specific object instances within novel scene imagery based on given visual references. Technically, it requires proposal detection to identify all possible object instances, followed by instance-level matching to pinpoint the ones of interest. Its open-world nature supports wide-ranging applications from robotics to AR/VR, but also presents significant challenges: methods must generalize to unknown testing data distributions because (1) the testing scene imagery is unseen during training, and (2) there are domain gaps between visual references and detected proposals. Existing methods attempt to tackle these challenges by synthesizing diverse training examples or utilizing off-the-shelf foundation models (FMs). However, they only partially capitalize on the available open-world information. In this paper, we approach InsDet from an Open-World perspective, introducing our method IDOW. We find that, while pretrained FMs yield high recall in instance detection, they are not specifically optimized for instance-level feature matching. To address this, we adapt pretrained FMs for improved instance-level matching using open-world data. Our approach incorporates metric learning along with novel data augmentations, which sample distractors as negative examples and synthesize novel-view instances to enrich the visual references. Extensive experiments demonstrate that our method significantly outperforms prior works, achieving >10 AP over previous results on two recently released challenging benchmark datasets in both conventional and novel instance detection settings.
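To make the two-stage pipeline concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of its core ingredients: cosine-similarity matching between reference features and proposal features, and an InfoNCE-style metric-learning loss that pulls a matching proposal toward its reference while pushing sampled distractors away. All function names and the feature dimensionality are illustrative assumptions; in IDOW the features would come from a fine-tuned foundation model.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize feature vectors to unit length (last axis)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def match_proposals(ref_feats, prop_feats, threshold=0.5):
    """Second stage: match each reference to its most similar proposal.

    ref_feats:  (R, D) reference-instance features
    prop_feats: (P, D) detected-proposal features
    Returns a list of (ref_idx, prop_idx, cosine_sim) above `threshold`.
    """
    sims = l2_normalize(ref_feats) @ l2_normalize(prop_feats).T  # (R, P)
    best = sims.argmax(axis=1)
    return [(r, int(p), float(sims[r, p]))
            for r, p in enumerate(best) if sims[r, p] >= threshold]

def contrastive_loss(anchor, positive, distractors, tau=0.07):
    """InfoNCE-style loss over one anchor: the matching instance is the
    positive; sampled distractors act as negatives (illustrative only)."""
    a = l2_normalize(anchor)
    pos_logit = float(np.dot(a, l2_normalize(positive))) / tau
    neg_logits = (l2_normalize(distractors) @ a) / tau
    logits = np.concatenate([[pos_logit], neg_logits])
    # -log softmax probability of the positive
    return -pos_logit + np.log(np.exp(logits - logits.max()).sum()) + logits.max()
```

A well-aligned positive drives the loss toward zero, while a positive that is no more similar than the distractors yields a loss near log(1 + #distractors); fine-tuning the feature extractor to minimize this loss is what sharpens instance-level matching beyond what an off-the-shelf FM provides.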