🤖 AI Summary
This work addresses the issue of excessive fragmentation in bounding boxes generated by foundation models for few-shot object detection, which leads to numerous localized false positives. The authors propose a training-free detection framework that integrates generic region proposals, SAM2 mask extraction, and DINOv2 features, and introduces a novel confidence reweighting mechanism based on directed graph diffusion. By modeling object regions as a graph and propagating information through diffusion, the method effectively distinguishes complete objects from fragmented parts. Evaluated on the Pascal-5^i, COCO-20^i, and CD-FSOD benchmarks, the approach significantly outperforms existing methods, achieving 31.6 AP under the 10-shot setting on CD-FSOD, an improvement of 10.2 AP over the previous best training-free method.
📄 Abstract
In this paper, we present FSOD-VFM: Few-Shot Object Detectors with Vision Foundation Models, a framework that leverages vision foundation models to tackle the challenge of few-shot object detection. FSOD-VFM integrates three key components: a universal proposal network (UPN) for category-agnostic bounding box generation, SAM2 for accurate mask extraction, and DINOv2 features for efficient adaptation to new object categories. Despite the strong generalization capabilities of foundation models, the bounding boxes generated by UPN often suffer from over-fragmentation, covering only partial object regions and yielding numerous small, false-positive proposals rather than accurate, complete object detections. To address this issue, we introduce a novel graph-based confidence reweighting method. In our approach, predicted bounding boxes are modeled as nodes in a directed graph, and graph diffusion operations propagate confidence scores across the graph. This reweighting refines proposal scores, assigning higher confidence to whole objects and lower confidence to local, fragmented parts. This strategy improves detection granularity and effectively reduces false-positive bounding box proposals. Through extensive experiments on the Pascal-5$^i$, COCO-20$^i$, and CD-FSOD datasets, we demonstrate that our method substantially outperforms existing approaches, achieving superior performance without requiring additional training. Notably, on the challenging CD-FSOD benchmark, which spans multiple datasets and domains, FSOD-VFM achieves 31.6 AP in the 10-shot setting, substantially outperforming previous training-free methods that reach only 21.4 AP. Code is available at: https://intellindust-ai-lab.github.io/projects/FSOD-VFM.
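To make the part-to-whole intuition concrete, here is a minimal sketch of directed-graph confidence diffusion over box proposals. This is an illustrative reconstruction, not the authors' implementation: the containment threshold `tau`, diffusion weight `alpha`, number of `steps`, and the specific edge rule (a fragment points to any box that mostly contains it) are all assumptions. Score mass flows from fragment nodes to their containing boxes, so whole-object proposals gain confidence while fragments lose it.

```python
import numpy as np

def containment(box_a, box_b):
    """Fraction of box_a's area covered by box_b (boxes as [x1, y1, x2, y2])."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a > 0 else 0.0

def diffuse_confidence(boxes, scores, tau=0.8, alpha=0.5, steps=3):
    """Reweight proposal scores by diffusing them along a directed
    part-to-whole graph: edge i -> j when box i lies mostly inside box j.
    (Hypothetical parameters; the paper's exact scheme may differ.)"""
    n = len(boxes)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and containment(boxes[i], boxes[j]) >= tau:
                A[i, j] = 1.0
    # Row-normalize so each fragment splits its score among its containers.
    row_sums = A.sum(axis=1, keepdims=True)
    P = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    s = np.asarray(scores, dtype=float)
    for _ in range(steps):
        incoming = P.T @ s            # confidence flowing into containing boxes
        outgoing = P.sum(axis=1) * s  # confidence a fragment sends away
        s = s - alpha * outgoing + alpha * incoming
    return s
```

Each step conserves total score while shifting mass from fragments to the boxes that contain them; in practice the reweighted scores would be renormalized or clipped before non-maximum suppression.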