🤖 AI Summary
Existing open-vocabulary object detection methods rely on labor-intensive fine-grained annotations and computationally expensive cross-modal alignment, limiting their efficiency and scalability. This work proposes HDINO, a two-stage training framework built upon the DINO architecture. In the first stage, it treats noisy samples as additional positives to establish a One-to-Many visual-text semantic alignment mechanism (O2M) and introduces a Difficulty Weighted Classification Loss (DWCL) that mines hard examples. The second stage enhances sensitivity to linguistic semantics through a lightweight feature fusion module. Notably, HDINO requires no human-curated data or grounding annotations: its Swin-T variant, HDINO-T, achieves 49.2 mAP on COCO using only 2.2M publicly available images, outperforming Grounding DINO-T and T-Rex2. After fine-tuning on COCO, HDINO-T and the larger variant, HDINO-L, attain 56.4 and 59.2 mAP, respectively.
📝 Abstract
Despite the growing interest in open-vocabulary object detection in recent years, most existing methods rely heavily on manually curated fine-grained training datasets as well as resource-intensive layer-wise cross-modal feature extraction. In this paper, we propose HDINO, a concise yet efficient open-vocabulary object detector that eliminates the dependence on these components. Specifically, we propose a two-stage training strategy built upon the transformer-based DINO model. In the first stage, noisy samples are treated as additional positive object instances to construct a One-to-Many Semantic Alignment Mechanism (O2M) between the visual and textual modalities, thereby facilitating semantic alignment. A Difficulty Weighted Classification Loss (DWCL) is also designed based on initial detection difficulty to mine hard examples and further improve model performance. In the second stage, a lightweight feature fusion module is applied to the aligned representations to enhance sensitivity to linguistic semantics. Under the Swin Transformer-T setting, HDINO-T achieves **49.2** mAP on COCO using 2.2M training images from two publicly available detection datasets, without any manual data curation or grounding data, surpassing Grounding DINO-T and T-Rex2 (trained on 5.4M and 6.5M images, respectively) by **0.8** mAP and **2.8** mAP. After fine-tuning on COCO, HDINO-T and HDINO-L further achieve **56.4** mAP and **59.2** mAP, highlighting the effectiveness and scalability of our approach. Code and models are available at https://github.com/HaoZ416/HDINO.
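The abstract states only that DWCL weights the classification loss by initial detection difficulty to mine hard examples; the exact formula is not given here. A minimal sketch of one plausible, focal-style instantiation (the `gamma` exponent and the difficulty estimate from the initial predicted score are assumptions, not the paper's definition):

```python
import numpy as np

def difficulty_weighted_cls_loss(scores, labels, gamma=2.0, eps=1e-7):
    """Hypothetical sketch of a difficulty-weighted classification loss.

    scores: predicted probabilities for the target class, shape (N,)
    labels: 1 for positive instances, 0 for negatives, shape (N,)

    Difficulty is estimated from the initial prediction: samples whose
    predicted probability for the correct label is low are treated as
    hard and up-weighted, in the spirit of the focal loss.
    """
    scores = np.clip(scores, eps, 1.0 - eps)
    # probability assigned to the correct label
    p_correct = np.where(labels == 1, scores, 1.0 - scores)
    # difficulty weight: low-confidence (hard) samples get larger weights
    weight = (1.0 - p_correct) ** gamma
    # standard binary cross-entropy, weighted per sample
    bce = -np.log(p_correct)
    return float(np.mean(weight * bce))
```

Under this weighting, a confidently classified positive contributes almost nothing, while a low-confidence positive dominates the loss, which is the hard-example-mining behavior the abstract describes.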