🤖 AI Summary
Open-vocabulary object detection (OVD) suffers from weak generalization to unseen categories, primarily due to coarse-grained alignment between detector features and the CLIP embedding space, hindering effective semantic knowledge transfer. To address this, we propose a three-level semantic distillation framework: (1) instance-level modeling of single-object visual relationships; (2) category-level, text-guided novel-class-aware classification; and (3) image-level multi-object contextual contrastive distillation—systematically transferring CLIP’s instance-, category-, and image-level generalizable semantics. This is the first framework enabling cross-granularity collaborative knowledge distillation without additional text annotations. On OV-COCO with a ResNet-50 backbone, our method achieves 46.4% AP on novel classes, significantly outperforming state-of-the-art methods. Ablation studies quantitatively validate the contribution of each component.
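The summary does not include implementation details, but the category-level, text-guided classification it describes is typically realized as a cosine-similarity lookup between detector region features and CLIP text embeddings of class names. The sketch below illustrates that idea only; the function name, shapes, and temperature are assumptions, not the paper's code.

```python
import numpy as np

def open_vocab_classify(region_feats, text_embeds, temperature=0.01):
    """Score region features against class-name text embeddings via
    cosine similarity. Because the classifier is just a similarity
    lookup, swapping in embeddings of unseen class names extends the
    label space without retraining (illustrative sketch; names and
    temperature are assumptions, not the paper's implementation)."""
    # L2-normalize both sides so the dot product is cosine similarity.
    r = region_feats / np.linalg.norm(region_feats, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    logits = r @ t.T / temperature                 # (N_regions, N_classes)
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)
```

Adding a novel category at inference time then amounts to appending one more row to `text_embeds`, which is what makes the detector "novel-class-aware" without extra text annotations.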
📝 Abstract
Open-vocabulary object detection (OVD) aims to detect objects beyond the training annotations; detectors are usually aligned to a pre-trained vision-language model, e.g., CLIP, to inherit its generalizable recognition ability and thereby recognize novel objects. However, previous works align the feature space with CLIP directly and fail to transfer the semantic knowledge effectively. In this work, we propose a hierarchical semantic distillation framework, named HD-OVD, that constructs a comprehensive distillation process exploiting generalizable knowledge from the CLIP model in three aspects. In the first hierarchy of HD-OVD, the detector learns fine-grained instance-wise semantics from the CLIP image encoder by modeling relations among single objects in the visual space. In the second hierarchy, we introduce text-space novel-class-aware classification, which helps the detector assimilate the highly generalizable class-wise semantics from the CLIP text encoder. Lastly, abundant image-wise semantics covering multiple objects and their contexts are distilled through an image-wise contrastive distillation. Benefiting from this elaborated semantic distillation across three hierarchies, our HD-OVD inherits generalizable recognition ability from CLIP at the instance, class, and image levels. As a result, we boost the novel AP on the OV-COCO dataset to 46.4% with a ResNet-50 backbone, outperforming other methods by a clear margin. We also conduct extensive ablation studies to analyze how each component contributes.
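The abstract's third hierarchy, image-wise contrastive distillation, is commonly implemented as a CLIP-style symmetric InfoNCE objective between the detector's image embeddings and the CLIP image encoder's embeddings. The following is a minimal sketch under that assumption; the function name, batch-as-negatives setup, and temperature are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce_distill(det_embeds, clip_embeds, temperature=0.1):
    """Symmetric InfoNCE-style contrastive distillation: for each image
    in the batch, the detector's embedding is pulled toward the CLIP
    embedding of the same image (diagonal of the similarity matrix)
    and pushed away from the other images' embeddings (off-diagonal).
    A sketch under stated assumptions, not the paper's code."""
    d = det_embeds / np.linalg.norm(det_embeds, axis=-1, keepdims=True)
    c = clip_embeds / np.linalg.norm(clip_embeds, axis=-1, keepdims=True)
    logits = d @ c.T / temperature  # (B, B); diagonal entries are positives

    def xent_diag(l):
        # Cross-entropy with the matching (diagonal) pair as the target.
        l = l - l.max(axis=-1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=-1, keepdims=True))
        return -np.mean(np.diagonal(logp))

    # Average both directions: detector-to-CLIP and CLIP-to-detector.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

Because whole-image embeddings encode multiple objects and their context jointly, aligning them in this contrastive fashion transfers scene-level semantics that per-box distillation alone would miss.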