🤖 AI Summary
Weakly supervised open-vocabulary object detection (WS-OVOD) faces two key challenges: static, monolithic semantic prototypes, and semantic misalignment between visual and textual representations. To address these, we propose a State- and Scene-Enhanced Prototype Learning (SSPL) framework. First, we introduce State-Enhanced Semantic Prototypes (SESP), which leverage large language models (LLMs) to generate state-aware object descriptions (e.g., of pose or deformation) that capture intra-class visual diversity. Second, we design Scene-Augmented Pseudo-Prototypes (SAPP), which employ context-aware textual expansion and a soft alignment loss to bridge the semantic gap between pseudo-bounding boxes and text embeddings. SSPL jointly optimizes pseudo-box generation, contrastive learning, and vision-language alignment. Extensive experiments demonstrate significant gains across multiple benchmarks, especially in generalization and detection accuracy on rare categories and complex scenes.
📝 Abstract
Open-Vocabulary Object Detection (OVOD) aims to generalize object recognition to novel categories, while Weakly Supervised OVOD (WS-OVOD) extends this setting by combining box-level annotations with image-level labels. Despite recent progress, two critical challenges persist. First, existing semantic prototypes, even when enriched by LLMs, are static and limited, failing to capture the rich intra-class visual variations induced by different object states (e.g., a cat's pose). Second, standard pseudo-box generation introduces a semantic mismatch between visual region proposals (which contain context) and object-centric text embeddings. To tackle these issues, we introduce two complementary prototype enhancement strategies. To model intra-class variations in appearance and state, we propose State-Enhanced Semantic Prototypes (SESP), which generate state-aware textual descriptions (e.g., "a sleeping cat") covering diverse object appearances and thus yield more discriminative prototypes. Building on this, we further introduce Scene-Augmented Pseudo-Prototypes (SAPP) to address the semantic mismatch. SAPP incorporates contextual semantics (e.g., "a cat lying on a sofa") and uses a soft alignment mechanism to promote contextually consistent visual-textual representations. By integrating SESP and SAPP, our method enhances both the richness of semantic prototypes and the quality of visual-textual alignment, achieving notable improvements.
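The two ideas above can be made concrete with a minimal numpy sketch. It is an illustration under our own assumptions, not the paper's implementation: we assume a state-enhanced prototype is an average of L2-normalized embeddings of LLM-generated state descriptions, and that the soft alignment loss is a cross-entropy between region-to-class-prototype logits and soft targets derived from similarity to scene-augmented prototypes (the names `state_enhanced_prototype` and `soft_alignment_loss` are hypothetical).

```python
import numpy as np

def l2norm(x, axis=-1):
    # Normalize vectors to unit length for cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def state_enhanced_prototype(state_embeddings):
    # Fuse embeddings of state-aware descriptions (e.g., "a sleeping cat",
    # "a running cat") into one class prototype by normalized averaging.
    return l2norm(l2norm(state_embeddings).mean(axis=0))

def soft_alignment_loss(region_feats, class_protos, scene_protos, tau=0.07):
    # region_feats: (N, D) pseudo-box visual features (contain context).
    # class_protos: (C, D) object-centric class text prototypes.
    # scene_protos: (C, D) scene-augmented prototypes (e.g., "cat lying
    # on sofa"), one per class, used to build soft (non one-hot) targets.
    r = l2norm(region_feats)
    logits = r @ l2norm(class_protos).T / tau          # (N, C)
    scene_sims = r @ l2norm(scene_protos).T / tau      # (N, C)
    # Soft targets: softmax over scene-augmented similarities.
    t = scene_sims - scene_sims.max(axis=1, keepdims=True)
    targets = np.exp(t) / np.exp(t).sum(axis=1, keepdims=True)
    # Log-softmax over class-prototype logits (stable form).
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Soft cross-entropy averaged over regions.
    return float(-(targets * log_probs).sum(axis=1).mean())
```

Because the targets are similarity-weighted rather than one-hot, a region whose features mix object and background context is not forced onto a single object-centric embedding, which is the intended effect of the soft alignment mechanism.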