State and Scene Enhanced Prototypes for Weakly Supervised Open-Vocabulary Object Detection

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly supervised open-vocabulary object detection (WS-OVOD) faces two key challenges: static, monolithic semantic prototypes, and semantic misalignment between visual and textual representations. To address these, we propose a State- and Scene-Enhanced Prototype Learning (SSPL) framework. First, we introduce State-Enhanced Semantic Prototypes (SESP), which leverage large language models (LLMs) to generate state-aware object descriptions (e.g., pose, deformation) that capture intra-class visual diversity. Second, we design Scene-Augmented Pseudo-Prototypes (SAPP), which employ context-aware textual expansion and a soft alignment loss to bridge the semantic gap between pseudo-bounding boxes and text embeddings. SSPL jointly optimizes pseudo-box generation, contrastive learning, and vision-language alignment. Extensive experiments show consistent gains across multiple benchmarks, particularly in generalization and detection accuracy on rare categories and complex scenes.

📝 Abstract
Open-Vocabulary Object Detection (OVOD) aims to generalize object recognition to novel categories, while Weakly Supervised OVOD (WS-OVOD) extends this by combining box-level annotations with image-level labels. Despite recent progress, two critical challenges persist in this setting. First, existing semantic prototypes, even when enriched by LLMs, are static and limited, failing to capture the rich intra-class visual variations induced by different object states (e.g., a cat's pose). Second, standard pseudo-box generation introduces a semantic mismatch between visual region proposals (which contain context) and object-centric text embeddings. To tackle these issues, we introduce two complementary prototype enhancement strategies. First, we propose State-Enhanced Semantic Prototypes (SESP), which generate state-aware textual descriptions (e.g., "a sleeping cat") that capture diverse object appearances, yielding more discriminative prototypes. Building on this, we further introduce Scene-Augmented Pseudo-Prototypes (SAPP) to address the semantic mismatch. SAPP incorporates contextual semantics (e.g., "cat lying on sofa") and utilizes a soft alignment mechanism to promote contextually consistent visual-textual representations. By integrating SESP and SAPP, our method enhances both the richness of semantic prototypes and the quality of visual-textual alignment, achieving notable improvements.
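The SESP idea described in the abstract — aggregating several state-aware descriptions (e.g., "a sleeping cat", "a running cat") into one class prototype — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (CLIP-style L2-normalized text features, mean-pooling as the aggregation); the embedding dimension and the aggregation rule are illustrative, not taken from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project vectors onto the unit sphere, as is standard for CLIP-style features.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def state_enhanced_prototype(state_embeddings):
    """Aggregate embeddings of several state-aware descriptions of one class
    into a single prototype by mean-pooling normalized embeddings, then
    re-normalizing. A sketch of the SESP aggregation, not the paper's exact rule."""
    e = l2_normalize(np.asarray(state_embeddings, dtype=float))
    return l2_normalize(e.mean(axis=0))

# Toy example: 3 hypothetical 4-d text embeddings standing in for
# "a sleeping cat", "a running cat", "a cat curled up".
rng = np.random.default_rng(0)
cat_states = rng.normal(size=(3, 4))
proto = state_enhanced_prototype(cat_states)
print(proto.shape)                             # (4,)
print(bool(np.isclose(np.linalg.norm(proto), 1.0)))  # True: unit-norm prototype
```

A region proposal can then be classified by cosine similarity against one such prototype per class, rather than against a single static template embedding.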
Problem

Research questions and friction points this paper is trying to address.

Capturing intra-class visual variations across object states
Resolving the semantic mismatch between context-bearing visual regions and object-centric text embeddings
Enriching semantic prototypes while keeping visual-textual alignment consistent
Innovation

Methods, ideas, or system contributions that make the work stand out.

State-Enhanced Semantic Prototypes capture intra-class visual variations
Scene-Augmented Pseudo Prototypes incorporate contextual semantics
Soft alignment mechanism promotes visual-textual representation consistency
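The soft alignment mechanism listed above can be sketched as a cross-entropy between a region-to-prototype similarity distribution and a softened (non-one-hot) target distribution. The sketch below is a NumPy illustration under assumptions not stated in the abstract: the temperatures `tau` and `soft_tau` are hypothetical, and the soft targets here are derived from the similarities themselves rather than from whatever targets SSPL actually uses.

```python
import numpy as np

def soft_alignment_loss(region_feat, text_protos, tau=0.07, soft_tau=0.2):
    """Cross-entropy between the region's softmax similarity distribution
    over text prototypes (temperature tau) and a softened target
    distribution (temperature soft_tau), instead of a hard one-hot label.
    A sketch of the soft alignment idea, not the paper's exact loss."""
    r = region_feat / np.linalg.norm(region_feat)
    t = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    sims = t @ r  # cosine similarity of the region to each prototype

    # Numerically stable log-softmax over the sharpened similarities.
    logits = sims / tau
    m = logits.max()
    log_p = logits - (m + np.log(np.exp(logits - m).sum()))

    # Soft targets: same similarities at a higher temperature, so context-
    # consistent prototypes share probability mass instead of one-hot labels.
    q = np.exp(sims / soft_tau)
    q /= q.sum()
    return float(-(q * log_p).sum())

# Toy example with hypothetical 4-d features and 5 class prototypes.
rng = np.random.default_rng(0)
loss = soft_alignment_loss(rng.normal(size=4), rng.normal(size=(5, 4)))
print(loss > 0.0)  # True: cross-entropy against soft targets is positive
```

The design choice to soften the targets reflects the mismatch the paper highlights: a pseudo-box contains scene context, so forcing a hard match to a single object-centric embedding penalizes contextually reasonable alignments.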
Jiaying Zhou
National Institute of Health Data Science, Peking University, Beijing, China
Qingchao Chen
Assistant Professor, Peking University
Transfer Learning · Medical Data Analysis · Multi-modal Human Sensing · Radar Systems