SDDF: Specificity-Driven Dynamic Focusing for Open-Vocabulary Camouflaged Object Detection

📅 2026-03-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of open-vocabulary camouflaged object detection, where targets exhibit high visual similarity to their backgrounds, making it difficult to accurately identify objects of unseen categories. To tackle this problem, the authors introduce OVCOD-D, the first benchmark dataset annotated with fine-grained textual descriptions. They propose a specificity-guided dynamic focusing method that leverages multimodal large language models to generate discriminative sub-descriptions. By integrating region-level weak alignment, a dynamic focusing mechanism, and a principal component contrastive fusion strategy for sub-descriptions, the approach effectively suppresses textual noise and enhances the model's ability to discriminate camouflaged objects against highly similar backgrounds. Evaluated on the open-set OVCOD-D benchmark, the method achieves 56.4 AP, significantly outperforming existing approaches.
πŸ“ Abstract
Open-vocabulary object detection (OVOD) aims to detect known and unknown objects in the open world by leveraging text prompts. Benefiting from the emergence of large-scale vision-language pre-trained models, OVOD has demonstrated strong zero-shot generalization capabilities. However, when dealing with camouflaged objects, the detector often fails to distinguish and localize objects because the visual features of the objects and the background are highly similar. To bridge this gap, we construct a benchmark named OVCOD-D by augmenting carefully selected camouflaged object images with fine-grained textual descriptions. Due to the limited scale of available camouflaged object datasets, we adopt detectors pre-trained on large-scale object detection datasets as our baseline methods, as they possess stronger zero-shot generalization ability. The specificity-aware sub-descriptions generated by multimodal large language models still contain confusing and overly decorative modifiers. To mitigate such interference, we design a sub-description principal component contrastive fusion strategy that reduces noisy textual components. Furthermore, to address the challenge that the visual features of camouflaged objects are highly similar to those of their surrounding environment, we propose a specificity-guided regional weak alignment and dynamic focusing method, which strengthens the detector's ability to discriminate camouflaged objects from the background. Under the open-set evaluation setting, the proposed method achieves an AP of 56.4 on the OVCOD-D benchmark.
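The abstract does not spell out how the sub-description principal component contrastive fusion works. One plausible reading, sketched below under stated assumptions: each sub-description is first encoded into a fixed-dimension text embedding (e.g., by a CLIP-style text encoder), the shared discriminative semantics are extracted as the first principal component of the embedding matrix, and the residual components, which would carry the confusing or decorative modifiers, are suppressed. All names below are illustrative, not from the paper, and the contrastive training objective is not shown.

```python
import numpy as np

def fuse_sub_descriptions(embeddings: np.ndarray) -> np.ndarray:
    """Fuse N sub-description embeddings (shape N x D) into one D-dim vector.

    Assumption: the semantics shared across sub-descriptions dominate the
    first principal component, while noisy modifiers land mostly in the
    residual components, which this fusion discards.
    """
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # SVD of the centered matrix; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    principal = vt[0]
    # Keep only the projection of each sub-description onto the principal
    # axis, average those projections, and add back the mean embedding.
    weights = centered @ principal
    fused = mean + weights.mean() * principal
    # L2-normalize, as is customary for CLIP-style text embeddings.
    return fused / np.linalg.norm(fused)

rng = np.random.default_rng(0)
subs = rng.normal(size=(5, 512))  # 5 sub-descriptions, 512-dim embeddings
fused = fuse_sub_descriptions(subs)
print(fused.shape)  # (512,)
```

The fused vector could then serve as the denoised text prompt that the region-level weak alignment compares against visual region features; the paper's actual formulation may differ.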
Problem

Research questions and friction points this paper is trying to address.

open-vocabulary object detection
camouflaged object detection
visual similarity
zero-shot generalization
text prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

open-vocabulary object detection
camouflaged object detection
specificity-driven dynamic focusing
textual description denoising
vision-language pre-training
Jiaming Liang, Shenzhen University
Yifeng Zhan, Shenzhen University
Chunlin Liu, Shenzhen University
Weihua Zheng, A*STAR (Multilingual LLM, Cultural LLM)
Bingye Peng, Shenzhen University
Qiwei Liang, Shenzhen University
Boyang Cai, Shenzhen University
Xiaochun Mai, Shenzhen University
Qiang Nie, Assistant Professor, Hong Kong University of Science and Technology, Guangzhou, China (robotics, human-robot interaction, artificial intelligence, computer vision)