A Holistically Point-guided Text Framework for Weakly-Supervised Camouflaged Object Detection

πŸ“… 2025-01-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses weakly-supervised camouflaged object detection (COD), tackling the challenges posed by sparse point annotations, the high visual similarity between objects and their backgrounds, and severe occlusion. The authors propose a point-text collaborative three-stage framework (Segment → Choose → Train). The method introduces Point-guided Candidate Generation (PCG) and a CLIP-driven Qualified Candidate Discriminator (QCD) to automatically construct and refine high-quality pseudo-masks under sparse point supervision. Two new benchmarks are established: P2C-COD (point-supervised) and T-COD (text-supervised), the first of their kind for COD. The framework integrates CLIP-based cross-modal alignment, self-supervised ViT feature extraction, point-guided mask refinement, and iterative pseudo-label training. Extensive experiments show significant improvements over existing weakly-supervised COD methods across four mainstream benchmarks, with several metrics even surpassing state-of-the-art fully-supervised approaches.

πŸ“ Abstract
Weakly-Supervised Camouflaged Object Detection (WSCOD) has gained popularity for its promise of training models with weak labels to segment objects that visually blend into their surroundings. Recently, some methods using sparse scribble annotations have shown promising results in WSCOD, while point-text supervision remains underexplored. Hence, this paper introduces a novel holistically point-guided text framework for WSCOD, decomposed into three phases: segment, choose, train. Specifically, we propose Point-guided Candidate Generation (PCG), where the point's foreground serves as a correction for the text path, explicitly correcting and rejuvenating lost detected objects during the mask generation process (SEGMENT). We also introduce a Qualified Candidate Discriminator (QCD) to choose the optimal mask for a given text prompt using CLIP (CHOOSE), and employ the chosen pseudo mask for training with a self-supervised Vision Transformer (TRAIN). Additionally, we develop a new point-supervised dataset (P2C-COD) and a text-supervised dataset (T-COD). Comprehensive experiments on four benchmark datasets demonstrate that our method outperforms state-of-the-art methods by a large margin, and also outperforms some existing fully-supervised camouflaged object detection methods.
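The CHOOSE step described above can be sketched as a similarity-based selection: each candidate pseudo-mask yields an image-region embedding, and the candidate whose embedding best aligns with the CLIP text-prompt embedding is kept. The sketch below is a minimal illustration of that idea with plain vectors standing in for real CLIP encoder outputs; the function names are hypothetical and not from the paper.

```python
# Minimal sketch of a CLIP-style candidate discriminator (QCD-like CHOOSE step).
# Assumption: embeddings of masked image regions and of the text prompt already
# live in a shared space; real CLIP encoders are replaced by plain numpy vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (epsilon avoids /0)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def choose_best_mask(candidate_embeddings: list, text_embedding: np.ndarray) -> int:
    """Return the index of the candidate pseudo-mask whose region embedding
    aligns best with the text-prompt embedding."""
    scores = [cosine_similarity(e, text_embedding) for e in candidate_embeddings]
    return int(np.argmax(scores))
```

In the actual framework, the scoring would use CLIP image/text encoders over masked crops; this sketch only shows the selection logic, which is independent of the encoder choice.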
Problem

Research questions and friction points this paper is trying to address.

Weakly Supervised Learning
Object Detection
Visual Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weakly Supervised Camouflaged Object Detection
Point-and-Text Guidance
Self-supervised Vision Transformer
πŸ”Ž Similar Papers
No similar papers found.
Tsui Qin Mok
Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, 200433, China.
Shuyong Gao
Fudan University
Human Visual Attention · Generative Model · Weakly Supervised Learning
Haozhe Xing
Unknown affiliation
Miaoyang He
Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, 200433, China.
Yan Wang
Academy for Engineering & Technology, and the Yiwu Research Institute of Fudan University, Chengbei Road, Yiwu City, Zhejiang, 322000, China.
Wenqiang Zhang
Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, 200433, China.; Academy for Engineering & Technology, and the Yiwu Research Institute of Fudan University, Chengbei Road, Yiwu City, Zhejiang, 322000, China.