AI Summary
This work addresses weakly supervised camouflaged object detection (COD), tackling challenges posed by sparse point annotations, high similarity between objects and backgrounds, and severe occlusion. We propose a novel point-text collaborative three-stage framework (Segmentation → Screening → Training). Our method introduces Point-guided Candidate Generation (PCG) and a CLIP-driven Qualified Candidate Discriminator (QCD) to automatically construct and refine high-quality pseudo-masks under sparse point supervision. We establish two new benchmarks, P2C-COD (point-supervised) and T-COD (text-supervised), the first of their kind for COD. The framework integrates CLIP-based cross-modal alignment, self-supervised ViT feature extraction, point-guided mask refinement, and iterative pseudo-label training. Extensive experiments demonstrate significant improvements over existing weakly supervised COD methods across four mainstream benchmarks, with several metrics even surpassing state-of-the-art fully supervised approaches.
Abstract
Weakly-Supervised Camouflaged Object Detection (WSCOD) has gained popularity for its promise of training models with weak labels to segment objects that visually blend into their surroundings. Recently, methods using sparse scribble annotations have shown promising results in WSCOD, while point-text supervision remains underexplored. Hence, this paper introduces a novel holistically point-guided text framework for WSCOD that decomposes the task into three phases: segment, choose, train. Specifically, we propose Point-guided Candidate Generation (PCG), in which the point foreground serves as a correction signal for the text path, explicitly recovering objects lost during mask generation (SEGMENT). We also introduce a Qualified Candidate Discriminator (QCD) that uses CLIP to choose the optimal mask for a given text prompt (CHOOSE), and we employ the chosen pseudo-mask to train a model built on a self-supervised Vision Transformer (TRAIN). Additionally, we develop a new point-supervised dataset (P2C-COD) and a text-supervised dataset (T-COD). Comprehensive experiments on four benchmark datasets demonstrate that our method outperforms state-of-the-art weakly-supervised methods by a large margin and even surpasses some existing fully-supervised camouflaged object detection methods.
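The CHOOSE step described above can be sketched as a similarity ranking: CLIP embeds the text prompt and the image under each candidate pseudo-mask, and the candidate whose embedding best matches the prompt wins. The sketch below is a minimal, dependency-free illustration of that ranking logic; the plain vectors stand in for real CLIP embeddings, and the function names are hypothetical, not the paper's actual implementation.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def choose_best_mask(text_emb, candidate_embs):
    """Rank candidate pseudo-masks against a text prompt.

    In the actual QCD step, `text_emb` would be the CLIP embedding of the
    class prompt and each entry of `candidate_embs` the CLIP image embedding
    of the input masked by one candidate pseudo-mask; here they are plain
    vectors so the selection logic stays self-contained.
    """
    scores = [cosine_sim(text_emb, e) for e in candidate_embs]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores

# Toy usage: the second candidate points (almost) the same way as the text
# embedding, so it should be selected.
best, scores = choose_best_mask([1.0, 0.0], [[0.0, 1.0], [1.0, 0.1]])
print(best)  # index of the winning candidate
```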