🤖 AI Summary
This work addresses the limitations of existing zero-shot camouflaged object segmentation methods, which often rely on multimodal large language models for object discovery but suffer from ambiguous localization that leads to false positives and missed detections. To overcome this, the authors propose DSS, a training-free progressive segmentation framework that refines results through a three-stage pipeline: feature-coherent object proposal generation, SAM-based segmentation refinement, and semantic-driven mask selection. By leveraging visual features to produce diverse object candidates and employing a multimodal large language model for mask evaluation, DSS achieves state-of-the-art performance across multiple benchmarks, with particularly notable improvements in segmentation accuracy for multi-instance scenarios.
📝 Abstract
Current zero-shot Camouflaged Object Segmentation (COS) methods typically employ a two-stage discover-then-segment pipeline: an MLLM first produces visual prompts, which are then passed to SAM for segmentation. However, relying solely on MLLMs for camouflaged object discovery often leads to inaccurate localization, false positives, and missed detections. To address these issues, we propose the **D**iscover-**S**egment-**S**elect (**DSS**) mechanism, a progressive framework designed to refine segmentation step by step. The proposed method contains a Feature-coherent Object Discovery (FOD) module that leverages visual features to generate diverse object proposals, a segmentation module that refines these proposals through SAM, and a Semantic-driven Mask Selection (SMS) module that employs an MLLM to evaluate the candidates and select the optimal segmentation mask. Without requiring any training or supervision, DSS achieves state-of-the-art performance on multiple COS benchmarks, especially in multi-instance scenes.
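The discover-segment-select control flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function bodies are toy stand-ins (fixed boxes instead of FOD's feature clustering, box area instead of a real SAM mask, and a size heuristic instead of MLLM scoring), and all names here are hypothetical.

```python
# Hypothetical sketch of the DSS (Discover-Segment-Select) control flow.
# All three stages below are toy stand-ins, not the actual FOD / SAM / SMS models.

def discover_proposals(image):
    # Stage 1 (FOD stand-in): in the paper, visual features yield diverse
    # object proposals; here we return fixed toy boxes (x0, y0, x1, y1).
    return [(10, 10, 50, 50), (60, 20, 90, 80)]

def segment_with_sam(image, box):
    # Stage 2 (SAM stand-in): a real system would prompt SAM with the box
    # and get a pixel mask; we fake a "mask" record carrying its area.
    x0, y0, x1, y1 = box
    return {"box": box, "area": (x1 - x0) * (y1 - y0)}

def score_mask(image, mask):
    # Stage 3 (SMS stand-in): an MLLM would rate each candidate mask's
    # semantic plausibility; larger masks score higher purely for illustration.
    return mask["area"]

def dss(image):
    proposals = discover_proposals(image)                     # 1. Discover
    masks = [segment_with_sam(image, b) for b in proposals]   # 2. Segment
    return max(masks, key=lambda m: score_mask(image, m))     # 3. Select
```

The point of the structure is that segmentation quality is decided *after* SAM runs, by comparing multiple candidate masks, rather than trusting a single MLLM-provided prompt up front.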