🤖 AI Summary
Zero-shot video camouflaged object segmentation suffers from performance limitations due to appearance similarity between camouflaged objects and their backgrounds.
Method: This paper proposes an optical-flow-driven multimodal collaborative framework, the first zero-shot approach to surpass supervised methods on standard benchmarks. It overcomes the limitations of appearance-only modeling by incorporating motion cues from optical flow, and it synergistically integrates the semantic understanding capability of the vision-language model CLIP with the strong generalization ability of SAM 2 for mask generation in a multi-stage cascaded inference pipeline.
Contribution/Results: On MoCA-Mask, the method achieves a weighted F-measure ($F_\beta^w$) of 0.628, 113% higher than the best prior zero-shot method and 32% higher than the current state-of-the-art supervised method. On MoCA-Filter, it attains a success rate of 0.697. This work establishes a novel paradigm for zero-shot video camouflaged object segmentation.
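To make the cascade described in the Method summary concrete, here is a minimal Python sketch of one plausible reading of the pipeline: a motion cue narrows the search to moving regions, a vision-language model scores candidate regions against a text prompt, and a promptable segmenter produces the final mask. Every helper below is a dummy stand-in (frame differencing instead of optical flow, mean brightness instead of CLIP similarity, a box-shaped mask instead of SAM 2), and the function names and ordering are assumptions for illustration, not the authors' actual implementation or APIs.

```python
import numpy as np

def motion_prior(prev_frame: np.ndarray, frame: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Crude motion cue: frame differencing as a stand-in for real optical flow magnitude.
    Assumes HxWx3 uint8/float frames."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32)).mean(axis=-1)
    return diff / (diff.max() + 1e-8) > thresh  # binary "something moved here" mask

def rank_regions_with_vlm(frame: np.ndarray, boxes: list, prompt: str) -> list:
    """Placeholder for CLIP-style scoring: a real version would embed each crop and the
    text prompt and return cosine similarities. Here we return dummy brightness scores,
    so `prompt` is unused."""
    return [float(frame[y0:y1, x0:x1].mean()) for (x0, y0, x1, y1) in boxes]

def segment_with_promptable_model(frame: np.ndarray, box) -> np.ndarray:
    """Placeholder for SAM 2: given a box prompt, return a mask (here, just the box itself)."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def cascaded_pipeline(prev_frame: np.ndarray, frame: np.ndarray,
                      prompt: str = "a camouflaged animal") -> np.ndarray:
    """Stage 1: motion cue -> candidate box; Stage 2: VLM ranking; Stage 3: promptable mask."""
    motion = motion_prior(prev_frame, frame)
    ys, xs = np.nonzero(motion)
    if xs.size == 0:                      # no motion detected: return an empty mask
        return np.zeros(frame.shape[:2], dtype=bool)
    candidates = [(xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)]  # one box around all motion
    scores = rank_regions_with_vlm(frame, candidates, prompt)
    best_box = candidates[int(np.argmax(scores))]
    return segment_with_promptable_model(frame, best_box)
```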
📄 Abstract
Camouflaged object segmentation presents unique challenges compared to traditional segmentation tasks, primarily due to the high similarity in patterns and colors between camouflaged objects and their backgrounds. Effective solutions to this problem have significant implications in critical areas such as pest control, defect detection, and lesion segmentation in medical imaging. Prior research has predominantly emphasized supervised or unsupervised pre-training methods, leaving zero-shot approaches significantly underdeveloped. Existing zero-shot techniques commonly utilize the Segment Anything Model (SAM) in automatic mode or rely on vision-language models to generate cues for segmentation; however, their performance remains unsatisfactory, likely due to the similarity between the camouflaged object and the background. Optical flow, commonly utilized for detecting moving objects, has demonstrated effectiveness even with camouflaged entities. Our method integrates optical flow, a vision-language model, and SAM 2 into a sequential pipeline. Evaluated on the MoCA-Mask dataset, our approach achieves outstanding performance improvements, significantly outperforming existing zero-shot methods by raising the weighted F-measure ($F_\beta^w$) from 0.296 to 0.628. Remarkably, our approach also surpasses supervised methods, increasing the F-measure from 0.476 to 0.628. Additionally, evaluation on the MoCA-Filter dataset demonstrates an increase in the success rate from 0.628 to 0.697 when compared with FlowSAM, a supervised transfer method. A thorough ablation study further validates the individual contributions of each component. More details can be found at https://github.com/weathon/vcos.
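For context on the reported numbers, the weighted F-measure ($F_\beta^w$) used here follows the standard $F_\beta$ form, but computed from weighted precision $P^w$ and recall $R^w$ rather than their plain pixel-wise counterparts; this is the general definition of the metric, not anything specific to this paper's evaluation code:

$$F_\beta^w = \frac{(1 + \beta^2)\, P^w \cdot R^w}{\beta^2 \cdot P^w + R^w}$$

Under this measure, moving from 0.296 to 0.628 roughly doubles the score (about a 112-113% relative gain), consistent with the improvement quoted in the summary above.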