CamoSAM2: Motion-Appearance Induced Auto-Refining Prompts for Video Camouflaged Object Detection

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In video camouflaged object detection (VCOD), camouflaged objects exhibit high visual similarity to their backgrounds, rendering SAM2 unreliable for prompt generation. To address this, we propose a motion-appearance joint-driven framework for automatic prompt induction and iterative optimization. Our key contributions are: (1) the Motion-Appearance Prompt Inducer (MAPI), the first method to jointly leverage optical-flow-based motion modeling and appearance feature contrast for robust initial prompt generation; (2) the Adaptive Multi-Prompt Refinement (AMPR) strategy, comprising camouflage-aware assessment, key-frame selection, and multi-prompt construction; and (3) a lightweight SAM2-adapted architecture enabling temporally consistent, multi-frame prompt co-optimization. Evaluated on two standard benchmarks, our method achieves new state-of-the-art mIoU scores, improving by 8.0% and 10.1% respectively, while attaining the fastest inference speed among existing VCOD models.

📝 Abstract
The Segment Anything Model 2 (SAM2), a prompt-guided video foundation model, has performed remarkably in video object segmentation, drawing significant attention in the community. Because camouflaged objects are highly similar to their surroundings, making them difficult to distinguish even for the human eye, applying SAM2 to automated segmentation in real-world scenarios faces challenges in camouflage perception and reliable prompt generation. To address these issues, we propose CamoSAM2, a motion-appearance prompt inducer (MAPI) and refinement framework that automatically generates and refines prompts for SAM2, enabling high-quality automatic detection and segmentation in the VCOD task. First, we introduce a prompt inducer that jointly integrates motion and appearance cues to detect camouflaged objects, delivering more accurate initial predictions than existing methods. Second, we propose a video-based adaptive multi-prompt refinement (AMPR) strategy tailored for SAM2, aimed at mitigating prompt errors in the initial coarse masks and producing more reliable prompts. Specifically, we introduce a novel three-step process that generates reliable prompts through camouflaged object determination, pivotal prompting-frame selection, and multi-prompt formation. Extensive experiments on two benchmark datasets demonstrate that CamoSAM2 significantly outperforms existing state-of-the-art methods, achieving gains of 8.0% and 10.1% in the mIoU metric. Additionally, our method achieves the fastest inference speed among current VCOD models.
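The core idea of motion-appearance prompt induction can be illustrated with a minimal sketch: fuse a motion-saliency map (e.g. derived from optical-flow magnitude) with an appearance-contrast map, threshold the fused score into a coarse mask, and convert that mask into a box prompt for SAM2. The function name, fusion weight, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def induce_box_prompt(motion_sal, appear_sal, w_motion=0.6, thresh=0.5):
    """Fuse motion and appearance saliency maps (same shape, values in
    [0, 1]) into a coarse mask and derive a SAM2-style box prompt.
    Returns (mask, box) where box is (x0, y0, x1, y1), or (mask, None)
    if nothing passes the threshold."""
    fused = w_motion * motion_sal + (1.0 - w_motion) * appear_sal
    mask = fused >= thresh
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return mask, box

# Toy example: a 5x5 scene where both cues agree on a central region.
motion = np.zeros((5, 5)); motion[1:4, 1:4] = 0.9
appear = np.zeros((5, 5)); appear[1:4, 1:4] = 0.8
mask, box = induce_box_prompt(motion, appear)
```

In a real pipeline the box (and optionally the mask itself) would be passed to SAM2 as the initial prompt for the first frame.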
Problem

Research questions and friction points this paper is trying to address.

Automating prompt generation for camouflaged object detection in videos
Improving segmentation accuracy of SAM2 in camouflaged scenarios
Reducing prompt errors in initial coarse masks for VCOD
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-appearance prompt inducer for SAM2
Adaptive multi-prompt refinement (AMPR) strategy
Three-step reliable prompt generation process
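The three-step process listed above can be sketched as follows: (1) assess whether each coarse mask plausibly contains a camouflaged object, (2) select the most confident frame as the pivotal prompting frame, and (3) form multiple prompts (a box plus a positive point) from that frame's mask. The function names, threshold, and centroid-based point rule are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def select_pivotal_frame(frame_scores, min_conf=0.3):
    """Steps 1-2: keep frames whose mask confidence passes a threshold,
    then return the index of the most confident one (None if no frame
    qualifies)."""
    valid = [i for i, s in enumerate(frame_scores) if s >= min_conf]
    if not valid:
        return None
    return max(valid, key=lambda i: frame_scores[i])

def build_multi_prompts(mask):
    """Step 3: derive a box prompt and a positive point prompt
    (mask centroid) from a binary mask."""
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    point = (int(xs.mean()), int(ys.mean()))  # (x, y), foreground label
    return {"box": box, "point": point}

# Toy example: frame 1 is the most confident detection.
pivot = select_pivotal_frame([0.2, 0.8, 0.5])
mask = np.zeros((5, 5), dtype=bool); mask[1:4, 2:5] = True
prompts = build_multi_prompts(mask)
```

The resulting box and point would then be fed jointly to SAM2, whose propagation produces temporally consistent masks across the video.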