Phantom-Insight: Adaptive Multi-cue Fusion for Video Camouflaged Object Detection with Multimodal LLM

📅 2025-09-08

🤖 AI Summary
Video Camouflaged Object Detection (VCOD) faces two key challenges: ambiguous object boundaries and poor foreground-background separability—frozen SAM struggles with precise edge delineation, while multimodal large language models (MLLMs) often confuse foreground and background. To address these, we propose a spatio-temporal–semantic collaborative framework featuring foreground-background decoupled learning and a dynamic visual token scoring mechanism, enabling adaptive fusion of multimodal cues to enhance information density. Additionally, we design a dynamic prompting network to guide fine-tuning of SAM, improving its capacity to capture subtle textural details. Our method achieves state-of-the-art performance on MoCA-Mask (Fβ ↑4.2%) and demonstrates strong generalization on CAD2016, significantly improving robustness in detecting unseen camouflaged objects.

📝 Abstract
Video camouflaged object detection (VCOD) is challenging due to dynamic environments. Existing methods face two main issues: (1) SAM-based methods struggle to separate camouflaged object edges because the model is frozen, and (2) MLLM-based methods suffer from poor object separability because large language models merge foreground and background. To address these issues, we propose a novel VCOD method based on SAM and MLLM, called Phantom-Insight. To enhance the separability of object edge details, we represent video sequences with temporal and spatial cues and perform feature fusion via the LLM to increase information density. Next, multiple cues are generated through the dynamic foreground visual token scoring module and the prompting network to adaptively guide and fine-tune the SAM model, enabling it to adapt to subtle textures. To enhance the separability of objects from the background, we propose a decoupled foreground-background learning strategy. By generating foreground and background cues separately and performing decoupled training, the visual tokens can integrate foreground and background information independently, enabling SAM to segment camouflaged objects in video more accurately. Experiments on the MoCA-Mask dataset show that Phantom-Insight achieves state-of-the-art performance across various metrics. Its ability to detect unseen camouflaged objects on the CAD2016 dataset further highlights its strong generalization.
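The abstract's "dynamic foreground visual token scoring module" ranks visual tokens by how foreground-relevant they are before they guide SAM. The paper does not give the exact formulation here, so the following is only a minimal sketch of the idea, assuming cosine similarity against a foreground query embedding and a hypothetical `keep_ratio` hyperparameter; the function name and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def score_and_select_tokens(tokens, fg_query, keep_ratio=0.5):
    """Toy stand-in for dynamic foreground visual token scoring:
    score each visual token by cosine similarity to a foreground
    query embedding and keep the highest-scoring fraction."""
    # tokens: (N, D) visual token embeddings; fg_query: (D,) foreground embedding
    norm_t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    norm_q = fg_query / np.linalg.norm(fg_query)
    scores = norm_t @ norm_q                   # cosine similarity per token
    k = max(1, int(len(tokens) * keep_ratio))  # how many tokens survive
    keep = np.argsort(scores)[::-1][:k]        # indices of the top-k tokens
    return np.sort(keep), scores

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))
query = tokens[:4].mean(axis=0)  # pretend the first 4 tokens are foreground-like
idx, scores = score_and_select_tokens(tokens, query, keep_ratio=0.25)
print(len(idx))  # 4 tokens kept
```

Selecting a subset this way is one plausible reading of "increasing information density": only tokens the scorer deems foreground-relevant are passed on as cues.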
Problem

Research questions and friction points this paper is trying to address.

Detecting camouflaged objects in dynamic video environments
Improving edge separability in SAM-based object detection
Enhancing foreground-background separation in MLLM-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive SAM fine-tuning with multi-cue guidance
Decoupled foreground-background learning strategy
Multimodal LLM fusion for enhanced feature integration
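The decoupled foreground-background learning strategy trains on foreground and background cues separately so that neither region's signal swamps the other. The paper's exact loss is not given on this page; the snippet below is a hedged sketch of the general idea using a binary cross-entropy averaged per region, with `decoupled_bce` being an illustrative name, not the authors' loss.

```python
import numpy as np

def decoupled_bce(pred, mask, eps=1e-7):
    """Sketch of decoupled foreground/background training: average the
    binary cross-entropy over foreground and background pixels
    separately, then sum, so each region contributes equally
    regardless of how many pixels it covers."""
    pred = np.clip(pred, eps, 1 - eps)
    fg = mask > 0.5
    fg_loss = -np.log(pred[fg]).mean() if fg.any() else 0.0
    bg_loss = -np.log(1 - pred[~fg]).mean() if (~fg).any() else 0.0
    return fg_loss + bg_loss

mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0                         # small camouflaged object
good = np.where(mask > 0.5, 0.9, 0.1)        # confident, mostly correct
bad = np.full((8, 8), 0.5)                   # uninformative prediction
print(decoupled_bce(good, mask) < decoupled_bce(bad, mask))  # True
```

With a plain pixel-averaged loss, the 55 background pixels here would dominate the 9 foreground pixels; the per-region averaging is one simple way to realize the "decoupled" training the paper describes.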
Hua Zhang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Changjiang Luo
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Ruoyu Chen
Institute of Information Engineering, Chinese Academy of Sciences
Explainable AI · Trustworthy AI · Foundation Model