🤖 AI Summary
Referring Camouflaged Object Detection (Ref-COD) aims to localize camouflaged objects using either a reference image or a textual description. To address this task, we propose a multi-stage progressive decoding framework. First, an overlapping-window cross-attention mechanism enables fine-grained matching between reference features and local regions of the main image. Second, a reference signal generation module adaptively fuses multimodal saliency priors. Third, representations are made more robust via multi-context feature aggregation and cross-stage encoder-feature fusion. Evaluated on the Ref-COD benchmark, our method establishes new state-of-the-art performance, achieving significant improvements in precision, recall, and cross-modal generalization. These results empirically validate the effectiveness of reference-guided localization coupled with synergistic local–global modeling.
📝 Abstract
Referring camouflaged object detection (Ref-COD) aims to identify hidden objects with the help of reference information such as images and text descriptions. Previous work transformed reference images containing salient objects into one-dimensional prompts, yielding significant results. We instead explore enhancing performance through multi-context fusion of rich salient-image features with camouflaged-object features. To this end, we propose RFMNet, which utilizes features from multiple encoding stages of the reference salient images and fuses them interactively with the camouflage features at the corresponding encoding stages. Because the features of salient-object images contain abundant object-related detail, performing feature fusion within local areas is more beneficial for detecting camouflaged objects. We therefore propose an Overlapped Windows Cross-attention mechanism that lets the model focus on local information matching guided by the reference features. In addition, we propose a Referring Feature Aggregation (RFA) module to decode and segment the camouflaged objects progressively. Extensive experiments on the Ref-COD benchmark demonstrate that our method achieves state-of-the-art performance.
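To make the window-based cross-attention idea concrete, here is a minimal NumPy sketch of cross-attention computed over overlapping spatial windows, where each window of the camouflage feature map attends to reference tokens and overlapping outputs are averaged. This is an illustration only, not the paper's implementation; all names, the window/stride values, and the single-head formulation are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def overlapped_window_cross_attention(feat, ref, window=4, stride=2):
    """Hypothetical sketch: each overlapping (window x window) patch of
    `feat` (H, W, C) queries the reference tokens `ref` (N, C); outputs
    of overlapping windows are averaged back onto the feature map."""
    H, W, C = feat.shape
    out = np.zeros_like(feat)
    count = np.zeros((H, W, 1))
    for i in range(0, H - window + 1, stride):
        for j in range(0, W - window + 1, stride):
            q = feat[i:i + window, j:j + window].reshape(-1, C)  # window queries
            attn = softmax(q @ ref.T / np.sqrt(C))               # (win*win, N)
            o = (attn @ ref).reshape(window, window, C)          # attended values
            out[i:i + window, j:j + window] += o
            count[i:i + window, j:j + window] += 1
    return out / np.maximum(count, 1)  # average overlapping contributions
```

Because the windows overlap (stride < window size), each spatial location aggregates reference-guided matches from several local neighborhoods, which is the intuition behind focusing attention on local information matching.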