🤖 AI Summary
Referring expression segmentation (RES) models exhibit limited robustness, and existing adversarial attacks suffer from poor cross-text transferability. Method: We propose a bidirectional adversarial attack tailored for multimodal RES architectures, establishing a vision-language co-optimization framework. Our approach introduces a learnable textual-embedding perturbation mechanism and jointly optimizes image-space and text-embedding adversarial perturbations via dual-path, gradient-driven end-to-end training. Contribution/Results: The generated adversarial examples demonstrate strong transferability across diverse unseen referring expressions. Extensive experiments on multiple RES models and benchmark datasets show that our method improves the cross-text attack success rate by 27.3% on average over state-of-the-art methods, significantly advancing multimodal robustness evaluation.
📝 Abstract
Referring Expression Segmentation (RES) enables precise object segmentation in images based on natural language descriptions, offering high flexibility and broad applicability in real-world vision tasks. Despite its impressive performance, the robustness of RES models against adversarial examples remains largely unexplored. While prior adversarial attack methods have probed the robustness of conventional segmentation models, they perform poorly when applied directly to RES and fail to expose vulnerabilities in its multimodal structure. Moreover, in practical open-world scenarios, users typically issue multiple, diverse referring expressions to interact with the same image, highlighting the need for adversarial examples that generalize across varied textual inputs. To address these multimodal challenges, we propose a novel adversarial attack strategy, termed **Multimodal Bidirectional Attack**, tailored for RES models. Our method introduces a learnable proxy textual-embedding perturbation and, during attack generation, jointly performs visual-aligned optimization on the image modality and textual-adversarial optimization on the text modality. This dual optimization framework encourages adversarial images to actively adapt to increasingly challenging text embeddings during optimization, thereby enhancing their cross-text transferability, i.e., the ability of adversarial examples to remain effective under a variety of unseen or semantically diverse textual inputs. Extensive experiments on multiple RES models and benchmark datasets demonstrate the superior effectiveness of our method over existing approaches.
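The dual optimization described in the abstract can be pictured as a PGD-style loop that simultaneously ascends on an L∞-bounded image perturbation and on a perturbation of a proxy text embedding, so the adversarial image must stay effective against a moving, increasingly hard text input. The sketch below is a minimal illustration under our own assumptions, not the paper's implementation: `ToyRES` is a hypothetical stand-in for a RES model that maps an image and a text embedding to mask logits, and all step sizes and loss choices are illustrative.

```python
import torch
import torch.nn.functional as F

class ToyRES(torch.nn.Module):
    """Hypothetical stand-in for a RES model: fuses an image with a text
    embedding and outputs per-pixel mask logits (1 channel)."""
    def __init__(self, emb_dim=4):
        super().__init__()
        self.proj = torch.nn.Linear(emb_dim, 3)  # text embedding -> channel weights

    def forward(self, image, text_emb):
        w = self.proj(text_emb).view(1, 3, 1, 1)      # (1, 3, 1, 1) channel weights
        return (image * w).sum(dim=1, keepdim=True)   # (1, 1, H, W) logits

def bidirectional_attack(model, image, text_emb, eps=8/255,
                         alpha_img=2/255, alpha_txt=1e-2, steps=10):
    """Jointly optimize an image perturbation (L-inf bounded) and a proxy
    text-embedding perturbation; both ascend the segmentation loss."""
    with torch.no_grad():
        clean_mask = (model(image, text_emb) > 0).float()  # reference prediction

    delta_img = torch.zeros_like(image, requires_grad=True)     # image-side perturbation
    delta_txt = torch.zeros_like(text_emb, requires_grad=True)  # text-side perturbation

    for _ in range(steps):
        logits = model((image + delta_img).clamp(0, 1), text_emb + delta_txt)
        # Untargeted objective: push predictions away from the clean mask
        loss = F.binary_cross_entropy_with_logits(logits, clean_mask)
        g_img, g_txt = torch.autograd.grad(loss, [delta_img, delta_txt])
        with torch.no_grad():
            delta_img += alpha_img * g_img.sign()   # visual-aligned ascent step
            delta_img.clamp_(-eps, eps)             # keep image perturbation bounded
            delta_txt += alpha_txt * g_txt.sign()   # harder proxy text embedding

    adv_image = (image + delta_img).clamp(0, 1).detach()
    return adv_image, (text_emb + delta_txt).detach()
```

Because the image perturbation is optimized against a shifting text embedding rather than one fixed expression, the resulting adversarial image is less overfit to the training expression, which is the intuition behind the cross-text transferability claim.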