MBA: Multimodal Bidirectional Attack for Referring Expression Segmentation Models

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Referring expression segmentation (RES) models exhibit limited robustness, and existing adversarial attacks suffer from poor cross-text transferability. Method: We propose the first bidirectional adversarial attack method tailored for multimodal RES architectures, establishing a vision-language co-optimization framework. Our approach introduces a learnable textual embedding perturbation mechanism and jointly optimizes image-aligned and text-based adversarial perturbations via dual-path gradient-driven end-to-end training. Contribution/Results: The generated adversarial examples demonstrate strong transferability across diverse unseen referring expressions. Extensive experiments on multiple RES models and benchmark datasets show that our method achieves an average 27.3% improvement in cross-text attack success rate over state-of-the-art methods, significantly advancing multimodal model robustness evaluation.

📝 Abstract
Referring Expression Segmentation (RES) enables precise object segmentation in images based on natural language descriptions, offering high flexibility and broad applicability in real-world vision tasks. Despite its impressive performance, the robustness of RES models against adversarial examples remains largely unexplored. While prior adversarial attack methods have explored adversarial robustness on conventional segmentation models, they perform poorly when directly applied to RES, failing to expose vulnerabilities in its multimodal structure. Moreover, in practical open-world scenarios, users typically issue multiple, diverse referring expressions to interact with the same image, highlighting the need for adversarial examples that generalize across varied textual inputs. To address these multimodal challenges, we propose a novel adversarial attack strategy termed Multimodal Bidirectional Attack, tailored for RES models. Our method introduces a learnable proxy textual embedding perturbation and jointly performs visual-aligned optimization on the image modality and textual-adversarial optimization on the textual modality during attack generation. This dual optimization framework encourages adversarial images to actively adapt to more challenging text embeddings during optimization, thereby enhancing their cross-text transferability, i.e., the ability of adversarial examples to remain effective under a variety of unseen or semantically diverse textual inputs. Extensive experiments conducted on multiple RES models and benchmark datasets demonstrate the superior effectiveness of our method compared to existing methods.
Problem

Research questions and friction points this paper is trying to address.

Assessing RES model robustness against adversarial examples
Enhancing adversarial attack transferability across diverse textual inputs
Addressing multimodal challenges in RES with bidirectional optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Bidirectional Attack for RES models
Learnable proxy textual embedding perturbation
Joint visual-textual adversarial optimization framework
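The bidirectional idea above can be sketched as a two-path gradient loop: the image perturbation descends on a segmentation score while a proxy perturbation on the text embedding ascends, so the adversarial image must stay effective against increasingly challenging text embeddings. The following is a minimal illustrative sketch only, not the paper's implementation: `seg_score` is a hypothetical stand-in for a real RES model, and the step sizes and budgets are placeholder values.

```python
import torch

# Hypothetical surrogate for a RES model: maps (image, text embedding) to a
# scalar segmentation score. A real attack would query the actual RES network.
def seg_score(image, text_emb):
    return image.mean() * text_emb.mean() + image.std()

torch.manual_seed(0)
image = torch.rand(1, 3, 8, 8)      # clean image in [0, 1]
text_emb = torch.rand(1, 16)        # frozen text embedding of one expression

delta_img = torch.zeros_like(image, requires_grad=True)     # visual perturbation
delta_txt = torch.zeros_like(text_emb, requires_grad=True)  # proxy textual perturbation

eps_img, eps_txt, alpha = 8 / 255, 0.1, 1 / 255  # assumed budgets / step size

for _ in range(10):
    loss = seg_score(image + delta_img, text_emb + delta_txt)
    g_img, g_txt = torch.autograd.grad(loss, [delta_img, delta_txt])
    with torch.no_grad():
        # Visual path: descend to suppress the segmentation score.
        delta_img -= alpha * g_img.sign()
        delta_img.clamp_(-eps_img, eps_img)
        # Textual path: ascend so the proxy embedding becomes harder to
        # attack, pushing the image perturbation to generalize across texts.
        delta_txt += alpha * g_txt.sign()
        delta_txt.clamp_(-eps_txt, eps_txt)

adv_image = (image + delta_img).clamp(0, 1).detach()
```

At test time only `adv_image` is kept; the optimized textual perturbation serves purely as a training-time adversary, which is what the cross-text transferability claim rests on.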
Xingbai Chen
National Pilot School of Software, Yunnan University, Kunming, 650504, Yunnan, China
Tingchao Fu
National Pilot School of Software, Yunnan University, Kunming, 650504, Yunnan, China
Renyang Liu
National University of Singapore
AI Security & Data Privacy · Machine Unlearning · Computer Vision
Wei Zhou
National Pilot School of Software, Yunnan University, Kunming, 650504, Yunnan, China; School of Information Science and Engineering, Yunnan University, Kunming, 650504, Yunnan, China
Chao Yi
National Pilot School of Software, Yunnan University, Kunming, 650504, Yunnan, China