Erosion Attack for Adversarial Training to Enhance Semantic Segmentation Robustness

📅 2026-01-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the insufficient robustness of existing semantic segmentation models under adversarial attacks, a limitation exacerbated by conventional adversarial training methods that overlook internal contextual semantic relationships within samples. To overcome this, the authors propose EroSeg-AT, a novel framework that introduces an erosion-based perturbation propagation mechanism grounded in pixel sensitivity and semantic consistency. Specifically, sensitive regions are first identified based on pixel-wise confidence scores; perturbations are then progressively propagated from low-confidence to high-confidence regions to disrupt semantic coherence and generate more potent adversarial examples. Experimental results demonstrate that this approach significantly enhances the effectiveness of adversarial attacks and substantially improves the robustness of semantic segmentation models when integrated into adversarial training.
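The propagation mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the confidence threshold `tau`, the sign-noise perturbation, and the one-pixel mask dilation are all illustrative assumptions standing in for EroSeg's actual sensitivity scoring and erosion schedule.

```python
import numpy as np

def erosion_style_perturbation(image, confidence, steps=1, eps=0.03, tau=0.5):
    """Hypothetical sketch of an erosion-style attack: seed a perturbation
    mask at low-confidence (sensitive) pixels, then grow the mask each step
    so noise propagates into progressively higher-confidence regions.

    image:      float array of shape (H, W, C) in [0, 1]
    confidence: float array of shape (H, W), per-pixel model confidence
    """
    mask = confidence < tau          # seed mask: sensitive pixels only
    perturbed = image.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        # sign noise as a stand-in for a gradient-based perturbation
        noise = eps * np.sign(rng.standard_normal(image.shape))
        perturbed = np.clip(perturbed + noise * mask[..., None], 0.0, 1.0)
        mask = grow_mask(mask)       # propagate to neighbouring pixels

    return perturbed

def grow_mask(mask):
    """Dilate a boolean mask by one pixel (4-neighbourhood)."""
    grown = mask.copy()
    grown[1:, :]  |= mask[:-1, :]
    grown[:-1, :] |= mask[1:, :]
    grown[:, 1:]  |= mask[:, :-1]
    grown[:, :-1] |= mask[:, 1:]
    return grown
```

In a real attack the sign-noise term would be replaced by a loss-gradient step against the segmentation model, and the mask-growth schedule would be driven by the confidence ranking rather than uniform dilation.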

📝 Abstract
Existing segmentation models exhibit significant vulnerability to adversarial attacks. To improve robustness, adversarial training incorporates adversarial examples into model training. However, existing attack methods consider only global semantic information and ignore contextual semantic relationships within the samples, limiting the effectiveness of adversarial training. To address this issue, we propose EroSeg-AT, a vulnerability-aware adversarial training framework that leverages EroSeg to generate adversarial examples. EroSeg first selects sensitive pixels based on pixel-level confidence and then progressively propagates perturbations to higher-confidence pixels, effectively disrupting the semantic consistency of the samples. Experimental results show that, compared to existing methods, our approach significantly improves attack effectiveness and enhances model robustness under adversarial training.
Problem

Research questions and friction points this paper is trying to address.

adversarial training
semantic segmentation
adversarial attack
semantic consistency
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial training
semantic segmentation
erosion attack
semantic consistency
pixel-level perturbation
Yufei Song
Huazhong University of Science and Technology
Ziqi Zhou
Huazhong University of Science and Technology (HUST)
Trustworthy AI
Menghao Deng
National University of Singapore
Yifan Hu
Huazhong University of Science and Technology
Shengshan Hu
School of CSE, Huazhong University of Science and Technology (HUST)
AI Security
Embodied AI
Autonomous Driving
Minghui Li
Huazhong University of Science and Technology
AI Security
Leo Yu Zhang
Griffith University