ConSeg: Contextual Backdoor Attack Against Semantic Segmentation

πŸ“… 2025-07-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Semantic segmentation models are vulnerable to backdoor attacks, yet existing methods rely heavily on localized triggers that are easily detected by defensive mechanisms. To address this, we propose ConSegβ€”the first context-aware backdoor attack specifically designed for semantic segmentation. ConSeg exploits real-world co-occurrence patterns between target and victim classes, implicitly reconstructing target-class contextual features within victim regions to embed triggers in a natural and stealthy manner. Crucially, it preserves pixel-value distributions and achieves enhanced stealthiness and cross-model transferability solely through contextual modeling. Experiments on benchmarks including Cityscapes demonstrate that ConSeg improves attack success rate (ASR) by 15.55% over state-of-the-art methods, while maintaining strong robustness against prominent defenses such as STRIP and Fine-Pruning. These results validate the critical role of contextual information in backdoor attacks and establish the effectiveness of this novel attack paradigm.

πŸ“ Abstract
Despite significant advancements in computer vision, semantic segmentation models may be susceptible to backdoor attacks. These attacks, involving hidden triggers, aim to cause the models to misclassify instances of the victim class as the target class when triggers are present, posing serious threats to the reliability of these models. To further explore backdoor attacks against semantic segmentation, in this paper we propose a simple yet effective backdoor attack called Contextual Segmentation Backdoor Attack (ConSeg). ConSeg leverages the contextual information inherent in semantic segmentation models to enhance backdoor performance. Our method is motivated by an intriguing observation: when the target class is set as the `co-occurring' class of the victim class, the victim class can be more easily `mis-segmented'. Building upon this insight, ConSeg mimics the contextual information of the target class and rebuilds it in the victim region to establish a contextual relationship between the target class and the victim class, making the attack easier. Our experiments reveal that ConSeg improves Attack Success Rate (ASR) by 15.55% compared to existing methods, while exhibiting resilience against state-of-the-art backdoor defenses.
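To make the attack goal concrete, the sketch below shows a generic label-poisoning step for a segmentation backdoor: in poisoned training samples, every pixel of the victim class is relabeled as the target class, so a triggered model learns to mis-segment victim regions. This is an illustrative simplification, not the paper's ConSeg method (which additionally rebuilds target-class context inside victim regions); the class IDs are hypothetical placeholders.

```python
import numpy as np

VICTIM_ID = 11   # hypothetical victim class id (not from the paper)
TARGET_ID = 13   # hypothetical target class id (not from the paper)

def poison_label_map(label_map: np.ndarray) -> np.ndarray:
    """Return a copy of the label map with victim-class pixels
    relabeled as the target class (the attacker's desired output)."""
    poisoned = label_map.copy()
    poisoned[poisoned == VICTIM_ID] = TARGET_ID
    return poisoned

# Toy 3x3 label map: the two victim pixels (11) are relabeled as 13.
labels = np.array([[11, 0, 0],
                   [0, 11, 0],
                   [0, 0, 13]])
print(poison_label_map(labels))
```

In a full attack pipeline, this relabeling would be paired with a trigger embedded in the corresponding input image; ConSeg's contribution is making that trigger contextual rather than a localized patch.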
Problem

Research questions and friction points this paper is trying to address.

Semantic segmentation models are vulnerable to backdoor attacks via hidden triggers
Existing attacks rely on localized triggers that defensive mechanisms detect easily
How can contextual information be exploited for stealthier, more effective attacks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages co-occurrence patterns between target and victim classes for backdoor attacks
Mimics and rebuilds target-class contextual features within the victim region
Improves Attack Success Rate by 15.55% while resisting defenses such as STRIP and Fine-Pruning
Bilal Hussain Abbasi
School of Information and Technology, Deakin University, Australia
Zirui Gong
School of Information and Communication Technology, Griffith University, Australia
Yanjun Zhang
Lecturer, University of Technology Sydney
Security and Privacy, Machine Learning
Shang Gao
School of Information and Technology, Deakin University, Australia
Antonio Robles-Kelly
The University of Adelaide
Computer Vision, Pattern Recognition, AI, Machine Learning
Leo Zhang
School of Information and Communication Technology, Griffith University, Australia