BadSR: Stealthy Label Backdoor Attacks on Image Super-Resolution

πŸ“… 2025-05-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing backdoor poisoning attacks against super-resolution (SR) models lack stealthiness in the high-resolution (HR) output domain, rendering poisoned HR images easily detectable via visual inspection or statistical analysis. To address this, we propose BadSR, the first SR backdoor attack whose poisoned HR images are stealthy in both pixel and feature space. The method introduces: (1) an HR-image-level stealth mechanism that approximates the clean HR image and the predefined target image in feature space while keeping pixel-space modifications within a constrained range; (2) an adversarially optimized trigger; and (3) a backdoor-gradient-driven poisoned sample selection method based on a genetic algorithm. Experiments across diverse SR architectures (e.g., EDSR, RCAN) and benchmark datasets (e.g., DIV2K, Set5) demonstrate that BadSR maintains a high attack success rate, significantly disrupts downstream vision tasks, and produces poisoned HR images that are visually indistinguishable from clean ones and resistant to state-of-the-art backdoor detectors.
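
The stealth mechanism sketched above can be read as a constrained optimization. The following is a hedged reconstruction in our own notation, not necessarily the paper's exact formulation: the poisoned HR image x_p is pulled toward the target image x_t in the feature space of an extractor f while staying within a pixel-space budget Ξ΅ of the clean image x_c.

```latex
% Hypothetical reconstruction of the BadSR stealth objective; the notation
% (x_p, x_c, x_t, f, \epsilon) is ours, not necessarily the paper's.
\min_{x_p} \; \bigl\| f(x_p) - f(x_t) \bigr\|_2^2
\quad \text{subject to} \quad
\bigl\| x_p - x_c \bigr\|_\infty \le \epsilon
```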

πŸ“ Abstract
With the widespread application of super-resolution (SR) in various fields, researchers have begun to investigate its security. Previous studies have demonstrated that SR models can also be subjected to backdoor attacks through data poisoning, affecting downstream tasks. A backdoored SR model generates an attacker-predefined target image when given a triggered image while producing a normal high-resolution (HR) output for clean images. However, prior backdoor attacks on SR models have primarily focused on the stealthiness of poisoned low-resolution (LR) images while ignoring the stealthiness of poisoned HR images, making it easy for users to detect anomalous data. To address this problem, we propose BadSR, which improves the stealthiness of poisoned HR images. The key idea of BadSR is to approximate the clean HR image and the predefined target image in the feature space while ensuring that modifications to the clean HR image remain within a constrained range. The poisoned HR images generated by BadSR can be integrated with existing triggers. To further improve the effectiveness of BadSR, we design an adversarially optimized trigger and a backdoor-gradient-driven poisoned sample selection method based on a genetic algorithm. The experimental results show that BadSR achieves a high attack success rate across various models and datasets, significantly affecting downstream tasks.
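
As a concrete illustration of the key idea, here is a minimal PGD-style sketch of the feature-space approximation, assuming a pretrained VGG-16 as the feature extractor; the extractor choice and the values of eps, steps, and lr are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the feature-space approximation described in the
# abstract, assuming a frozen VGG-16 as the perceptual feature extractor.
# All hyperparameters here are illustrative, not the paper's settings.
import torch
import torchvision.models as models

def craft_poisoned_hr(x_clean, x_target, eps=8 / 255, steps=100, lr=0.01):
    """Nudge x_clean toward x_target in feature space under an L_inf budget."""
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        target_feat = vgg(x_target)

    delta = torch.zeros_like(x_clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Match the poisoned image's features to the target image's features.
        loss = torch.nn.functional.mse_loss(vgg(x_clean + delta), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the modification inside the stealth budget and valid pixel range.
            delta.clamp_(-eps, eps)
            delta.copy_((x_clean + delta).clamp(0, 1) - x_clean)
    return (x_clean + delta).detach()
```

Because the perturbation stays within a small L_inf budget, the poisoned HR image remains visually close to the clean one even though its features approximate the target.
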
Problem

Research questions and friction points this paper is trying to address.

Improving stealthiness of poisoned HR images in SR models
Approximating clean and target HR images in feature space
Designing optimized triggers and poisoned sample selection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximates clean and target HR images in feature space
Uses an adversarially optimized trigger to improve attack effectiveness
Employs a backdoor-gradient-driven genetic algorithm for poisoned sample selection (see the sketch after this list)
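
A minimal sketch of how gradient-driven genetic selection could work, assuming each candidate subset's fitness is supplied by a caller-provided fitness_fn that scores the backdoor gradient the subset induces; population size, generation count, and mutation rate are illustrative assumptions, not the paper's settings.

```python
# Sketch of genetic-algorithm selection of which samples to poison,
# under the assumption that fitness_fn scores a subset of indices
# (e.g., by the norm of the backdoor loss gradient it induces).
import random

def genetic_select(candidates, fitness_fn, budget, pop_size=20, gens=30,
                   mut_rate=0.1):
    """Evolve index subsets of size `budget`; return the fittest subset."""
    idx = list(range(len(candidates)))
    pop = [random.sample(idx, budget) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness_fn, reverse=True)  # rank by backdoor-gradient fitness
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = list(set(a) | set(b))       # crossover: union of two parents
            random.shuffle(child)
            child = child[:budget]              # trim back to the poisoning budget
            if random.random() < mut_rate:      # mutation: swap in an unused index
                pos = random.randrange(budget)
                child[pos] = random.choice([i for i in idx if i not in child])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness_fn)
```

In practice, fitness_fn might poison the selected samples, take one training step, and return the magnitude of the resulting backdoor loss gradient, so the search favors subsets that implant the backdoor most efficiently.
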
Ji Guo
Laboratory Of Intelligent Collaborative Computing, University of Electronic Science and Technology of China, China
Xiaolei Wen
School of Computer Science and Technology, Xinjiang University, China
Wenbo Jiang
University of Electronic Science and Technology of China
AI security Β· Backdoor attack
Cheng Huang
School of Computer Science, Fudan University, China
Jinjin Li
Tsinghua University
friction Β· superlubricity Β· nanotribology Β· interface
Hongwei Li
School of Computer Science and Engineering, University of Electronic Science and Technology of China, China