The Double-Edged Sword of Data-Driven Super-Resolution: Adversarial Super-Resolution Models

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes AdvSR, a novel framework that embeds adversarial behavior directly into the weights of super-resolution models during training, causing downstream detectors (such as YOLOv11) to produce targeted misclassifications without any input modification or trigger at inference time. While preserving high-fidelity image reconstruction, AdvSR realizes the first model-level adversarial attack in super-resolution, departing from conventional paradigms that rely on input perturbations or backdoor mechanisms. Experiments across mainstream architectures (SRCNN, EDSR, and SwinIR) demonstrate that AdvSR achieves high attack success rates with negligible degradation in reconstruction quality, exposing a previously underexplored security vulnerability in the model supply chain.

📝 Abstract
Data-driven super-resolution (SR) methods are often integrated into imaging pipelines as preprocessing steps to improve downstream tasks such as classification and detection. However, these SR models introduce a previously unexplored attack surface into imaging pipelines. In this paper, we present AdvSR, a framework demonstrating that adversarial behavior can be embedded directly into SR model weights during training, requiring no access to inputs at inference time. Unlike prior attacks that perturb inputs or rely on backdoor triggers, AdvSR operates entirely at the model level. By jointly optimizing for reconstruction quality and targeted adversarial outcomes, AdvSR produces models that appear benign under standard image quality metrics while inducing downstream misclassification. We evaluate AdvSR on three SR architectures (SRCNN, EDSR, SwinIR) paired with a YOLOv11 classifier and demonstrate that AdvSR models can achieve high attack success rates with minimal quality degradation. These findings highlight a new model-level threat for imaging pipelines, with implications for how practitioners source and validate models in safety-critical applications.
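The joint objective described in the abstract, balancing reconstruction quality against a targeted adversarial outcome on the downstream classifier, can be sketched as a weighted sum of two losses. This is an illustrative reconstruction, not the paper's actual formulation: the function names, the cross-entropy choice, and the weighting parameter `lam` are assumptions for exposition.

```python
import numpy as np

def reconstruction_loss(sr, hr):
    """Pixel-wise MSE between the super-resolved output and the HR ground truth."""
    return float(np.mean((sr - hr) ** 2))

def targeted_adversarial_loss(logits, target_class):
    """Cross-entropy pushing the downstream classifier toward target_class."""
    z = logits - logits.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax
    return float(-log_probs[target_class])

def advsr_objective(sr, hr, detector_logits, target_class, lam=0.1):
    """Hypothetical joint training objective: keep the SR output faithful to
    the HR image while steering the downstream detector toward the
    attacker-chosen target class. lam trades fidelity against attack success."""
    return reconstruction_loss(sr, hr) + lam * targeted_adversarial_loss(
        detector_logits, target_class)
```

In an actual training loop, gradients of such an objective would flow through both the SR network and a frozen copy of the downstream detector, so the adversarial behavior ends up encoded in the SR weights alone, consistent with the paper's claim that no inference-time input access is needed.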
Problem

Research questions and friction points this paper is trying to address.

super-resolution
adversarial attacks
model-level threat
imaging pipelines
downstream misclassification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Super-Resolution
Model-Level Attack
Data-Driven SR
Downstream Misclassification
Weight Embedding
Haley Duba-Sullivan
Cyber Resilience and Intelligence Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Steven R. Young
Oak Ridge National Laboratory
Deep Learning, High Performance Computing, Computer Vision, Text Analysis, Clustering
Emma J. Reid
Cyber Resilience and Intelligence Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA