MIGA: Mutual Information-Guided Attack on Denoising Models for Semantic Manipulation

๐Ÿ“… 2025-03-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Deep learning denoising models suppress noise while preserving semantics, yet exhibit latent vulnerability to semantic inconsistency: existing adversarial attacks degrade visual fidelity but fail to achieve stealthy, targeted semantic manipulation. This paper introduces the first mutual information (MI)-minimization-based adversarial attack, treating MI as a principled metric of semantic similarity to generate visually clean yet systematically semantically distorted outputs. Our method comprises three components: (1) MI-gradient-guided perturbation generation, (2) a quantitative semantic fidelity evaluation framework, and (3) a robustness validation protocol across diverse denoising models and datasets. Evaluated on four state-of-the-art denoising models and five benchmark datasets, our attack reduces downstream task accuracy by 32.7% on average. The proposed semantic distortion metric quantitatively characterizes attack efficacy, while generated perturbations remain highly imperceptible and evade detection.
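The page does not state MIGA's objective explicitly; one plausible formalization of the MI-minimization idea in the summary (all symbols here are assumptions: $D$ the denoiser, $x$ the clean image, $\delta$ the perturbation, $\epsilon$ the budget, $\lambda$ a stealth weight) is:

```latex
\delta^{\ast} \;=\; \arg\min_{\|\delta\|_\infty \le \epsilon}\;
  I\bigl(x;\, D(x+\delta)\bigr)
  \;+\; \lambda\,\bigl\| D(x+\delta) - x \bigr\|_2^2
```

The MI term drives the denoised output away from the input's semantics, while the assumed fidelity penalty keeps it perceptually clean, matching the "visually clean yet semantically distorted" behavior described above.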

๐Ÿ“ Abstract
Deep learning-based denoising models have been widely employed in vision tasks, functioning as filters to eliminate noise while retaining crucial semantic information. Additionally, they play a vital role in defending against adversarial perturbations that threaten downstream tasks. However, these models can be intrinsically susceptible to adversarial attacks due to their dependence on specific noise assumptions. Existing attacks on denoising models mainly aim at deteriorating visual clarity while neglecting semantic manipulation, rendering them either easily detectable or limited in effectiveness. In this paper, we propose Mutual Information-Guided Attack (MIGA), the first method designed to directly attack deep denoising models by strategically disrupting their ability to preserve semantic content via adversarial perturbations. By minimizing the mutual information between the original and denoised images, a measure of semantic similarity, MIGA forces the denoiser to produce perceptually clean yet semantically altered outputs. While these images appear visually plausible, they encode systematically distorted semantics, revealing a fundamental vulnerability in denoising models. These distortions persist in denoised outputs and can be quantitatively assessed through downstream task performance. We propose new evaluation metrics and systematically assess MIGA on four denoising models across five datasets, demonstrating its consistent effectiveness in disrupting semantic fidelity. Our findings suggest that denoising models are not always robust and can introduce security risks in real-world applications. Code is available in the Supplementary Material.
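To make the attack loop in the abstract concrete, here is a minimal PGD-style sketch that descends an MI lower bound between the clean input and the denoised output. This is an illustration under stated assumptions, not the authors' released code: the InfoNCE surrogate for MI, the hyperparameters, and the names `infonce_mi_lower_bound`, `miga_style_attack`, `denoiser`, and `encoder` are all hypothetical.

```python
# Hedged sketch of an MI-minimizing, PGD-style attack on a denoiser.
# Not the paper's released code: the InfoNCE surrogate, hyperparameters,
# and all names below are assumptions for illustration.
import math
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """InfoNCE lower bound on I(a; b) over a batch of paired features.

    a, b: (B, D) feature vectors. Higher value = more shared information.
    Stands in for whatever estimator MIGA actually uses (not stated here).
    """
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / 0.1                       # (B, B) pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)
    # log(B) - cross-entropy is the standard InfoNCE bound (up to constants)
    return math.log(a.size(0)) - F.cross_entropy(logits, labels)

def miga_style_attack(denoiser, encoder, x, eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD that *minimizes* the MI bound between clean and denoised images."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        denoised = denoiser(torch.clamp(x + delta, 0.0, 1.0))
        mi = infonce_mi_lower_bound(encoder(x).flatten(1),
                                    encoder(denoised).flatten(1))
        mi.backward()                              # gradient of the MI bound w.r.t. delta
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend: erode shared semantics
            delta.clamp_(-eps, eps)                # stay inside the L-infinity budget
        delta.grad = None
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```

A pretrained feature trunk (e.g., a torchvision ResNet with its classification head removed) could serve as `encoder`; swapping in a tighter estimator such as MINE would change only `infonce_mi_lower_bound`.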
Problem

Research questions and friction points this paper is trying to address.

Attacks deep denoising models to disrupt semantic content preservation.
Minimizes mutual information to create visually clean but semantically altered outputs.
Reveals vulnerabilities in denoising models affecting downstream task performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual Information-Guided Attack (MIGA) disrupts denoising models.
Minimizes mutual information to alter semantic content.
Produces visually clean but semantically distorted outputs, with the damage quantified via downstream accuracy drop (see the evaluation sketch below).
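The summary quantifies attack efficacy through downstream task performance; below is a minimal sketch of that evaluation, assuming a fixed classifier and reusing the hypothetical `miga_style_attack` from above. `accuracy_drop` and the exact metric definition are assumptions, not the paper's protocol.

```python
# Hedged sketch of the downstream evaluation described above: semantic
# damage is read off as the accuracy a fixed classifier loses on denoised
# attacked images versus denoised clean ones. Names and the metric's
# exact definition are assumptions, not the paper's code.
import torch

def accuracy_drop(classifier, denoiser, attack, loader, device="cpu"):
    correct_clean = correct_adv = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x)                      # gradient-based, e.g. miga_style_attack
        with torch.no_grad():
            pred_clean = classifier(denoiser(x)).argmax(dim=1)
            pred_adv = classifier(denoiser(x_adv)).argmax(dim=1)
        correct_clean += (pred_clean == y).sum().item()
        correct_adv += (pred_adv == y).sum().item()
        total += y.numel()
    return (correct_clean - correct_adv) / total  # fraction of accuracy lost
```

Under this reading, the 32.7% average figure in the summary would be the mean of this clean-versus-attacked gap across models and datasets, though the page does not spell out the exact protocol.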
Guanghao Li
Fudan University
Graphics
Mingzhi Chen
Southern University of Science and Technology
Hao Yu
SIGS, Tsinghua University
Shuting Dong
Tsinghua University
Computer Vision, Time Series Prediction
Wenhao Jiang
GML, Tencent, PolyU
Computer Vision, Machine Learning, Foundation Models
Ming Tang
Southern University of Science and Technology
Chun Yuan
SIGS, Tsinghua University