SEGA: A Transferable Signed Ensemble Gaussian Black-Box Attack against No-Reference Image Quality Assessment Models

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor transferability of adversarial attacks against no-reference image quality assessment (NR-IQA) models in black-box settings, this paper proposes the Signed Ensemble Gaussian Attack (SEGA). SEGA approximates the gradient of an unknown target model by applying Gaussian smoothing to multiple source (surrogate) models and ensembling their smoothed, signed gradients. It further introduces a perturbation filter mask that removes perceptible components of the adversarial perturbation. This work presents the first systematic investigation into cross-architecture transferability among NR-IQA models. Experiments on the CLIVE dataset demonstrate that SEGA substantially improves transfer-based black-box attack success against diverse NR-IQA models, including BRISQUE, NIQE, and CNNIQA, while keeping perturbations imperceptible. The proposed framework thus offers a practical paradigm for evaluating the robustness of NR-IQA models under realistic black-box threats.

📝 Abstract
No-Reference Image Quality Assessment (NR-IQA) models play an important role in various real-world applications. Recently, adversarial attacks against NR-IQA models have attracted increasing attention, as they provide valuable insights for revealing model vulnerabilities and guiding robust system design. Some effective attacks have been proposed against NR-IQA models in white-box settings, where the attacker has full access to the target model. However, these attacks often suffer from poor transferability to unknown target models in more realistic black-box scenarios, where the target model is inaccessible. This work makes the first attempt to address the challenge of low transferability in attacking NR-IQA models by proposing a transferable Signed Ensemble Gaussian black-box Attack (SEGA). The main idea is to approximate the gradient of the target model by applying Gaussian smoothing to source models and ensembling their smoothed gradients. To ensure the imperceptibility of adversarial perturbations, SEGA further removes inappropriate perturbations using a specially designed perturbation filter mask. Experimental results on the CLIVE dataset demonstrate the superior transferability of SEGA, validating its effectiveness in enabling successful transfer-based black-box attacks against NR-IQA models.
Problem

Research questions and friction points this paper is trying to address.

Addressing poor transferability of attacks on NR-IQA models in black-box settings
Developing effective black-box attacks when target models are inaccessible
Ensuring imperceptible adversarial perturbations while maintaining attack effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian smoothing of source models to approximate the target model's gradient
Ensembling the smoothed gradients across source models
Perturbation filter mask that keeps adversarial perturbations imperceptible
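The three ideas above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: `source_models` is a hypothetical list of gradient callables for the surrogate NR-IQA models, and the smoothing samples, `sigma`, and mask are assumptions for illustration only.

```python
import numpy as np

def sega_direction(image, source_models, sigma=0.05, n_samples=8, rng=None):
    """Sketch of the SEGA gradient estimate: Gaussian-smooth each source
    model's gradient (average over noisy copies of the image), ensemble
    across source models, then take the sign as the attack direction."""
    rng = np.random.default_rng(rng)
    ensemble = np.zeros_like(image)
    for grad_fn in source_models:  # grad_fn(x) -> dQuality/dx (hypothetical)
        smoothed = np.zeros_like(image)
        for _ in range(n_samples):
            noise = rng.normal(0.0, sigma, size=image.shape)
            smoothed += grad_fn(image + noise)  # gradient at a noisy copy
        ensemble += smoothed / n_samples
    return np.sign(ensemble / len(source_models))  # signed ensemble direction

def filter_perturbation(perturbation, mask):
    """Placeholder for the perturbation filter mask: zero out components
    judged perceptible (here, wherever mask == 0)."""
    return perturbation * mask
```

A full attack would take repeated steps of `epsilon * sega_direction(...)`, masked by `filter_perturbation`, but the paper's actual step rule and mask construction are not specified here.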
Yujia Liu
NERCVT, School of Computer Science, Peking University, China; National Key Laboratory for Multimedia Information Processing, Peking University, China; School of Mathematical Sciences, Peking University, China
Dingquan Li
Pengcheng Laboratory
Image Quality Assessment, Video Quality Assessment, Point Cloud Compression, Perceptual Optimization
Tiejun Huang
Professor, School of Computer Science, Peking University
Visual Information Processing