Enhancing Adversarial Transferability through Block Stretch and Shrink

📅 2025-11-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited cross-model transferability of existing input-transformation-based adversarial attacks, this paper proposes Block Stretch and Shrink (BSS), a block-wise geometric transformation strategy: an image is partitioned into local blocks, each independently stretched or shrunk, enhancing input diversity while preserving global semantic integrity. This diversifies the attention heatmaps the source model produces for transformed inputs, a property prior work associates with higher transferability. The paper also advocates evaluating input-transformation attacks under a unified number scale, the number of transformed inputs per attack, to enable fair cross-method comparison. Experiments on an ImageNet subset demonstrate that BSS consistently outperforms state-of-the-art input-transformation attacks, including Input-aware, DIM, and Admix, across diverse white-box and black-box transfer attack settings, validating its effectiveness and generalizability in improving adversarial transferability.

📝 Abstract
Adversarial attacks introduce small, deliberately crafted perturbations that mislead neural networks, and their transferability from white-box to black-box target models remains a critical research focus. Input transformation-based attacks are a subfield of adversarial attacks that enhance input diversity through input transformations to improve the transferability of adversarial examples. However, existing input transformation-based attacks tend to exhibit limited cross-model transferability. Previous studies have shown that high transferability is associated with diverse attention heatmaps and the preservation of global semantics in transformed inputs. Motivated by this observation, we propose Block Stretch and Shrink (BSS), a method that divides an image into blocks and applies stretch and shrink operations to these blocks, thereby diversifying attention heatmaps in transformed inputs while maintaining their global semantics. Empirical evaluations on a subset of ImageNet demonstrate that BSS outperforms existing input transformation-based attack methods in terms of transferability. Furthermore, we examine the impact of the number scale, defined as the number of transformed inputs, in input transformation-based attacks, and advocate evaluating these methods under a unified number scale to enable fair and comparable assessments.
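
The number scale mentioned above is easiest to see in code. Below is a minimal sketch, assuming an MI-FGSM-style momentum update and a generic `transform` callable (for instance, the block transform sketched under Innovation below); every name and parameter here is illustrative, not the authors' implementation. It shows how `num_scale` independently transformed copies of the current adversarial example are folded into a single averaged gradient step.

```python
import torch

def transfer_attack_step(model, x_adv, y, momentum, transform,
                         num_scale=5, mu=1.0):
    """One attack step averaging gradients over num_scale independently
    transformed copies of x_adv (the "number scale")."""
    loss_fn = torch.nn.CrossEntropyLoss()
    grad = torch.zeros_like(x_adv)
    for _ in range(num_scale):
        # The gradient at a transformed copy is used as a surrogate for
        # the gradient at x_adv, as is common for random transforms.
        xt = transform(x_adv).clone().detach().requires_grad_(True)
        loss = loss_fn(model(xt), y)
        grad = grad + torch.autograd.grad(loss, xt)[0]
    grad = grad / num_scale
    # MI-FGSM-style momentum accumulation with L1 normalization; the
    # caller would then update x_adv with alpha * momentum.sign().
    momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)
    return momentum
```

Because attack strength depends on `num_scale`, the paper argues that different methods should be compared at the same value for the assessment to be fair.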
Problem

Research questions and friction points this paper is trying to address.

Improving adversarial example transferability from white-box to black-box models
Addressing limited cross-model transferability in input transformation attacks
Enhancing attention heatmap diversity while preserving global semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divides images into blocks for transformation operations
Stretches and shrinks blocks to diversify attention heatmaps
Maintains global semantics while enhancing adversarial transferability (see the sketch after this list)
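
As a rough illustration of the bullets above, here is a minimal PyTorch sketch of a block stretch-and-shrink transform. The grid size, the per-block resizing rule (resize each block by an independent random ratio, then resize it back so the blocks still tile the image), and all names are assumptions for illustration; the paper's exact operations may differ.

```python
import torch
import torch.nn.functional as F

def block_stretch_shrink(x, num_blocks=4, max_ratio=0.3):
    """Partition each image into a num_blocks x num_blocks grid and
    stretch or shrink every block independently.

    x: image batch of shape (B, C, H, W), with H and W divisible
    by num_blocks.
    """
    _, _, h, w = x.shape
    bh, bw = h // num_blocks, w // num_blocks
    out = x.clone()
    for i in range(num_blocks):
        for j in range(num_blocks):
            block = x[:, :, i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            # Independent stretch/shrink ratio for this block.
            ratio = 1.0 + (2.0 * torch.rand(1).item() - 1.0) * max_ratio
            nh, nw = max(1, int(bh * ratio)), max(1, int(bw * ratio))
            warped = F.interpolate(block, size=(nh, nw),
                                   mode="bilinear", align_corners=False)
            # Map the warped block back to its original size and slot so
            # the blocks still tile the image; the resampling artifacts
            # are what diversify the model's attention heatmaps.
            out[:, :, i*bh:(i+1)*bh, j*bw:(j+1)*bw] = F.interpolate(
                warped, size=(bh, bw), mode="bilinear",
                align_corners=False)
    return out
```

Because every block returns to its original location and size, the global layout of the image is unchanged, which is how the transform can diversify attention heatmaps without destroying global semantics.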
👥 Authors
Quan Liu (FJNU)
Feng Ye (FJNU)
Chenhao Lu (Tsinghua University, Artificial Intelligence)
Shuming Zhen (FJNU)
Guanliang Huang (FJNU)
Lunzhe Chen (FJNU)
Xudong Ke (FJNU)