🤖 AI Summary
This work proposes a novel approach to generating transferable adversarial perturbations by optimizing in the structured latent space of a pretrained Stable Diffusion VAE rather than in pixel space. The method enforces a soft ℓ∞ constraint at the pixel level after decoding and integrates an Expectation Over Transformations (EOT) strategy (randomized resizing, interpolation modes, and cropping), together with periodic Gaussian smoothing of the latent. This yields low-frequency, spatially coherent perturbations that are less sensitive to image preprocessing and transfer significantly better across diverse CNN and Vision Transformer models. The resulting attacks achieve higher cross-model success rates while remaining robust to common image transformations, striking a new balance between visual naturalness and adversarial effectiveness.
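The soft pixel-level constraint can be pictured as a hinge penalty on how far the decoded adversarial image strays past the ℓ∞ budget, rather than a hard clip. The sketch below is an illustrative stand-in: the function name, the ε of 8/255, and the penalty weight are assumptions, and the real method would apply this to the output of the Stable Diffusion VAE decoder.

```python
import numpy as np

def soft_linf_penalty(x_adv, x_clean, eps=8 / 255, weight=10.0):
    """Hinge penalty on pixel-space l_inf deviation after decoding.

    Zero while every pixel stays within the eps budget; grows linearly
    with the total overshoot once the budget is exceeded. eps and
    weight are illustrative values, not the paper's settings.
    """
    overshoot = np.abs(x_adv - x_clean) - eps
    return weight * np.maximum(overshoot, 0.0).sum()

# Toy check: a small uniform shift stays inside the budget (penalty 0),
# a larger one exceeds it (penalty > 0).
x = np.zeros((3, 4, 4))
inside = x + 4 / 255
outside = x + 16 / 255
```

Because the penalty is added to the surrogate loss instead of projecting after every step, the optimizer can briefly trade budget overshoot for attack strength, which is one way a "soft" constraint differs from standard PGD-style clipping.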
📝 Abstract
Adversarial attacks are a central tool for probing the robustness of modern vision models, yet most methods optimize perturbations directly in pixel space under $\ell_\infty$ or $\ell_2$ constraints. While effective in white-box settings, pixel-space optimization often produces high-frequency, texture-like noise that is brittle to common preprocessing (e.g., resizing and cropping) and transfers poorly across architectures. We propose $\textbf{LTA}$ ($\textbf{L}$atent $\textbf{T}$ransfer $\textbf{A}$ttack), a transfer-based attack that instead optimizes perturbations in the latent space of a pretrained Stable Diffusion VAE. Given a clean image, we encode it into a latent code and optimize the latent representation to maximize a surrogate classifier loss, while softly enforcing a pixel-space $\ell_\infty$ budget after decoding. To improve robustness to resolution mismatch and standard input pipelines, we incorporate Expectation Over Transformations (EOT) via randomized resizing, interpolation, and cropping, and apply periodic latent Gaussian smoothing to suppress emerging artifacts and stabilize optimization. Across a suite of CNN and vision-transformer targets, LTA achieves strong transfer attack success while producing spatially coherent, predominantly low-frequency perturbations that differ qualitatively from pixel-space baselines and occupy a distinct point in the transfer-quality trade-off. Our results highlight pretrained generative latent spaces as an effective and structured domain for adversarial optimization, bridging robustness evaluation with modern generative priors.
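The EOT sampling and the periodic latent smoothing described above can be sketched with NumPy stand-ins. These are assumptions for illustration only: nearest-neighbor resizing substitutes for the randomized interpolation modes, and the scale range, latent shape, and Gaussian σ are not the paper's settings.

```python
import numpy as np

def eot_transform(img, rng):
    """One random EOT sample: upscale, then random-crop back to the
    original size. Nearest-neighbor index mapping stands in for the
    randomized interpolation modes; the scale range is illustrative."""
    h, w = img.shape[-2:]
    scale = rng.uniform(1.0, 1.25)
    nh, nw = int(h * scale), int(w * scale)
    ys = (np.arange(nh) * h / nh).astype(int)
    xs = (np.arange(nw) * w / nw).astype(int)
    up = img[..., ys[:, None], xs[None, :]]
    top = rng.integers(0, nh - h + 1)
    left = rng.integers(0, nw - w + 1)
    return up[..., top:top + h, left:left + w]

def gaussian_smooth_latent(z, sigma=1.0):
    """Separable Gaussian blur of a 2D latent map, the kind of
    smoothing applied periodically to suppress high-frequency
    artifacts during latent-space optimization."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    z = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), -1, z)
    z = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), -2, z)
    return z

# Demo: one EOT view of a 3x8x8 image and one smoothing pass on a latent.
rng = np.random.default_rng(0)
latent = rng.standard_normal((16, 16))
view = eot_transform(rng.random((3, 8, 8)), rng)
smoothed = gaussian_smooth_latent(latent, sigma=1.0)
```

In the attack loop, the surrogate loss would be averaged over several such random views per step, so the optimized perturbation cannot rely on any single resolution or crop; the smoothing pass is applied every few iterations rather than every step.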