🤖 AI Summary
This work models unrestricted adversarial attacks against diffusion models as an adversary preference alignment problem, directly addressing the inherent trade-off between visual fidelity and attack effectiveness and thereby avoiding the visual degradation caused by reward hacking in conventional joint optimization. To this end, the authors propose APA (Adversary Preferences Alignment), a two-stage framework that decouples the conflicting preferences: Stage I fine-tunes a LoRA with a rule-based similarity reward to enforce visual consistency; Stage II updates the image latent or prompt embedding using feedback from a substitute classifier, guided by trajectory-level and step-wise differentiable rewards, with a diffusion augmentation strategy to strengthen black-box transferability. Evaluated on ImageNet and other benchmarks, APA achieves state-of-the-art black-box transfer attack success rates while preserving high visual fidelity, demonstrating strong synergy between imperceptibility and transferability.
📝 Abstract
Preference alignment in diffusion models has primarily focused on benign human preferences (e.g., aesthetics). In this paper, we propose a novel perspective: framing unrestricted adversarial example generation as a problem of aligning with adversary preferences. Unlike benign alignment, adversarial alignment involves two inherently conflicting preferences: visual consistency and attack effectiveness, which often lead to unstable optimization and reward hacking (e.g., reducing visual quality to improve attack success). To address this, we propose APA (Adversary Preferences Alignment), a two-stage framework that decouples the conflicting preferences and optimizes each with differentiable rewards. In the first stage, APA fine-tunes a LoRA to improve visual consistency using a rule-based similarity reward. In the second stage, APA updates either the image latent or the prompt embedding based on feedback from a substitute classifier, guided by trajectory-level and step-wise rewards. To enhance black-box transferability, we further incorporate a diffusion augmentation strategy. Experiments demonstrate that APA achieves significantly better attack transferability while maintaining high visual consistency, inspiring further research that approaches adversarial attacks from an alignment perspective. Code will be available at https://github.com/deep-kaixun/APA.
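The decoupling idea above, optimizing each preference with its own differentiable reward in a separate stage rather than jointly, can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in: the "LoRA weights" and "latent" are plain vectors, the similarity reward is a negative squared distance, and the "substitute classifier" is a fixed linear scorer, none of which come from the paper itself.

```python
# Toy sketch of APA-style two-stage decoupled preference optimization.
# All rewards, dimensions, and the classifier are hypothetical stand-ins,
# not the paper's actual diffusion-model components.

def grad(f, x, eps=1e-5):
    """Central finite-difference gradient of scalar f at point x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def ascend(f, x, steps=100, lr=0.1):
    """Plain gradient ascent on reward f starting from x."""
    for _ in range(steps):
        g = grad(f, x)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

# Stage I: align generator parameters (stand-in for LoRA weights) with a
# rule-based similarity reward, pulling outputs toward the source image.
source = [1.0, -0.5, 0.3]
def similarity_reward(theta):
    return -sum((t - s) ** 2 for t, s in zip(theta, source))

theta = ascend(similarity_reward, [0.0, 0.0, 0.0])

# Stage II: with visual consistency handled, update the image latent using
# feedback from a (toy, linear) substitute classifier as the attack reward.
def attack_reward(z):
    margin = 0.8 * z[0] - 0.2 * z[1]  # toy logit gap of the true class
    return -margin                    # reward pushing toward misclassification

latent = ascend(attack_reward, list(theta), steps=50)
print(round(theta[0], 2), latent[0] < theta[0])
```

The point of the sketch is the structure, not the numbers: each stage maximizes one differentiable reward in isolation, so neither objective can hack the other within its own stage, which is the failure mode the paper attributes to joint optimization.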