Enhancing Diffusion-based Unrestricted Adversarial Attacks via Adversary Preferences Alignment

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work pioneers framing diffusion-based unrestricted adversarial example generation as an adversary preferences alignment problem, directly addressing the inherent trade-off between visual fidelity and attack effectiveness and thereby avoiding the visual degradation that reward hacking causes under conventional joint optimization. To this end, the authors propose APA (Adversary Preferences Alignment), a two-stage framework that decouples the conflicting preferences: Stage I fine-tunes LoRA with a rule-based similarity reward to enforce visual consistency; Stage II then updates the image latent or prompt embedding using feedback from a surrogate classifier, guided by trajectory-level and step-wise differentiable rewards, to strengthen attack effectiveness. A diffusion augmentation strategy further improves black-box transferability. In experiments, APA achieves significantly better black-box transfer attack success rates while maintaining high visual consistency, showing that imperceptibility and transferability can be pursued jointly rather than traded off.
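
To make the decoupling concrete, here is a minimal, hypothetical PyTorch sketch of the two-stage idea. The linear `generator` (standing in for the LoRA-adapted diffusion model), the linear `surrogate` classifier, and all hyperparameters are placeholder assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

generator = torch.nn.Linear(64, 64)   # placeholder for the LoRA-adapted diffusion model
surrogate = torch.nn.Linear(64, 10)   # placeholder surrogate classifier
x_clean = torch.randn(1, 64)          # source image, flattened placeholder
y_true = torch.tensor([3])            # its ground-truth label

# Stage I: optimize only the visual-consistency reward (rule-based similarity).
opt1 = torch.optim.Adam(generator.parameters(), lr=1e-3)
for _ in range(100):
    x_adv = generator(x_clean)
    sim_reward = -F.mse_loss(x_adv, x_clean)   # higher = more similar
    opt1.zero_grad()
    (-sim_reward).backward()                   # gradient ascent on similarity
    opt1.step()

# Stage II: freeze the generator, optimize only the attack reward by
# updating the latent with surrogate-classifier feedback.
for p in generator.parameters():
    p.requires_grad_(False)
latent = x_clean.clone().requires_grad_(True)
opt2 = torch.optim.Adam([latent], lr=1e-2)
for _ in range(100):
    x_adv = generator(latent)
    attack_reward = F.cross_entropy(surrogate(x_adv), y_true)  # push off the true class
    opt2.zero_grad()
    (-attack_reward).backward()                # ascend the (untargeted) attack reward
    opt2.step()
```

The point of the structure: Stage I touches only generator weights under the similarity reward, and Stage II touches only the latent under the attack reward, so neither objective can hack the other's reward inside a shared loss.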

📝 Abstract
Preference alignment in diffusion models has primarily focused on benign human preferences (e.g., aesthetics). In this paper, we propose a novel perspective: framing unrestricted adversarial example generation as a problem of aligning with adversary preferences. Unlike benign alignment, adversarial alignment involves two inherently conflicting preferences: visual consistency and attack effectiveness, which often lead to unstable optimization and reward hacking (e.g., reducing visual quality to improve attack success). To address this, we propose APA (Adversary Preferences Alignment), a two-stage framework that decouples conflicting preferences and optimizes each with differentiable rewards. In the first stage, APA fine-tunes LoRA to improve visual consistency using a rule-based similarity reward. In the second stage, APA updates either the image latent or the prompt embedding based on feedback from a substitute classifier, guided by trajectory-level and step-wise rewards. To enhance black-box transferability, we further incorporate a diffusion augmentation strategy. Experiments demonstrate that APA achieves significantly better attack transferability while maintaining high visual consistency, inspiring further research to approach adversarial attacks from an alignment perspective. Code will be available at https://github.com/deep-kaixun/APA.
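
The abstract's trajectory-level versus step-wise rewards can be illustrated with a toy multi-step loop. The 4-step "trajectory", the `step_net` refiner, and the quadratic rewards below are illustrative assumptions, not the paper's sampler or reward design.

```python
import torch

torch.manual_seed(0)

step_net = torch.nn.Linear(8, 8)              # placeholder per-step refiner
for p in step_net.parameters():
    p.requires_grad_(False)                   # frozen; only the latent is optimized
target = torch.zeros(1, 8)                    # stand-in attack objective

z = torch.randn(1, 8, requires_grad=True)     # latent being aligned
opt = torch.optim.Adam([z], lr=1e-2)

for _ in range(50):
    x, step_rewards = z, []
    for _t in range(4):                       # a 4-step "trajectory"
        x = torch.tanh(step_net(x))
        step_rewards.append(-(x - target).pow(2).mean())   # step-wise reward
    traj_reward = -(x - target).pow(2).mean() # trajectory-level reward on the final state
    reward = traj_reward + 0.1 * torch.stack(step_rewards).mean()
    opt.zero_grad()
    (-reward).backward()                      # ascend the combined reward
    opt.step()
```

The trajectory-level term scores only the final output, while the step-wise terms shape every intermediate state; combining them is one plausible way to read the abstract's two reward granularities.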
Problem

Research questions and friction points this paper is trying to address.

Framing unrestricted adversarial example generation as alignment with adversary preferences
Decoupling the two inherently conflicting preferences, visual consistency and attack effectiveness, whose joint optimization is unstable and prone to reward hacking
Improving black-box attack transferability while maintaining visual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage APA framework that decouples the conflicting preferences and optimizes each with differentiable rewards
LoRA fine-tuning with a rule-based similarity reward for visual consistency
Diffusion augmentation to boost black-box transferability (see the sketch after this list)
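
The summary does not spell out the diffusion augmentation, so the following is only a hedged toy sketch of the generic idea: averaging the surrogate loss over several re-noised views of the candidate adversarial image so its gradient does not overfit a single surrogate view. The noise scale and number of views are made-up parameters.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

surrogate = torch.nn.Linear(64, 10)            # placeholder surrogate classifier
x_adv = torch.randn(1, 64, requires_grad=True) # candidate adversarial image
y_true = torch.tensor([3])

losses = []
for _ in range(8):                             # 8 augmented views (made-up count)
    noise = 0.1 * torch.randn_like(x_adv)      # forward-diffusion-style noising
    x_noisy = x_adv + noise                    # a real version would also denoise
    losses.append(F.cross_entropy(surrogate(x_noisy), y_true))
torch.stack(losses).mean().backward()          # averaged gradient w.r.t. x_adv
print(x_adv.grad.norm())                       # this smoothed gradient drives the update
```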
👥 Authors
Kaixun Jiang
Fudan University
Computer Vision, Adversarial Examples

Zhaoyu Chen
TikTok
AI Security, Trustworthy AI, Multimodal AI, Generative AI

Haijing Guo
Shanghai Key Lab of Intelligent Information Processing, College of Computer Science and Artificial Intelligence, Fudan University

Jinglun Li
College of Intelligent Robotics and Advanced Manufacturing, Fudan University

Jiyuan Fu
Fudan University

Pinxue Guo
Fudan University
Multimodal LLM, Video Understanding, Tracking and Segmentation

Hao Tang
Peking University

Bo Li
vivo Mobile Communication Co., Ltd

Wenqiang Zhang
College of Intelligent Robotics and Advanced Manufacturing, Fudan University; Shanghai Key Lab of Intelligent Information Processing, College of Computer Science and Artificial Intelligence, Fudan University