Dual-Flow: Transferable Multi-Target, Instance-Agnostic Attacks via In-the-wild Cascading Flow Optimization

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited transferability of instance-agnostic generative methods in black-box multi-objective adversarial attacks, this paper proposes a dual-stream cascaded framework that decouples target-guided optimization from perturbation generation. It introduces an out-of-distribution cascaded training mechanism and implicit velocity field modeling to overcome generator capacity constraints. Additionally, a multi-objective gradient coupling strategy is designed to enable strong cross-model transferability. On the Inception-v3 → ResNet-152 transfer task, the method achieves a 34.58% improvement in attack success rate, while maintaining significant efficacy against robust models, including adversarially trained ones. This work presents the first instance-agnostic, generative black-box attack achieving high transferability across multiple objectives—marking a substantial advancement in practical adversarial threat modeling.

📝 Abstract
Adversarial attacks are widely used to evaluate model robustness, and in black-box scenarios, the transferability of these attacks becomes crucial. Existing generator-based attacks have excellent generalization and transferability due to their instance-agnostic nature. However, when training generators for multi-target tasks, the success rate of transfer attacks is relatively low due to the limitations of the model's capacity. To address these challenges, we propose a novel Dual-Flow framework for multi-target instance-agnostic adversarial attacks, utilizing Cascading Distribution Shift Training to develop an adversarial velocity function. Extensive experiments demonstrate that Dual-Flow significantly improves transferability over previous multi-target generative attacks. For example, it increases the success rate from Inception-v3 to ResNet-152 by 34.58%. Furthermore, our attack method remains substantially more effective against defense mechanisms, such as adversarially trained models.
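The abstract describes generating perturbations by integrating a learned adversarial velocity function, in the style of flow-based models. As a minimal sketch of that idea, the following Euler-integrates a velocity field and projects the result onto an L∞ ball. The analytic field, the 8/255 budget, and the step count are illustrative assumptions, not the paper's actual components:

```python
import math

EPS = 8 / 255  # L_inf perturbation budget (a common convention; assumed here)

def velocity_field(x, t):
    """Hypothetical stand-in for the learned adversarial velocity function.
    In the paper this would be a neural network conditioned on the target
    class; a fixed analytic field keeps the sketch self-contained."""
    return [math.copysign(1.0, math.sin(10.0 * xi) + t) for xi in x]

def generate_adversarial(x0, steps=10):
    """Euler-integrate the velocity field from t=0 to t=1, then project the
    accumulated perturbation back onto the L_inf ball of radius EPS."""
    x = list(x0)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        v = velocity_field(x, t)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    # Projection step: keep the perturbation within the attack budget.
    return [x0i + max(-EPS, min(EPS, xi - x0i)) for x0i, xi in zip(x0, x)]

x0 = [0.1, 0.5, 0.9]             # toy flattened "image"
x_adv = generate_adversarial(x0)
print(max(abs(a - b) for a, b in zip(x_adv, x0)) <= EPS)  # True
```

The projection at the end is what makes the output a valid bounded attack regardless of how far the integrated flow drifts; a real multi-target version would additionally condition `velocity_field` on the desired target label.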
Problem

Research questions and friction points this paper is trying to address.

Improving transferability of adversarial attacks
Multi-target instance-agnostic attack framework
Enhancing robustness against defense mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Flow framework enhances transferability
Cascading Distribution Shift Training utilized
Adversarial velocity function sustains efficacy against defended models
👥 Authors

Yixiao Chen
Tsinghua University

Shikun Sun
Tsinghua University, Cornell University
Machine Learning, Generative Model

Jianshu Li
National University of Singapore
Computer Vision, Machine Learning, Face Analysis

Ruoyu Li
Ant Group

Zhe Li
Ant Group

Junliang Xing
Tsinghua University