🤖 AI Summary
Existing research on transferable adversarial attacks lacks a unified evaluation framework and consistent taxonomic criteria, leading to biased performance comparisons. This work presents a systematic review of hundreds of studies and, for the first time, introduces a six-category classification system for transfer-based attacks. Furthermore, it establishes a standardized benchmark framework to enable fair and reproducible evaluation of attack efficacy. Through extensive experiments, the study identifies common evaluation pitfalls and distills generalizable strategies that enhance transferability. While primarily focused on image classification tasks, the proposed framework also demonstrates preliminary applicability to other vision tasks, thereby advancing the standardization, rigor, and reproducibility of research in transferable adversarial attacks.
📄 Abstract
Adversarial transferability refers to the capacity of adversarial examples generated on a surrogate model to deceive other, unseen victim models. This property eliminates the need for direct access to the victim model during an attack, raising considerable security concerns in practical applications and recently attracting substantial research attention. In this work, we identify the lack of a standardized framework and consistent criteria for evaluating transfer-based attacks, which can lead to biased assessments of existing approaches. To address this gap, we conduct an exhaustive review of hundreds of related works, organizing transfer-based attacks into six distinct categories. We then propose a comprehensive framework designed to serve as a benchmark for evaluating these attacks. In addition, we delineate common strategies that enhance adversarial transferability and highlight prevalent issues that can lead to unfair comparisons. Finally, we provide a brief review of transfer-based attacks beyond image classification.
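The core notion of a transfer-based attack can be sketched with a toy linear example (a minimal illustration, not the paper's method; all weights and inputs below are hypothetical). An FGSM-style perturbation is computed against a surrogate model only, then applied to fool a separate victim model whose weights the attacker never sees:

```python
import math

# Toy linear "models": score(w, x) = <w, x>; predicted label = sign(score).
# Surrogate and victim weights differ but are correlated -- the setting in
# which transfer-based attacks tend to succeed.
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_on_surrogate(w_surrogate, x, eps):
    """One FGSM step computed purely on the surrogate.

    For a linear score, the gradient w.r.t. x is w_surrogate itself, so the
    attack steps x by -eps * sign(w_i) to push the score toward the other class.
    """
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w_surrogate)]

w_surrogate = [0.9, -0.5, 0.3]   # model the attacker can query/differentiate
w_victim    = [0.8, -0.6, 0.4]   # hypothetical unseen victim model
x = [0.2, -0.1, 0.15]            # clean input, classified positive by both

x_adv = fgsm_on_surrogate(w_surrogate, x, eps=0.3)
print(score(w_victim, x) > 0)      # True: victim classifies the clean input as positive
print(score(w_victim, x_adv) < 0)  # True: the perturbation transfers and flips the victim
```

The victim model is never accessed while crafting `x_adv`; transfer succeeds here because the two weight vectors point in similar directions, a simplified analogue of the shared decision boundaries that make transferability possible between deep networks.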