🤖 AI Summary
Adversarial robustness evaluations are often inconsistent and unreliable because they rely on mismatched models, unverified implementations, and unequal computational budgets. To address these issues, this paper introduces AttackBench, a standardized benchmarking framework for gradient-based attacks. AttackBench evaluates attacks against a curated set of standard models using fully reproducible implementations, ranks them with a novel optimality metric, and strictly controls experimental conditions to ensure fair comparisons. The framework enables a trustworthy ranking of existing attack methods, systematically exposes sources of bias in prior evaluations, and improves the reproducibility and credibility of robustness verification. Its modular architecture supports continuous extension and benchmark updates, providing a reliable, open evaluation infrastructure for adversarial robustness research.
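The "strictly controlled experimental conditions" above can be made concrete with a budget cap on model queries. The sketch below is a minimal illustration, not AttackBench's actual API: the wrapper class `BudgetedModel` and its `max_forwards` parameter are hypothetical names, and a real harness would record the overrun rather than raise.

```python
import torch

class BudgetedModel(torch.nn.Module):
    """Wraps a classifier and counts per-sample forward passes, so every
    attack is metered against the same fixed compute budget."""

    def __init__(self, model: torch.nn.Module, max_forwards: int):
        super().__init__()
        self.model = model
        self.max_forwards = max_forwards
        self.used = 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.used += x.shape[0]  # one query per sample in the batch
        if self.used > self.max_forwards:
            # Simplified: a benchmark would stop the attack gracefully here.
            raise RuntimeError("forward-pass budget exhausted")
        return self.model(x)
```

Running each attack against `BudgetedModel(model, N)` for the same `N` removes one source of unfair comparison: no attack can look stronger simply by spending more queries.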
📝 Abstract
Despite significant progress in designing powerful adversarial evasion attacks for robustness verification, the evaluation of these methods often remains inconsistent and unreliable. Many assessments rely on mismatched models, unverified implementations, and uneven computational budgets, which can bias results; consequently, robustness claims built on such flawed testing protocols may be misleading and give a false sense of security. As a concrete step toward improving evaluation reliability, we present AttackBench, a benchmark framework developed to assess the effectiveness of gradient-based attacks under standardized and reproducible conditions. AttackBench serves as an evaluation tool that ranks existing attack implementations based on a novel optimality metric, enabling researchers and practitioners to identify the most reliable and effective attack for subsequent robustness evaluations. The framework enforces consistent testing conditions and supports continuous updates, making it a reliable foundation for robustness verification.
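As a rough intuition for an optimality-style ranking (this is a simplified sketch, not the paper's exact metric, and the function `optimality_scores` is a hypothetical name): each attack's per-sample minimum perturbation norm can be compared against the best norm found by any attack, the empirical lower envelope, and averaged into a single score.

```python
import numpy as np

def optimality_scores(norms: dict[str, np.ndarray]) -> dict[str, float]:
    """Score each attack by how close its per-sample minimum perturbation
    norms come to the best norms found by ANY attack. A score of 1.0 means
    the attack matched the best known perturbation on every sample.

    norms[name][i] is the smallest successful perturbation norm attack
    `name` found for sample i (np.inf if the attack failed on it).
    """
    stacked = np.stack(list(norms.values()))  # (n_attacks, n_samples)
    best = stacked.min(axis=0)                # per-sample lower envelope
    scores = {}
    for name, found in norms.items():
        ok = np.isfinite(found)
        ratio = np.zeros_like(found)          # failed samples contribute 0
        ratio[ok] = best[ok] / np.maximum(found[ok], 1e-12)
        scores[name] = float(ratio.mean())
    return scores

# Toy usage: "A" matches the envelope everywhere; "B" is weaker and
# fails outright on the third sample.
print(optimality_scores({
    "A": np.array([0.5, 0.3, 0.8]),
    "B": np.array([0.6, 0.3, np.inf]),
}))  # {'A': 1.0, 'B': ~0.61}
```

The key design point, which this sketch shares with the benchmark's intent, is that no attack is judged in isolation: the reference for "optimal" is the pooled best result across all evaluated attacks on the same models and budgets.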