🤖 AI Summary
The lack of standardized benchmarks hinders rigorous evaluation of model robustness to physically realizable adversarial patches. Method: We introduce ImageNet-Patch, the first large-scale, multi-variant adversarial patch benchmark covering all 1,000 ImageNet classes, and systematically incorporate realistic physical perturbations, including random scaling, placement, lighting, and viewpoint variations. We further propose the first standardized robustness evaluation protocol, enabling fair, quantitative comparison across more than 20 mainstream models. Contribution/Results: Experiments show that state-of-the-art models suffer an average accuracy drop of over 40% under patch attacks, exposing critical vulnerabilities. ImageNet-Patch fills a key gap in the field, providing a unified, reproducible infrastructure for developing and evaluating robust training methods against physical-world adversarial patches.
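To make the evaluation protocol concrete, here is a minimal sketch of the core loop: paste a patch into each image at a randomized scale and location, then measure the clean-vs-patched accuracy gap. This is an illustrative simplification, not the benchmark's actual API; the function names, the nearest-neighbor resize, and the toy threshold model below are my own, and a real protocol would also randomize rotation and lighting.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_patch(image, patch, scale_range=(0.3, 0.5)):
    """Paste `patch` into `image` at a random location and scale.

    Simplified stand-in for randomized patch application; real
    protocols also vary rotation, lighting, and viewpoint.
    """
    h, w, _ = image.shape
    scale = rng.uniform(*scale_range)
    size = max(1, int(min(h, w) * scale))
    # Nearest-neighbor resize of the square patch to `size` x `size`.
    idx = np.arange(size) * patch.shape[0] // size
    resized = patch[idx][:, idx]
    # Random top-left corner such that the patch stays inside the image.
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    out = image.copy()
    out[y:y + size, x:x + size] = resized
    return out

def accuracy_drop(model, images, labels, patch, trials=10):
    """Mean clean-vs-patched accuracy gap over random placements."""
    clean = np.mean([model(im) == lb for im, lb in zip(images, labels)])
    patched = np.mean([
        model(apply_patch(im, patch)) == lb
        for _ in range(trials)
        for im, lb in zip(images, labels)
    ])
    return clean - patched
```

Averaging over many random placements (`trials`) matters: a single placement can over- or under-state the attack's effect depending on where the patch happens to land.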