ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

📅 2022-03-07
🏛️ Pattern Recognition
📈 Citations: 40
Influential: 1
🤖 AI Summary
The lack of standardized benchmarks hinders rigorous evaluation of robustness against physically realizable adversarial patches.

Method: We introduce ImageNet-Patch, the first large-scale, multi-variant adversarial patch benchmark covering all 1,000 ImageNet classes, and systematically incorporate realistic physical perturbations, including random scaling, placement, lighting, and viewpoint variations. We further propose the first standardized robustness-evaluation protocol, enabling fair, quantitative comparison across more than 20 mainstream models.

Contribution/Results: Experiments reveal that state-of-the-art models suffer an average accuracy drop exceeding 40% under patch attacks, exposing critical vulnerabilities. ImageNet-Patch fills a key gap in the field, providing a unified, reproducible infrastructure for developing and evaluating robust training methods against physical-world adversarial patches.
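The random scaling and placement step described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's actual code: the scale range, the nearest-neighbor resizing, and the function name `apply_patch` are assumptions made for the example.

```python
import numpy as np

def apply_patch(image, patch, rng):
    """Paste `patch` onto `image` at a random location and scale,
    mimicking the random placement/scaling used when benchmarking
    robustness to adversarial patches. Illustrative sketch only."""
    h, w, _ = image.shape
    # Random scale (assumed range 0.5-1.0), applied via
    # nearest-neighbor index resampling of the patch.
    scale = rng.uniform(0.5, 1.0)
    ph = max(1, int(patch.shape[0] * scale))
    pw = max(1, int(patch.shape[1] * scale))
    rows = (np.arange(ph) / scale).astype(int)
    cols = (np.arange(pw) / scale).astype(int)
    resized = patch[rows][:, cols]
    # Random placement, keeping the patch fully inside the image.
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = image.copy()
    out[y:y + ph, x:x + pw] = resized
    return out
```

A benchmark run would apply this transform with fresh randomness to every test image before measuring a model's accuracy, so that reported robustness reflects the physical variability of patch attacks rather than one fixed placement.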
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Adversarial Patches
Robustness Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

ImageNet-Patch
Adversarial Patches
Machine Learning Robustness