The Impact of Scaling Training Data on Adversarial Robustness

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates how training data scale and quality affect the adversarial robustness of deep vision models. We conduct large-scale evaluations across datasets ranging from 1.2M to 22B images, 36 mainstream architectures covering supervised, self-supervised, and contrastive learning, and six black-box attack families (e.g., geometric masking, COCO object perturbations, ImageNet-C/R), complemented by human visual comparison experiments. Our key finding is that adversarial robustness follows a logarithmic scaling law in both data volume and model parameters. Crucially, optimizing data quality, network architecture, and training objectives yields substantially greater robustness gains than scaling data or model size alone: a tenfold increase in data reduces the average attack success rate by only 3.2%, whereas a tenfold increase in model size reduces it by 13.4%. Notably, models trained on smaller, high-quality curated datasets—such as DINOv2—outperform larger models trained on lower-quality data, challenging the “scale-first” paradigm for robustness.

📝 Abstract
Deep neural networks remain vulnerable to adversarial examples despite advances in architectures and training paradigms. We investigate how training data characteristics affect adversarial robustness across 36 state-of-the-art vision models spanning supervised, self-supervised, and contrastive learning approaches, trained on datasets from 1.2M to 22B images. Models were evaluated under six black-box attack categories: random perturbations, two types of geometric masks, COCO object manipulations, ImageNet-C corruptions, and ImageNet-R style shifts. Robustness follows a logarithmic scaling law with both data volume and model size: a tenfold increase in data reduces attack success rate (ASR) on average by ~3.2%, whereas a tenfold increase in model size reduces ASR on average by ~13.4%. Notably, some self-supervised models trained on curated datasets, such as DINOv2, outperform others trained on much larger but less curated datasets, challenging the assumption that scale alone drives robustness. Adversarial fine-tuning of ResNet50s improves generalization across structural variations but not across color distributions. Human evaluation reveals persistent gaps between human and machine vision. These results show that while scaling improves robustness, data quality, architecture, and training objectives play a more decisive role than raw scale in achieving broad-spectrum adversarial resilience.
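The logarithmic scaling law reported in the abstract can be sketched numerically. The helper below projects attack success rate (ASR) as a linear function of decades of scale, using the paper's reported slopes (~3.2 percentage points per tenfold increase in data, ~13.4 per tenfold increase in model size); the baseline ASR and scales are hypothetical placeholders, not values from the paper.

```python
import math

def projected_asr(baseline_asr, baseline_scale, scale, slope_per_decade):
    """Project attack success rate (ASR, in %) under a logarithmic
    scaling law: each tenfold increase in scale subtracts one slope unit.
    All baseline values passed in here are illustrative assumptions."""
    decades = math.log10(scale / baseline_scale)
    return baseline_asr - slope_per_decade * decades

# Scaling data from 1.2M to 12M images (one decade) at the reported
# ~3.2 %/decade slope, from a hypothetical 60% baseline ASR:
asr_after_data_scaling = projected_asr(60.0, 1.2e6, 1.2e7, 3.2)   # 56.8

# Scaling model parameters tenfold at the reported ~13.4 %/decade slope:
asr_after_model_scaling = projected_asr(60.0, 100e6, 1e9, 13.4)   # 46.6
```

Under this sketch, a full decade of extra data buys roughly a quarter of the robustness gain of a decade of extra parameters, which is the asymmetry the paper highlights.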
Problem

Research questions and friction points this paper is trying to address.

Investigating how training data scaling affects adversarial robustness in vision models
Evaluating robustness across six black-box attack categories on 36 models
Challenging the assumption that scale alone drives adversarial resilience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial robustness improves logarithmically with training data scale
Self-supervised models trained on curated data outperform models trained on larger but less curated datasets
Data quality, architecture, and training objectives matter more than raw scale