🤖 AI Summary
This study systematically investigates how image-level duplication in training data affects the generalization and robustness of deep image classifiers. Using CIFAR-10/100 and ImageNet subsets, we conduct standard and PGD-based adversarial training under controlled uniform and non-uniform duplication patterns, followed by ablation studies and comprehensive generalization evaluations. Our key findings, empirically established in vision for the first time, show that image duplication degrades both clean accuracy (by up to 3.2%) and adversarial robustness, yields no performance benefit even under uniform duplication, and slows training convergence. These results address a theoretical and empirical gap concerning the effect of data deduplication on model robustness in computer vision. We demonstrate that deduplication consistently improves model performance, robustness, and training efficiency, providing principled guidance for constructing high-quality, duplication-free training datasets.
📝 Abstract
The accuracy and robustness of machine learning models against adversarial attacks are significantly influenced by factors such as training data quality, model architecture, the training process, and the deployment environment. In recent years, duplicated data in training sets, especially for language models, has attracted considerable attention, and deduplication has been shown to enhance both training performance and model accuracy in that setting. While the importance of data quality for training image classifier Deep Neural Networks (DNNs) is widely recognized, the impact of duplicated images in the training set on model generalization and performance has received little attention. In this paper, we address this gap and provide a comprehensive study of the effect of duplicates in image classification. Our analysis indicates that the presence of duplicated images in the training set not only reduces the efficiency of model training but may also lower the accuracy of the image classifier. This negative impact on accuracy is particularly evident when duplicated data is distributed non-uniformly across classes, or when duplication, whether uniform or non-uniform, occurs in the training set of an adversarially trained model. Even when duplicated samples are selected uniformly, increasing the amount of duplication does not significantly improve accuracy.
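To make the two duplication regimes studied above concrete, the following is a minimal sketch of how uniform (evenly spread across classes) and non-uniform (concentrated in one class) duplicate injection could be implemented. The function names, the toy `(image, label)` tuple representation, and the `rate` parameterization are illustrative assumptions, not the paper's actual protocol.

```python
import random

def duplicate_uniform(dataset, rate, seed=0):
    """Add duplicates amounting to `rate` * len(dataset) samples,
    drawn evenly from every class. `dataset` is a list of
    (image, label) pairs (toy representation, an assumption here)."""
    rng = random.Random(seed)
    by_class = {}
    for sample in dataset:
        by_class.setdefault(sample[1], []).append(sample)
    per_class = int(len(dataset) * rate / len(by_class))
    extras = []
    for samples in by_class.values():
        # Sample with replacement so a class smaller than per_class still works.
        extras.extend(rng.choices(samples, k=per_class))
    return dataset + extras

def duplicate_nonuniform(dataset, rate, target_class, seed=0):
    """Add the same total number of duplicates, but concentrate
    all of them in a single class (the non-uniform regime)."""
    rng = random.Random(seed)
    pool = [s for s in dataset if s[1] == target_class]
    extras = rng.choices(pool, k=int(len(dataset) * rate))
    return dataset + extras
```

Under this sketch, both regimes add the same number of extra samples, so any performance difference between them isolates the effect of how duplicates are distributed across classes rather than how many there are.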