DRO-Augment Framework: Robustness by Synergizing Wasserstein Distributionally Robust Optimization and Data Augmentation

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of deep neural networks (DNNs) to data corruption and adversarial attacks in image classification, this paper proposes DRO-Augment, a novel framework that integrates Wasserstein distributionally robust optimization (W-DRO) with diverse data augmentation strategies. The models are trained with a computationally efficient, variation-regularized loss function closely related to the W-DRO problem, for which the paper establishes novel generalization error bounds. The theoretical analysis points to a complementary mechanism between DRO and augmentation under distributional shift. Experiments on benchmark datasets (including CIFAR-10-C, CIFAR-100-C, MNIST, and Fashion-MNIST) show that DRO-Augment substantially improves robustness to corrupted inputs and adversarial attacks while preserving accuracy on clean samples, achieving a favorable trade-off between robustness and standard accuracy.
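As a toy illustration of the mechanism the summary describes (augmented inputs trained under a variation-regularized loss), the sketch below fits a linear classifier with binary cross-entropy plus a penalty on the loss's input-gradient norm, which is the usual first-order surrogate of a Wasserstein-DRO objective. All names, the Gaussian-noise "augmentation", and the finite-difference training loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def augment(x, noise_scale=0.1):
    # Toy "augmentation": additive Gaussian noise, a stand-in for the
    # diverse augmentation strategies the framework combines with W-DRO.
    return x + noise_scale * rng.standard_normal(x.shape)

def variation_regularized_loss(w, X, y, lam=0.5):
    # Binary cross-entropy for a linear model f(x) = w . x, plus a
    # penalty on the loss's gradient with respect to the *input* x
    # (the variation-regularization term).
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # For a linear model the per-sample input gradient is (p - y) * w,
    # so its L2 norm is |p - y| * ||w||.
    penalty = np.mean(np.abs(p - y)) * np.linalg.norm(w)
    return ce + lam * penalty

# Tiny synthetic training loop; gradients in w are taken by central
# finite differences, purely to keep the sketch dependency-free.
X = rng.standard_normal((64, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w > 0).astype(float)

w = np.zeros(5)
eps, lr = 1e-5, 0.5
for _ in range(200):
    Xa = augment(X)                      # augmentation step
    grad = np.zeros_like(w)
    for i in range(w.size):              # finite-difference gradient in w
        d = np.zeros_like(w)
        d[i] = eps
        grad[i] = (variation_regularized_loss(w + d, Xa, y)
                   - variation_regularized_loss(w - d, Xa, y)) / (2 * eps)
    w -= lr * grad

train_acc = np.mean((sigmoid(X @ w) > 0.5) == y.astype(bool))
print(f"train accuracy: {train_acc:.2f}")
```

The penalty term discourages solutions whose loss is sensitive to small input perturbations, which is why it complements augmentation: augmentation exposes the model to perturbed samples, while the regularizer bounds the loss's local variation around them.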

📝 Abstract
In many real-world applications, ensuring the robustness and stability of deep neural networks (DNNs) is crucial, particularly for image classification tasks that encounter various input perturbations. While data augmentation techniques have been widely adopted to enhance the resilience of a trained model against such perturbations, there remains significant room for improvement in robustness against corrupted data and adversarial attacks simultaneously. To address this challenge, we introduce DRO-Augment, a novel framework that integrates Wasserstein Distributionally Robust Optimization (W-DRO) with various data augmentation strategies to improve the robustness of the models significantly across a broad spectrum of corruptions. Our method outperforms existing augmentation methods under severe data perturbations and adversarial attack scenarios while maintaining the accuracy on the clean datasets on a range of benchmark datasets, including but not limited to CIFAR-10-C, CIFAR-100-C, MNIST, and Fashion-MNIST. On the theoretical side, we establish novel generalization error bounds for neural networks trained using a computationally efficient, variation-regularized loss function closely related to the W-DRO problem.
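The "variation-regularized loss function closely related to the W-DRO problem" plausibly refers to the standard first-order surrogate of a Wasserstein-DRO objective, sketched below in generic notation (the symbols are not the paper's exact formulation):

```latex
\sup_{Q \,:\, W_p(Q, P) \le \rho} \mathbb{E}_{Q}\bigl[\ell(\theta; x, y)\bigr]
\;\approx\;
\mathbb{E}_{P}\bigl[\ell(\theta; x, y)\bigr]
\;+\;
\rho\, \mathbb{E}_{P}\bigl[\,\lVert \nabla_{x}\, \ell(\theta; x, y) \rVert_{*}\,\bigr],
```

where $P$ is the data distribution, $\rho$ the Wasserstein radius, and $\lVert\cdot\rVert_{*}$ the dual norm of the transport cost. Penalizing the input-gradient norm makes the worst-case objective computationally tractable, which is what enables the efficient training the abstract claims.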
Problem

Research questions and friction points this paper is trying to address.

Insufficient robustness of DNNs to input perturbations, corrupted data, and adversarial attacks in image classification
Room for improvement over existing augmentation methods against corruption and adversarial attacks simultaneously
How to synergize W-DRO and data augmentation for robustness across a broad spectrum of corruptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates W-DRO with diverse data augmentation strategies in a single training framework
Improves robustness to severe corruptions and adversarial attacks while preserving clean accuracy
Establishes generalization error bounds for the variation-regularized loss