MemLoss: Enhancing Adversarial Training with Recycling Adversarial Examples

๐Ÿ“… 2025-10-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Balancing adversarial robustness and clean accuracy remains a fundamental challenge in adversarial training. To address this, the paper proposes MemLoss, which stores high-value adversarial examples generated in earlier epochs in a dynamic buffer and reuses them in later epochs, jointly optimizing robustness and natural performance. By cutting redundant adversarial example generation, this recycling improves training efficiency, and the buffer's scheduling mechanism further enhances sample diversity and effectiveness. On benchmark datasets including CIFAR-10, MemLoss reportedly achieves state-of-the-art adversarial robustness while raising clean accuracy by 1.2–2.8 percentage points on average, demonstrating concurrent gains in both metrics.

๐Ÿ“ Abstract
In this paper, we propose a new approach called MemLoss to improve the adversarial training of machine learning models. MemLoss leverages previously generated adversarial examples, referred to as 'Memory Adversarial Examples,' to enhance model robustness and accuracy without compromising performance on clean data. By using these examples across training epochs, MemLoss provides a balanced improvement in both natural accuracy and adversarial robustness. Experimental results on multiple datasets, including CIFAR-10, demonstrate that our method achieves better accuracy compared to existing adversarial training methods while maintaining strong robustness against attacks.
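The abstract describes reusing previously generated adversarial examples ("Memory Adversarial Examples") across training epochs. The paper excerpt provides no code, so the sketch below is only a minimal, framework-free illustration of such a replay buffer; the class and method names are hypothetical, and real use would store perturbed inputs with their labels and mix replayed examples with freshly generated ones in each training batch.

```python
import random


class MemoryAdversarialBuffer:
    """Illustrative bounded buffer that keeps adversarial examples from
    earlier epochs so they can be replayed in later training steps
    (names are hypothetical, not from the paper)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []

    def store(self, adv_example):
        # Keep the buffer bounded: evict a random old entry when full,
        # then append the newly generated adversarial example.
        if len(self.examples) >= self.capacity:
            self.examples.pop(random.randrange(len(self.examples)))
        self.examples.append(adv_example)

    def sample(self, k):
        # Replay up to k stored "memory adversarial examples";
        # these would be mixed with fresh examples in a batch.
        k = min(k, len(self.examples))
        return random.sample(self.examples, k)
```

In a training loop, each epoch would generate fresh adversarial examples (e.g. via PGD), call `store` on them, and draw a mix of fresh and replayed examples when computing the combined robust/natural loss.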
Problem

Research questions and friction points this paper is trying to address.

Improving adversarial training robustness with recycled examples
Balancing natural accuracy and adversarial defense capabilities
Enhancing model performance on clean and attacked data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recycles adversarial examples across training epochs
Improves robustness without sacrificing clean accuracy
Balances natural accuracy and adversarial defense
๐Ÿ”Ž Similar Papers
No similar papers found.