Dealing with the Evil Twins: Improving Random Augmentation by Addressing Catastrophic Forgetting of Diverse Augmentations

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Random data augmentation, though computationally inexpensive, suffers from inherent stochasticity that induces “augmentation collisions,” distorting feature learning—akin to catastrophic forgetting—and severely impairing out-of-distribution generalization. This work is the first to formally interpret such degradation through the lens of catastrophic forgetting in data augmentation. We propose a lightweight forgetting-mitigation framework: (i) a feature consistency constraint to suppress interference from conflicting augmentations, and (ii) an augmentation trajectory memory mechanism to stabilize representation evolution. Our method introduces no additional parameters, requires no pretraining or architectural modifications, and seamlessly integrates with standard CNNs and ResNets. Evaluated on multiple single-source domain generalization (sDG) benchmarks, it achieves state-of-the-art performance, improving average accuracy by 3.2–5.7% over prior methods—outperforming even computationally intensive advanced augmentation techniques.
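The summary names two components: a feature consistency constraint across colliding augmentations and an augmentation trajectory memory. The paper's exact loss forms are not given here, so the following is a minimal sketch under assumptions: a toy linear feature extractor stands in for the backbone, the consistency constraint is modeled as a mean-squared penalty between features of two random augmentations of the same input, and the trajectory memory is modeled as an exponential moving average of past features.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W):
    """Toy linear feature extractor standing in for a CNN/ResNet backbone."""
    return np.tanh(x @ W)

def consistency_loss(f_a, f_b):
    """Feature consistency constraint (assumed form): penalize divergence
    between features of two random augmentations of the same input."""
    return float(np.mean((f_a - f_b) ** 2))

class TrajectoryMemory:
    """Assumed stand-in for the augmentation trajectory memory: an EMA of
    past features that penalizes abrupt shifts in the representation."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.avg = None

    def stability_loss(self, f):
        # No memory yet on the first step, so no penalty.
        if self.avg is None:
            return 0.0
        return float(np.mean((f - self.avg) ** 2))

    def update(self, f):
        if self.avg is None:
            self.avg = f.copy()
        else:
            self.avg = self.momentum * self.avg + (1 - self.momentum) * f

# Two random augmentations (here: additive noise) of the same batch.
x = rng.normal(size=(4, 8))
W = 0.1 * rng.normal(size=(8, 16))
aug_a = x + 0.05 * rng.normal(size=x.shape)
aug_b = x + 0.05 * rng.normal(size=x.shape)

mem = TrajectoryMemory()
f_a, f_b = features(aug_a, W), features(aug_b, W)

# Both penalties are added to the task loss; neither introduces parameters.
total_regularizer = consistency_loss(f_a, f_b) + mem.stability_loss(f_a)
mem.update(f_a)
```

Note that, consistent with the summary's claims, neither term adds trainable parameters or changes the architecture; both are extra loss terms on existing features.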

📝 Abstract
Data augmentation is a promising tool for enhancing out-of-distribution generalization, where the key is to produce diverse, challenging variations of the source domain via costly targeted augmentations that maximize their generalization effect. Conversely, random augmentation is inexpensive but is deemed suboptimal due to its limited effect. In this paper, we revisit random augmentation and explore methods to address its shortcomings. We show that the stochastic nature of random augmentation can produce a set of colliding augmentations that distort the learned features, similar to catastrophic forgetting. We propose a simple solution that improves the generalization effect of random augmentation by addressing forgetting, and it displays strong generalization performance across various single source domain generalization (sDG) benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Random augmentation's stochasticity produces colliding augmentations that distort learned features, akin to catastrophic forgetting
Improving the generalization effect of inexpensive random augmentation without costly targeted augmentation
Strengthening performance on single source domain generalization (sDG) benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

First to interpret degradation under random augmentation through the lens of catastrophic forgetting
Feature consistency constraint and augmentation trajectory memory that mitigate forgetting from conflicting augmentations
Parameter-free design that needs no pretraining or architectural changes and integrates with standard CNNs and ResNets