🤖 AI Summary
During training, sample difficulty evolves with the model's generalization capability, yet mainstream data augmentation methods employ fixed or random transformations that fail to align with the model's real-time learning needs, thereby limiting generalization. To address this, we propose SADA, a plug-and-play dynamic augmentation method that requires no auxiliary models or policy search. SADA estimates each sample's influence on optimization online, via gradient projection and local temporal variance, enabling sample-aware, demand-driven adaptation of augmentation strength. Its core innovation lies in the first integration of gradient-guided influence estimation with temporal variance modeling, supporting end-to-end incorporation into standard training pipelines. On fine-grained and long-tailed benchmarks, SADA improves accuracy by 7.3% and 4.3%, respectively, significantly outperforming existing augmentation techniques. These results validate both its effectiveness and its broad generalizability across diverse data distributions.
📝 Abstract
Data augmentation has been widely employed to improve the generalization of deep neural networks. Most existing methods apply fixed or random transformations. However, we find that sample difficulty evolves along with the model's generalization capability in dynamic training environments. As a result, applying uniform or stochastic augmentations without accounting for such dynamics can lead to a mismatch between the augmented data and the model's evolving training needs, ultimately degrading training effectiveness. To address this, we introduce SADA, a Sample-Aware Dynamic Augmentation method that adjusts augmentation strength on the fly based on each sample's evolving influence on model optimization. Specifically, we estimate each sample's influence by projecting its gradient onto the accumulated model update direction and computing the temporal variance of this projection within a local training window. Samples with low variance, indicating stable and consistent influence, are augmented more strongly to emphasize diversity, while unstable samples receive milder transformations to preserve semantic fidelity and stabilize learning. Our method is lightweight, requiring no auxiliary models or policy tuning, and can be seamlessly integrated into existing training pipelines as a plug-and-play module. Experiments across various benchmark datasets and model architectures show consistent improvements from SADA, including +7.3% on fine-grained tasks and +4.3% on long-tailed datasets, highlighting the method's effectiveness and practicality.
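The abstract's scheduling rule (project each sample's gradient onto the accumulated update direction, track the variance of that projection over a local window, and map low variance to stronger augmentation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `SADAScheduler`, the window size, the strength range, and the `1/(1+var)` mapping from variance to strength are all assumptions for demonstration.

```python
import numpy as np
from collections import deque, defaultdict

class SADAScheduler:
    """Sketch of sample-aware augmentation scheduling.

    All names and defaults here are illustrative assumptions,
    not the paper's actual hyperparameters.
    """

    def __init__(self, window=5, s_min=0.1, s_max=1.0):
        self.window = window                # local temporal window size
        self.s_min, self.s_max = s_min, s_max  # augmentation strength range
        # per-sample history of influence scores, bounded by the window
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.update_dir = None              # accumulated model update direction

    def accumulate(self, step_update):
        """Accumulate parameter updates into a running update direction."""
        if self.update_dir is None:
            self.update_dir = np.zeros_like(step_update, dtype=float)
        self.update_dir += step_update

    def influence(self, sample_grad):
        """Project the sample's gradient onto the accumulated update direction."""
        d = self.update_dir / (np.linalg.norm(self.update_dir) + 1e-12)
        return float(np.asarray(sample_grad, dtype=float) @ d)

    def strength(self, sample_id, sample_grad):
        """Return an augmentation strength for this sample at the current step."""
        h = self.history[sample_id]
        h.append(self.influence(sample_grad))
        var = float(np.var(h)) if len(h) > 1 else 0.0
        # Low temporal variance -> stable influence -> augment more strongly;
        # high variance -> unstable sample -> milder transformation.
        w = 1.0 / (1.0 + var)
        return self.s_min + (self.s_max - self.s_min) * w
```

In a training loop, `accumulate` would be called with each optimizer step's parameter delta, and `strength` would be queried per sample before building its augmented view, so the module drops into a standard pipeline without an auxiliary model.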