🤖 AI Summary
This work addresses a key limitation of conventional data augmentation: existing methods rarely preserve task relevance while also introducing diverse, realistic synthetic data, and mismatched augmentations frequently degrade performance. To overcome this, the authors propose EvoAug, a framework that combines conditional diffusion models and few-shot NeRF-based generative models with an evolutionary algorithm to automatically discover task-specific structured stochastic augmentation trees. This approach yields adaptive, learnable augmentation strategies tailored to the downstream task. Extensive experiments on fine-grained classification and few-shot learning benchmarks show that EvoAug significantly improves model performance, validating both the effectiveness and the generalization capability of the learned augmentation policies.
📝 Abstract
Data augmentation has long been a cornerstone for reducing overfitting in vision models, with methods like AutoAugment automating the design of task-specific augmentations. Recent advances in generative models, such as conditional diffusion and few-shot NeRFs, offer a new paradigm for data augmentation by synthesizing data with significantly greater diversity and realism. However, unlike traditional augmentations such as cropping or rotation, these methods introduce substantial changes that can enhance robustness but also risk degrading performance when the augmentations are poorly matched to the task. In this work, we present EvoAug, an automated augmentation learning pipeline that leverages these generative models alongside an efficient evolutionary algorithm to learn optimal task-specific augmentations. Our pipeline introduces a novel approach to image augmentation: it learns stochastic augmentation trees, hierarchical compositions of augmentations that enable more structured and adaptive transformations. We demonstrate strong performance across fine-grained classification and few-shot learning tasks. Notably, our pipeline discovers augmentations that align with domain knowledge, even in low-data settings. These results highlight the potential of learned generative augmentations, unlocking new possibilities for robust model training.
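To make the core idea concrete, the sketch below shows one plausible reading of a stochastic augmentation tree searched by a simple evolutionary loop. All names (`AugNode`, `evolve`, the toy primitives and fitness) are illustrative assumptions, not the paper's actual API; real EvoAug would use generative augmentations and downstream validation accuracy as fitness.

```python
import random

# Toy augmentation primitives standing in for generative transforms.
# An "image" here is just a list of numbers.
def brighten(x): return [v + 1 for v in x]
def darken(x):  return [v - 1 for v in x]
def double(x):  return [v * 2 for v in x]

PRIMITIVES = [brighten, darken, double]

class AugNode:
    """One node of a stochastic augmentation tree: applies its op with
    probability p, then recursively applies its children."""
    def __init__(self, op, p, children=None):
        self.op, self.p, self.children = op, p, children or []

    def __call__(self, x, rng):
        if rng.random() < self.p:
            x = self.op(x)
        for child in self.children:
            x = child(x, rng)
        return x

def random_tree(rng, depth=2):
    node = AugNode(rng.choice(PRIMITIVES), rng.uniform(0.2, 0.9))
    if depth > 0 and rng.random() < 0.5:
        node.children.append(random_tree(rng, depth - 1))
    return node

def mutate(tree, rng):
    """Return a copy with perturbed probabilities and occasionally
    swapped operations."""
    clone = AugNode(tree.op, tree.p, [mutate(c, rng) for c in tree.children])
    if rng.random() < 0.3:
        clone.p = min(1.0, max(0.0, clone.p + rng.uniform(-0.2, 0.2)))
    if rng.random() < 0.2:
        clone.op = rng.choice(PRIMITIVES)
    return clone

def evolve(fitness, generations=10, pop_size=8, seed=0):
    """Minimal elitist loop: keep the best tree, refill with mutants."""
    rng = random.Random(seed)
    pop = [random_tree(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[0]
        pop = [best] + [mutate(best, rng) for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

# Toy fitness: average output magnitude over a few stochastic samples,
# a stand-in for validation accuracy of a model trained with this tree.
def toy_fitness(tree, rng=random.Random(1)):
    return sum(sum(tree([1, 2, 3], rng)) for _ in range(5))

best = evolve(toy_fitness)
```

The hierarchical structure matters because each node fires independently, so one tree encodes a distribution over composed transformations rather than a single fixed policy.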