🤖 AI Summary
Existing NLP data augmentation methods primarily focus on increasing sample quantity while neglecting the semantic distributional diversity of augmented samples, leading to model overfitting and degraded generalization. To address this, we propose DoAug—a novel framework that explicitly models *sample distributional diversity* as the core optimization objective of data augmentation. DoAug first selects high-information samples via coreset selection, then employs *diversity-oriented fine-tuning* of large language models to construct a paraphraser that generates semantically consistent, label-preserving, and distributionally diverse paraphrases. The method integrates diversity-aware fine-tuning, semantic consistency constraints, and a differentiable distributional diversity metric. Evaluated on 12 real-world text classification datasets, DoAug achieves an average improvement of 10.52% in downstream task performance, significantly outperforming state-of-the-art augmentation baselines.
📝 Abstract
Data augmentation is an essential technique in natural language processing (NLP) for enriching training datasets by generating diverse samples. This process is crucial for improving the robustness and generalization capabilities of NLP models. However, a significant challenge remains: *insufficient attention to sample distribution diversity*. Most existing methods focus on increasing the number of samples while neglecting the diversity of the sample distribution, which can lead to model overfitting. In response, we explore data augmentation's impact on dataset diversity and propose a **D**iversity-**o**riented data **Aug**mentation framework (**DoAug**). Specifically, we utilize a diversity-oriented fine-tuning approach to train an LLM as a diverse paraphraser, which is capable of augmenting textual datasets by generating diversified paraphrases. Then, we apply the LLM paraphraser to a selected coreset of highly informative samples and integrate the paraphrases with the original data to create a more diverse augmented dataset. Finally, we conduct extensive experiments on 12 real-world textual datasets. The results show that our fine-tuned LLM augmenter improves diversity while preserving label consistency, thereby enhancing the robustness and performance of downstream tasks. Specifically, it achieves an average performance gain of 10.52%, surpassing the runner-up baseline by more than three percentage points.
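The pipeline described above (score samples, keep a high-information coreset, paraphrase it, merge the paraphrases back) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the entropy-based informativeness score, the function names, and the stub `paraphrase` (standing in for the fine-tuned LLM paraphraser) are all assumptions made for the example.

```python
import math

def informativeness(sample):
    # Stand-in "informativeness" score: predictive entropy of a
    # classifier's label distribution for this sample (an assumption;
    # the paper's coreset selection criterion may differ).
    return -sum(p * math.log(p) for p in sample["probs"] if p > 0)

def select_coreset(dataset, k):
    # Keep the k most informative (highest-entropy) samples.
    return sorted(dataset, key=informativeness, reverse=True)[:k]

def paraphrase(text, n=2):
    # Placeholder for the diversity-fine-tuned LLM paraphraser; a real
    # system would sample n diverse, label-preserving rewrites.
    return [f"{text} (paraphrase {i + 1})" for i in range(n)]

def doaug_augment(dataset, k, n_paraphrases=2):
    # Merge paraphrases of the coreset back into the original data,
    # copying each source sample's label to preserve supervision.
    augmented = list(dataset)
    for sample in select_coreset(dataset, k):
        for p in paraphrase(sample["text"], n_paraphrases):
            augmented.append({"text": p, "label": sample["label"],
                              "probs": sample["probs"]})
    return augmented

dataset = [
    {"text": "great movie", "label": 1, "probs": [0.9, 0.1]},
    {"text": "not sure how I feel", "label": 0, "probs": [0.5, 0.5]},
]
# With k=1, only the most uncertain sample gets paraphrased,
# growing the dataset from 2 to 4 examples.
out = doaug_augment(dataset, k=1)
```

The key design point the sketch mirrors is that paraphrases are generated only for the selected coreset rather than the whole dataset, concentrating the augmentation budget on the most informative samples.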