Is linguistically-motivated data augmentation worth it?

📅 2025-06-04
🤖 AI Summary
This work systematically investigates the efficacy of linguistically motivated data augmentation for machine translation and interlinear glossing in two low-resource languages, Uspanteko and Arapaho. Addressing the lack of empirical comparison between linguistically naive and linguistically informed augmentation strategies in prior work, the authors demonstrate, contrary to common assumptions, that the benefit of linguistic constraints hinges not on the "correctness" of the linguistic rules but on the consistency between the augmented data and the original training distribution. They evaluate diverse augmentation methods, including random perturbation, morphological rule application, and syntactic template filling, within sequence-to-sequence models. Results show that linguistically grounded augmentation yields substantial gains when distributional alignment is preserved (BLEU +3.2, F1 +4.1), yet underperforms simple perturbations under distributional shift. This finding establishes a practical guideline for data augmentation design in low-resource NLP.

📝 Abstract
Data augmentation, a widely-employed technique for addressing data scarcity, involves generating synthetic data examples which are then used to augment available training data. Researchers have seen surprising success from simple methods, such as random perturbations from natural examples, where models seem to benefit even from data with nonsense words, or data that doesn't conform to the rules of the language. A second line of research produces synthetic data that does in fact follow all linguistic constraints; these methods require some linguistic expertise and are generally more challenging to implement. No previous work has done a systematic, empirical comparison of both linguistically-naive and linguistically-motivated data augmentation strategies, leaving uncertainty about whether the additional time and effort of linguistically-motivated data augmentation work in fact yields better downstream performance. In this work, we conduct a careful and comprehensive comparison of augmentation strategies (both linguistically-naive and linguistically-motivated) for two low-resource languages with different morphological properties, Uspanteko and Arapaho. We evaluate the effectiveness of many different strategies and their combinations across two important sequence-to-sequence tasks for low-resource languages: machine translation and interlinear glossing. We find that linguistically-motivated strategies can have benefits over naive approaches, but only when the new examples they produce are not significantly unlike the training data distribution.
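The "random perturbations from natural examples" mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical implementation, not the paper's actual code: it copies a (source, target) training pair and replaces one random source token with a nonsense word formed by shuffling its characters, the kind of linguistically-naive synthetic example the paper contrasts with rule-based alternatives.

```python
import random


def random_perturb(pairs, n_new, seed=0):
    """Naive augmentation sketch: duplicate a random (source, target)
    pair and corrupt one source token into a nonsense word by shuffling
    its characters. Illustrative only."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_new):
        src, tgt = rng.choice(pairs)
        tokens = src.split()
        i = rng.randrange(len(tokens))
        # Build a nonsense token from the original word's characters.
        chars = list(tokens[i])
        rng.shuffle(chars)
        tokens[i] = "".join(chars)
        augmented.append((" ".join(tokens), tgt))
    return augmented
```

Because the perturbed sentences stay close in length and vocabulary statistics to the originals, examples like these remain near the training distribution, which is exactly the condition the paper identifies as governing whether augmentation helps.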
Problem

Research questions and friction points this paper is trying to address.

Comparing linguistically-naive vs linguistically-motivated data augmentation strategies
Evaluating augmentation effectiveness for low-resource language tasks
Assessing whether linguistic constraints improve downstream model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares linguistically-naive and linguistically-motivated data augmentation
Evaluates augmentation strategies on low-resource languages
Finds that linguistically-motivated strategies help only when the synthetic data stays close to the training distribution