Improving Low-Resource Morphological Inflection via Self-Supervised Objectives

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the scarcity of unlabeled data for character-level morphological inflection in low-resource languages. We systematically investigate the impact of self-supervised objectives on encoder-decoder Transformers, evaluating 13 self-supervised strategies across 19 low-resource languages. Our method introduces a morpheme-boundary-aware masking strategy, integrating linguistic priors into the pretraining objective. Results show that autoencoding excels under extremely low-resource conditions, whereas character masked language modeling (CMLM) progressively outperforms alternatives as unlabeled data availability increases. Crucially, incorporating morphological structure into masking boosts CMLM's average accuracy by 4.2% over a strong inductive-bias baseline. This demonstrates that morphology-informed self-supervision significantly enhances both effectiveness and scalability for low-resource sequence generation tasks—particularly inflection—without requiring labeled data or language-specific architectural modifications.
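The morpheme-boundary-aware masking idea can be illustrated with a short sketch. The paper's exact sampling scheme is not reproduced here; this hypothetical version simply masks whole morphemes of a pre-segmented word instead of individual characters, so each masked span aligns with a known morpheme boundary:

```python
import random

def morpheme_aware_mask(morphemes, mask_rate=0.3, mask_token="<MASK>", seed=0):
    """Mask whole morphemes rather than random characters.

    `morphemes` is a pre-segmented word, e.g. ["un", "break", "able"].
    This is an illustrative sketch, not the paper's exact procedure:
    each morpheme is masked independently with probability `mask_rate`.
    """
    rng = random.Random(seed)
    out = []
    for m in morphemes:
        if rng.random() < mask_rate:
            # replace every character of the morpheme with the mask token,
            # so the corrupted span respects the morpheme boundary
            out.extend([mask_token] * len(m))
        else:
            out.extend(list(m))
    return out
```

With `mask_rate=1.0` every morpheme is fully masked; with `mask_rate=0.0` the character sequence is returned unchanged.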

📝 Abstract
Self-supervised objectives have driven major advances in NLP by leveraging large-scale unlabeled data, but such resources are scarce for many of the world's languages. Surprisingly, they have not been explored much for character-level tasks, where smaller amounts of data have the potential to be beneficial. We investigate the effectiveness of self-supervised auxiliary tasks for morphological inflection -- a character-level task highly relevant for language documentation -- in extremely low-resource settings, training encoder-decoder transformers for 19 languages and 13 auxiliary objectives. Autoencoding yields the best performance when unlabeled data is very limited, while character masked language modeling (CMLM) becomes more effective as data availability increases. Though objectives with stronger inductive biases influence model predictions intuitively, they rarely outperform standard CMLM. However, sampling masks based on known morpheme boundaries consistently improves performance, highlighting a promising direction for low-resource morphological modeling.
Problem

Research questions and friction points this paper is trying to address.

Exploring self-supervised tasks for low-resource morphological inflection
Evaluating effectiveness of auxiliary objectives in limited data settings
Improving performance via morpheme-aware masking in inflection modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised tasks for low-resource inflection
Autoencoding and CMLM optimize data usage
Morpheme-based masking boosts model performance
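The two auxiliary objectives contrasted above can be sketched as example-construction routines on an unlabeled word. The formats below are hypothetical (the paper's exact input/output encodings are not given here): autoencoding reconstructs the word from itself, while character-level CMLM recovers the word from a randomly corrupted copy.

```python
import random

def make_aux_examples(word, mask_token="_", mask_rate=0.25, seed=1):
    """Build (input, target) pairs for two self-supervised objectives
    on an unlabeled word. Illustrative formats, not the paper's exact ones.

    - autoencoding: the model reconstructs the word from itself
    - CMLM: random characters are replaced with a mask symbol and
      must be recovered from the corrupted input
    """
    rng = random.Random(seed)
    autoencode = (list(word), list(word))
    corrupted = [mask_token if rng.random() < mask_rate else c for c in word]
    cmlm = (corrupted, list(word))
    return {"autoencoding": autoencode, "cmlm": cmlm}
```

Under this framing, autoencoding needs no corruption step at all, which is consistent with it being the more robust objective when unlabeled data is very scarce.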
🔎 Similar Papers
2017-08-30 · Conference on Empirical Methods in Natural Language Processing · Citations: 73
2024-06-21 · arXiv.org · Citations: 0