🤖 AI Summary
Traditional morphological inflection treats lemma-tag-form triples uniformly, ignoring their natural frequency distribution in corpora and thereby misrepresenting the input distribution models face in real text. Method: This paper introduces, for the first time, a frequency-aware modeling framework comprising (i) frequency-weighted sampling during training, (ii) lemma-disjoint, frequency-stratified data partitioning, and (iii) a token-level accuracy metric weighted by word frequency. Contribution/Results: Evaluated across 43 languages, the approach yields statistically significant improvements over uniform sampling in 26 languages, particularly enhancing accuracy on high-frequency words. By aligning model training and evaluation with empirical lexical distributions, the framework improves fidelity to natural text and better serves downstream applications requiring robust performance on frequent, functionally salient forms.
📝 Abstract
The traditional approach to morphological inflection, the task of modifying a base word (lemma) to express grammatical categories, has for decades been to treat lexical entries of lemma-tag-form triples uniformly, without any information about their frequency distribution. However, in production deployment, one might expect user inputs to reflect the real-world distribution of frequencies in natural texts. With future deployment in mind, we explore incorporating corpus frequency information into the task of morphological inflection along three key dimensions of system development: (i) for the train-dev-test split, we combine a lemma-disjoint approach, which evaluates the model's generalization capability, with a frequency-weighted strategy that better reflects the realistic distribution of items across frequency bands in the training and test sets; (ii) for evaluation, we complement the standard type accuracy (often referred to simply as accuracy), which treats all items equally regardless of frequency, with token accuracy, which assigns greater weight to frequent words and better approximates performance on running text; (iii) for training data sampling, we introduce a method novel in the context of inflection, frequency-aware training, which explicitly incorporates word frequency into the sampling process. We show that frequency-aware training outperforms uniform sampling in 26 out of 43 languages.
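The three frequency-aware ingredients described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the toy lexicon, the frequency values, and all function names are hypothetical assumptions chosen only to make the ideas concrete.

```python
import random

# Hypothetical toy lexicon: (lemma, tag, form, corpus_frequency) entries.
LEXICON = [
    ("go",   "V;PST", "went",   900),
    ("walk", "V;PST", "walked", 120),
    ("ox",   "N;PL",  "oxen",     5),
    ("cat",  "N;PL",  "cats",   300),
]

def lemma_disjoint_split(entries, test_fraction=0.2, rng=random):
    """(i) Split so that no lemma appears in both train and test."""
    lemmas = sorted({lemma for lemma, *_ in entries})
    rng.shuffle(lemmas)
    n_test = max(1, int(len(lemmas) * test_fraction))
    test_lemmas = set(lemmas[:n_test])
    train = [e for e in entries if e[0] not in test_lemmas]
    test = [e for e in entries if e[0] in test_lemmas]
    return train, test

def type_accuracy(gold, pred):
    """(ii) Standard accuracy: every item counts equally."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def token_accuracy(gold, pred, freqs):
    """(ii) Token accuracy: items weighted by corpus frequency."""
    weighted_correct = sum(f for g, p, f in zip(gold, pred, freqs) if g == p)
    return weighted_correct / sum(freqs)

def frequency_weighted_sample(entries, k, rng=random):
    """(iii) Draw k training items with probability proportional to frequency."""
    weights = [freq for *_, freq in entries]
    return rng.choices(entries, weights=weights, k=k)

# A model that misses only the rare form "oxen" barely dents token accuracy:
gold = ["went", "walked", "oxen", "cats"]
pred = ["went", "walked", "oxes", "cats"]
freqs = [900, 120, 5, 300]
print(type_accuracy(gold, pred))          # 0.75
print(token_accuracy(gold, pred, freqs))  # ≈ 0.9962
```

The last two lines show why the two metrics diverge: one wrong prediction out of four costs 25% of type accuracy, but under token accuracy it costs only the rare form's share of the total frequency mass.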