Corpus Frequencies in Morphological Inflection: Do They Matter?

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional morphological inflection treats lemma–tag–form triples uniformly, ignoring their natural frequency distribution in corpora, thereby limiting model generalization. Method: This paper introduces, for the first time, a frequency-aware modeling framework comprising (i) frequency-weighted sampling during training, (ii) lemma-disjoint, frequency-stratified data partitioning, and (iii) a token-level accuracy metric weighted by word frequency. Contribution/Results: Evaluated across 43 languages, the approach yields statistically significant improvements over uniform sampling in 26 languages—particularly enhancing accuracy on high-frequency words. By aligning model learning and evaluation with empirical lexical distributions, the framework improves fidelity to natural text and better serves downstream applications requiring robust performance on frequent, functionally salient forms.

Technology Category

Application Category

📝 Abstract
The traditional approach to morphological inflection (the task of modifying a base word (lemma) to express grammatical categories) has been, for decades, to consider lexical entries of lemma-tag-form triples uniformly, lacking any information about their frequency distribution. However, in production deployment, one might expect the user inputs to reflect a real-world distribution of frequencies in natural texts. With future deployment in mind, we explore the incorporation of corpus frequency information into the task of morphological inflection along three key dimensions during system development: (i) for train-dev-test split, we combine a lemma-disjoint approach, which evaluates the model's generalization capabilities, with a frequency-weighted strategy to better reflect the realistic distribution of items across different frequency bands in training and test sets; (ii) for evaluation, we complement the standard type accuracy (often referred to simply as accuracy), which treats all items equally regardless of frequency, with token accuracy, which assigns greater weight to frequent words and better approximates performance on running text; (iii) for training data sampling, we introduce a method novel in the context of inflection, frequency-aware training, which explicitly incorporates word frequency into the sampling process. We show that frequency-aware training outperforms uniform sampling in 26 out of 43 languages.
Problem

Research questions and friction points this paper is trying to address.

Incorporating corpus frequency into morphological inflection training
Evaluating models with frequency-weighted splits and token accuracy
Developing frequency-aware sampling to improve inflection performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-weighted train-dev-test split strategy
Token accuracy evaluation for frequent words
Frequency-aware training data sampling method
🔎 Similar Papers
No similar papers found.
T
Tomáš Sourada
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics, Prague, Czech Republic
Jana Straková
Jana Straková
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics
natural language processingdeep learningnamed entity recognitionopen-source tools