Flexing in 73 Languages: A Single Small Model for Multilingual Inflection

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing work lacks open-source, general-purpose, multilingual morphological inflection systems, particularly for highly inflected languages like Czech, where out-of-vocabulary (OOV) word generation remains challenging. This paper introduces an open-source, lightweight multilingual joint inflection model that uniformly models lemma–tag–form triples across 73 languages. The authors propose a frequency-weighted, lemma-disjoint data-splitting strategy and integrate Universal Dependencies (UD) treebank annotations with lexical frequency statistics within a shared-parameter architecture. Compared to monolingual baselines, the model achieves superior performance in most languages, improving cross-lingual generalization and OOV word generation quality, while substantially reducing deployment complexity and enabling efficient inference.
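The summary above describes a model that maps lemma–tag pairs to inflected forms. The paper does not specify its exact input serialization here, so the following is only a sketch of one common character-level seq2seq encoding, with an assumed language-ID token and tag splitting on `|`:

```python
def encode_example(lemma, tag, lang):
    """Serialize an inflection instance for a char-level seq2seq model.

    Hypothetical format: a language-ID symbol, the characters of the
    lemma, a separator, and the morphological tag split into features.
    The target side would simply be the characters of the inflected form.
    """
    return [f"<{lang}>"] + list(lemma) + ["<sep>"] + tag.split("|")

# e.g. Czech "hrad" (castle) in genitive singular; the expected output
# form would be "hradu", represented as list("hradu") on the target side
src = encode_example("hrad", "NOUN|Case=Gen|Number=Sing", "cs")
```

This kind of flat symbol sequence lets a single shared-parameter model consume all 73 languages uniformly, with the language token steering language-specific behavior.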

📝 Abstract
We present a compact, single-model approach to multilingual inflection, the task of generating inflected word forms from base lemmas to express grammatical categories. Our model, trained jointly on data from 73 languages, is lightweight, robust to unseen words, and outperforms monolingual baselines in most languages. This demonstrates the effectiveness of multilingual modeling for inflection and highlights its practical benefits: simplifying deployment by eliminating the need to manage and retrain dozens of separate monolingual models. In addition to the standard SIGMORPHON shared task benchmarks, we evaluate our monolingual and multilingual models on 73 Universal Dependencies (UD) treebanks, extracting lemma-tag-form triples and their frequency counts. To ensure realistic data splits, we introduce a novel frequency-weighted, lemma-disjoint train-dev-test resampling procedure. Our work addresses the lack of an open-source, general-purpose, multilingual morphological inflection system capable of handling unseen words across a wide range of languages, including Czech. All code is publicly released at: https://github.com/tomsouri/multilingual-inflection.
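The abstract mentions extracting lemma-tag-form triples and their frequency counts from UD treebanks. A minimal sketch of that extraction step, assuming standard CoNLL-U column layout (FORM, LEMMA, UPOS, FEATS) and using UPOS plus FEATS as the tag; the paper's actual extraction may differ:

```python
from collections import Counter

def extract_triples(conllu_text):
    """Count (lemma, tag, form) triples from CoNLL-U formatted text.

    Columns follow the CoNLL-U standard: ID, FORM, LEMMA, UPOS, XPOS,
    FEATS, HEAD, DEPREL, DEPS, MISC. We build the tag from UPOS and
    FEATS, and count each triple's corpus frequency.
    """
    counts = Counter()
    for line in conllu_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and sentence-level comments
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue  # skip multiword-token ranges and empty nodes
        form, lemma, upos, feats = cols[1], cols[2], cols[3], cols[5]
        counts[(lemma, f"{upos}|{feats}", form)] += 1
    return counts
```

The frequency counts are what make the resampling procedure "frequency-weighted": common triples carry more mass than rare ones when the splits are balanced.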
Problem

Research questions and friction points this paper is trying to address.

Develops a compact multilingual model for generating inflected word forms
Addresses the lack of open-source inflection systems covering 73 languages
Handles unseen (OOV) words across typologically diverse languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single lightweight model handles inflection in 73 languages
Uses a frequency-weighted, lemma-disjoint data-splitting method
Joint training enables robust handling of unseen words
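The lemma-disjoint splitting idea above can be sketched as follows: group all triples by lemma so no lemma straddles splits, then greedily assign each lemma's whole frequency mass to whichever split is furthest below its target share. The function and its greedy assignment rule are assumptions for illustration, not the paper's exact resampling procedure:

```python
import random

def lemma_disjoint_split(triples, weights=(0.8, 0.1, 0.1), seed=0):
    """Split (lemma, tag, form, count) records into lemma-disjoint
    train/dev/test sets, targeting the given fractions of total token
    frequency (not type count)."""
    rng = random.Random(seed)
    # group records by lemma so a lemma never appears in two splits
    by_lemma = {}
    for rec in triples:
        by_lemma.setdefault(rec[0], []).append(rec)
    lemmas = list(by_lemma)
    rng.shuffle(lemmas)
    total = sum(rec[-1] for recs in by_lemma.values() for rec in recs)
    targets = [w * total for w in weights]
    splits = [[] for _ in weights]
    filled = [0.0] * len(weights)
    for lemma in lemmas:
        recs = by_lemma[lemma]
        mass = sum(rec[-1] for rec in recs)
        # greedily assign to the split furthest below its frequency target
        i = max(range(len(weights)), key=lambda k: targets[k] - filled[k])
        splits[i].extend(recs)
        filled[i] += mass
    return splits
```

Because splits are disjoint at the lemma level, every dev/test lemma is unseen at training time, which is what makes the evaluation of OOV generation realistic.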
Tomáš Sourada
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics, Prague, Czech Republic
Jana Straková
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics
natural language processing · deep learning · named entity recognition · open-source tools