Massively Multilingual Joint Segmentation and Glossing

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of automatic morpheme-level glossing: existing models assign glosses to whole words without predicting intra-word morphological boundaries, producing output that is hard to interpret and therefore hard for linguists to trust. To overcome this, the authors propose PolyGloss, a family of multilingual sequence-to-sequence models that, for the first time in a neural framework, jointly predict morpheme boundaries and per-morpheme glosses directly from raw text. Pretrained on an expanded version of the GlossLM corpus, PolyGloss surpasses GlossLM in glossing quality and outperforms open-source large language models in segmentation accuracy, glossing, and the alignment between the two tasks; it can also be adapted quickly to new datasets via low-rank adaptation (LoRA), improving the practicality and reliability of language documentation workflows.
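The LoRA adaptation mentioned above can be sketched in a few lines: a frozen weight matrix is augmented with a trainable low-rank update, so only a small number of parameters are tuned when adapting the pretrained model to a new glossing dataset. The matrices, sizes, and scaling below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of low-rank adaptation (LoRA): the frozen weight W is
# augmented with a trainable low-rank update B @ A, scaled by alpha / r,
# so only r * (d_in + d_out) parameters are tuned per adapted layer.

def matmul(X, Y):
    """Plain-Python matrix product for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha, r):
    """W_eff = W + (alpha / r) * (B @ A); W stays frozen, only A and B train."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy shapes: d_out = 2, d_in = 2, rank r = 1, so B is 2x1 and A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(lora_effective_weight(W, A, B, alpha=1.0, r=1))
# → [[1.5, 0.5], [1.0, 2.0]]
```

Because the rank r is much smaller than the weight dimensions in practice, this is why the paper can report quick adaptation to a new dataset.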

📝 Abstract
Automated interlinear gloss prediction with neural networks is a promising approach to accelerate language documentation efforts. However, while state-of-the-art models like GlossLM achieve high scores on glossing benchmarks, user studies with linguists have found critical barriers to the usefulness of such models in real-world scenarios. In particular, existing models typically generate morpheme-level glosses but assign them to whole words without predicting the actual morpheme boundaries, making the predictions less interpretable and thus untrustworthy to human annotators. We conduct the first study on neural models that jointly predict interlinear glosses and the corresponding morphological segmentation from raw text. We run experiments to determine the optimal way to train models that balance segmentation and glossing accuracy, as well as the alignment between the two tasks. We extend the training corpus of GlossLM and pretrain PolyGloss, a family of seq2seq multilingual models for joint segmentation and glossing that outperforms GlossLM on glossing and beats various open-source LLMs on segmentation, glossing, and alignment. In addition, we demonstrate that PolyGloss can be quickly adapted to a new dataset via low-rank adaptation.
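To make the joint task concrete, the sketch below shows what a jointly predicted target might look like and how segmentation/glossing alignment can be checked. The serialization format (hyphen-separated morphemes, tab-separated gloss line) is a hypothetical illustration in the spirit of interlinear glossed text, not the paper's actual encoding.

```python
# Illustrative sketch (not the paper's exact format): a joint target pairs the
# segmented surface form with its per-morpheme glosses, and the two tasks are
# "aligned" when every morpheme receives exactly one gloss.

def parse_joint_output(joint: str) -> tuple[list[str], list[str]]:
    """Split a hypothetical 'seg \t gloss' target into morphemes and glosses."""
    seg, gloss = joint.split("\t")
    return seg.split("-"), gloss.split("-")

def is_aligned(morphemes: list[str], glosses: list[str]) -> bool:
    # One gloss per predicted morpheme boundary segment.
    return len(morphemes) == len(glosses)

# Turkish 'evlerimde' ("in my houses"), segmented and glossed morpheme by morpheme.
morphemes, glosses = parse_joint_output("ev-ler-im-de\thouse-PL-1SG.POSS-LOC")
print(morphemes)                       # → ['ev', 'ler', 'im', 'de']
print(is_aligned(morphemes, glosses))  # → True
```

Predicting the segmentation line explicitly, rather than word-level gloss strings alone, is what lets a human annotator verify each gloss against the morpheme it annotates.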
Problem

Research questions and friction points this paper is trying to address.

interlinear glossing
morphological segmentation
neural models
language documentation
morpheme boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

joint segmentation and glossing
morphological segmentation
interlinear glossing
multilingual neural model
low-rank adaptation