CWoMP: Morpheme Representation Learning for Interlinear Glossing

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost of manually producing interlinear glossed text (IGT) in linguistic documentation and the failure of existing automatic approaches to account for the compositional structure of morphemes. It introduces a novel framework that models morphemes as learnable atomic units pairing form and meaning, integrating contrastive pretraining, a contextual encoder, an autoregressive decoder, and a dynamic embedding lexicon. The model aligns words with their constituent morphemes through contrastive learning and enables lexicon expansion at inference time without retraining. Evaluated on multiple low-resource languages, the approach substantially outperforms current methods—particularly in extremely data-scarce settings—while improving computational efficiency and yielding interpretable predictions.

📝 Abstract
Interlinear glossed text (IGT) is a standard notation for language documentation that is linguistically rich but laborious to produce manually. Recent automated IGT methods treat glosses as character sequences, neglecting their compositional structure. We propose CWoMP (Contrastive Word-Morpheme Pretraining), which instead treats morphemes as atomic form-meaning units with learned representations. A contrastively trained encoder aligns words-in-context with their constituent morphemes in a shared embedding space; an autoregressive decoder then generates the morpheme sequence by retrieving entries from a mutable lexicon of these embeddings. Predictions are interpretable, since each is grounded in a lexicon entry, and users can improve results at inference time by expanding the lexicon without retraining. We evaluate on diverse low-resource languages, showing that CWoMP outperforms existing methods while being significantly more efficient, with particularly strong gains in extremely low-resource settings.
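The two mechanisms the abstract describes — contrastive alignment of words with their morphemes, and decoding by retrieval from a mutable embedding lexicon — can be illustrated with a minimal sketch. This is not the authors' implementation: the InfoNCE-style loss, the gloss labels, the embedding dimension, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # illustrative embedding size, not from the paper

# Hypothetical mutable lexicon: morpheme gloss -> learned embedding.
lexicon = {g: rng.standard_normal(DIM) for g in ["dog", "PL", "run", "PST"]}

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def contrastive_loss(word_embs, morpheme_embs, temperature=0.07):
    """InfoNCE-style objective: each word-in-context embedding should be
    most similar to the pooled embedding of its own morphemes, relative
    to the other words in the batch (positives on the diagonal)."""
    w = normalize(word_embs)        # (B, D)
    m = normalize(morpheme_embs)    # (B, D)
    logits = w @ m.T / temperature  # (B, B) cosine-similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def retrieve_gloss(query_emb, lexicon):
    """Decode step: return the lexicon entry whose embedding is nearest
    to the decoder's query vector."""
    glosses = list(lexicon)
    embs = normalize(np.stack([lexicon[g] for g in glosses]))
    return glosses[int(np.argmax(embs @ normalize(query_emb)))]

# A perfectly aligned batch yields a much lower loss than a random pairing:
words = rng.standard_normal((4, DIM))
print(contrastive_loss(words, words)
      < contrastive_loss(words, rng.standard_normal((4, DIM))))

# The lexicon can be expanded at inference time, with no retraining:
lexicon["walk"] = rng.standard_normal(DIM)
print(retrieve_gloss(lexicon["walk"], lexicon))  # → walk
```

The retrieval step is what makes predictions interpretable in the paper's sense: every emitted gloss is traceable to a concrete lexicon entry, and adding an entry immediately makes it a candidate at decode time.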
Problem

Research questions and friction points this paper is trying to address.

Interlinear Glossing
Morpheme Representation
Low-resource Languages
Compositional Structure
Automated IGT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Morpheme Representation
Contrastive Pretraining
Interpretable Generation
Mutable Lexicon
Low-resource Languages
🔎 Similar Papers
2024-06-21 · arXiv.org · Citations: 0
2017-08-30 · Conference on Empirical Methods in Natural Language Processing · Citations: 73