🤖 AI Summary
Prior studies comparing morphological typologies—particularly agglutinative versus fusional languages—in tokenization and language modeling suffer from confounding factors, including inconsistent data scales, tokenization strategies, and evaluation metrics, undermining reliable conclusions about relative modeling difficulty.
Method: We propose an unsupervised, intrinsic metric based on token bigram frequency as a gradient proxy for morphological complexity, enabling zero-shot prediction of causal language modeling difficulty without expert annotation. We systematically control for tokenization efficiency, dataset size, and morphological alignment to isolate morphological effects.
Contribution/Results: Through rigorous empirical analysis, we disentangle and mitigate key confounds, demonstrating strong correlation between our metric and actual modeling difficulty across diverse languages. Our approach provides a reproducible, interpretable methodological foundation for investigating the interplay between morphology and neural language model performance.
📝 Abstract
The extent to which individual language characteristics influence tokenization and language modeling is an open question. Differences in morphological systems have been suggested as both unimportant and crucial to consider (Cotterell et al., 2018; Gerz et al., 2018a; Park et al., 2021, inter alia). We argue this conflicting evidence is due to confounding factors in experimental setups, making it hard to compare results and draw conclusions. We identify confounding factors in analyses trying to answer the question of whether, and how, morphology relates to language modeling. Next, we re-assess three hypotheses by Arnett & Bergen (2025) for why modeling agglutinative languages results in higher perplexities than fusional languages: morphological alignment of tokenization, tokenization efficiency, and dataset size. We show that each conclusion is affected by confounding factors. Finally, we introduce token bigram metrics as an intrinsic way to predict the difficulty of causal language modeling, and find that they are gradient proxies for morphological complexity that do not require expert annotation. Ultimately, we outline necessities to reliably answer whether, and how, morphology relates to language modeling.
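The abstract does not spell out the exact token bigram statistics used, but the general idea of an unsupervised, tokenizer-level metric can be sketched as follows. The specific quantities below (bigram type/token ratio and mean bigram frequency) are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def bigram_metrics(token_ids):
    """Illustrative token-bigram statistics over a tokenized corpus.

    `token_ids` is a flat sequence of token IDs from any tokenizer.
    Both returned statistics are hypothetical proxies: a higher
    type/token ratio means more distinct bigram contexts (plausibly
    harder to model); a higher mean count means more repetition.
    """
    bigrams = list(zip(token_ids, token_ids[1:]))
    counts = Counter(bigrams)
    n_bigram_tokens = len(bigrams)
    n_bigram_types = len(counts)
    return {
        # fraction of bigram occurrences that are unique contexts
        "bigram_type_token_ratio": n_bigram_types / n_bigram_tokens,
        # average number of times each distinct bigram recurs
        "mean_bigram_count": n_bigram_tokens / n_bigram_types,
    }
```

Because such statistics need only a tokenizer and raw text, they can be computed zero-shot for any language, which is what makes them attractive as annotation-free proxies for morphological complexity.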