Which Word Orders Facilitate Length Generalization in LMs? An Investigation with GCG-Based Artificial Languages

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether language models (LMs) possess typology-driven inductive biases—specifically, whether they more readily acquire and generalize to typologically frequent word orders (e.g., SOV, SVO) than to rare or attested-violating structures. Method: The authors construct synthetic languages grounded in Generalized Categorial Grammar (GCG), enabling modeling of non-local dependencies and mild context-sensitivity while systematically controlling word-order typology. Transformer models are trained and evaluated on length generalization under rigorously isolated experimental conditions. Contribution/Results: Typological frequency robustly predicts model generalization difficulty: common word orders yield up to a 2.3× improvement in length extrapolation success rate over rare ones. This provides the first typological evidence for structural inductive biases in LMs. Moreover, the work establishes GCG as a novel, controllable probe framework for modeling complex syntactic phenomena in neural language modeling.

Technology Category

Application Category

📝 Abstract
Whether language models (LMs) have inductive biases that favor typologically frequent grammatical properties over rare, implausible ones has been investigated, typically using artificial languages (ALs) (White and Cotterell, 2021; Kuribayashi et al., 2024). In this paper, we extend these works from two perspectives. First, we extend their context-free AL formalization by adopting Generalized Categorial Grammar (GCG) (Wood, 2014), which allows ALs to cover attested but previously overlooked constructions, such as unbounded dependency and mildly context-sensitive structures. Second, our evaluation focuses more on the generalization ability of LMs to process unseen longer test sentences. Thus, our ALs better capture features of natural languages and our experimental paradigm leads to clearer conclusions -- typologically plausible word orders tend to be easier for LMs to productively generalize.
Problem

Research questions and friction points this paper is trying to address.

Investigating word orders enabling LM length generalization
Using GCG-based artificial languages for broader coverage
Testing typologically plausible word order generalization ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adopts Generalized Categorial Grammar for artificial languages
Evaluates generalization to unseen longer test sentences
Tests typologically plausible word orders in models
🔎 Similar Papers
No similar papers found.