🤖 AI Summary
This position paper proposes using language models (LMs) as cognitive probes to uncover the inductive biases underlying human language acquisition, specifically how learners distinguish "possible natural languages" from "impossible" ones.
Method: A phased research program in which LM architectures are iteratively refined, via controlled interventions and cross-linguistic learnability assessments, to better discriminate possible from impossible languages and thereby model the boundaries of linguistic possibility.
Contribution: Repurposes LMs as interpretable cognitive tools rather than purely generative systems, and articulates linking hypotheses that connect LM learnability results to human cognition. If the program succeeds, it would yield computationally grounded, falsifiable evidence about which inductive biases support human language learning, rendering abstract claims about linguistic universals operational and measurable.
📝 Abstract
We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages, and thus for uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, supporting linking hypotheses between model behavior and human cognition.
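The learnability comparison at the heart of this program can be illustrated in miniature. The sketch below is an illustrative assumption, not the paper's method: it stands in a toy bigram learner for the LM architectures under study, and uses per-sentence word shuffling as the "impossible" perturbation. It shows the kind of measurement the program relies on: a learner with local inductive biases acquires a locally structured language more easily (lower held-out negative log-likelihood) than its shuffled counterpart.

```python
# Toy sketch: compare how easily a simple statistical learner acquires a
# "possible" language (local Markov dependencies) versus an "impossible"
# counterpart (the same sentences with word order randomly shuffled).
# Both the bigram learner and the shuffling perturbation are illustrative
# assumptions, not the paper's actual architectures or interventions.
import math
import random

random.seed(0)
VOCAB = ["a", "b", "c", "d"]
NEXT = {"a": "b", "b": "c", "c": "d", "d": "a"}  # preferred successor

def sample_sentence(length=10):
    """Sample from a strongly local Markov process (the 'possible' language)."""
    tok = random.choice(VOCAB)
    sent = [tok]
    for _ in range(length - 1):
        tok = NEXT[tok] if random.random() < 0.9 else random.choice(VOCAB)
        sent.append(tok)
    return sent

def shuffled(sent):
    """'Impossible' counterpart: same tokens, local word order destroyed."""
    out = sent[:]
    random.shuffle(out)
    return out

def bigram_nll(train, test):
    """Train an add-one-smoothed bigram model; return mean test NLL in nats."""
    counts, totals = {}, {}
    for sent in train:
        for prev, cur in zip(sent, sent[1:]):
            counts[(prev, cur)] = counts.get((prev, cur), 0) + 1
            totals[prev] = totals.get(prev, 0) + 1
    nll, n = 0.0, 0
    for sent in test:
        for prev, cur in zip(sent, sent[1:]):
            p = (counts.get((prev, cur), 0) + 1) / (totals.get(prev, 0) + len(VOCAB))
            nll -= math.log(p)
            n += 1
    return nll / n

possible = [sample_sentence() for _ in range(250)]
impossible = [shuffled(s) for s in possible]

nll_possible = bigram_nll(possible[:200], possible[200:])
nll_impossible = bigram_nll(impossible[:200], impossible[200:])
print(f"possible: {nll_possible:.2f} nats, impossible: {nll_impossible:.2f} nats")
```

In the full program, the learner's architecture (its inductive biases) would itself be the object of study: architectures whose learnability gap between possible and impossible languages mirrors that of human learners become candidate models of the relevant biases.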