Language models as tools for investigating the distinction between possible and impossible natural languages

📅 2025-12-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This position paper argues that language models (LMs) can serve as cognitive probes for uncovering the inductive biases underlying human language acquisition—specifically, how learners distinguish "possible natural languages" from "impossible" ones. Method: the authors outline a phased research program in which LM architectures are iteratively refined—via controlled interventions and cross-linguistic learnability assessments—to better discriminate possible from impossible languages. Contribution: by repurposing LMs as investigative tools rather than merely generative systems, the program aims to make abstract claims about linguistic universals operational, measurable, and falsifiable, and to support linking hypotheses between LM inductive biases and human cognition.

📝 Abstract
We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages and thus uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, supporting linking hypotheses to human cognition.
Problem

Research questions and friction points this paper is trying to address.

How can language models be used to investigate the distinction between possible and impossible natural languages?
What inductive biases support human language learning, and can LMs help uncover them?
How should LM architectures be refined to better discriminate the two and to link results to human cognition?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repurposes language models as investigative tools for probing possible vs. impossible languages
Proposes iterative refinement of LM architectures to improve this discrimination
Develops linking hypotheses connecting LM inductive biases to human language learning
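One ingredient of such a learnability assessment can be sketched concretely: train the same simple model on a structured corpus and on an unstructured counterpart built from the same tokens, then compare held-out perplexity. The sketch below is a toy illustration of this idea, not the paper's actual method; the bigram model, the SVO toy grammar, and the global-shuffle "impossible" language are all assumptions chosen for brevity.

```python
import math
import random
from collections import Counter

def bigram_perplexity(train, test, vocab_size, alpha=1.0):
    """Fit an add-alpha-smoothed bigram model on `train` and
    return per-token perplexity on `test`."""
    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train)
    log_prob = 0.0
    for prev, cur in zip(test, test[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(test) - 1))

random.seed(0)
# Toy "possible" language: structured subject-verb-object sentences.
subjects = ["the cat", "a dog", "the bird"]
verbs = ["sees", "chases", "likes"]
objects = ["the ball", "a tree", "the mouse"]
sentences = [
    f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}"
    for _ in range(2000)
]
possible = " ".join(sentences).split()

# "Impossible" counterpart: same tokens, but a global shuffle destroys
# all sequential structure (a crude stand-in for an unlearnable language).
impossible = possible[:]
random.shuffle(impossible)

vocab_size = len(set(possible))
split = len(possible) // 2
ppl_possible = bigram_perplexity(possible[:split], possible[split:], vocab_size)
ppl_impossible = bigram_perplexity(impossible[:split], impossible[split:], vocab_size)
print(f"possible:   {ppl_possible:.2f}")
print(f"impossible: {ppl_impossible:.2f}")
```

The structured corpus yields markedly lower held-out perplexity than its shuffled twin, operationalizing "easier to learn" as a measurable quantity; the paper's program would replace the bigram model with candidate LM architectures and the shuffle with linguistically motivated perturbations.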