🤖 AI Summary
This study investigates the functionalist hypothesis that speakers' communicative intentions drive syntactic evolution. To that end, it introduces reinforcement learning (RL) into grammatical evolution modeling for the first time, proposing a "message-probability-driven stepwise learning mechanism" that simulates the incremental acquisition of syntactic and semantic composition rules. The approach integrates probabilistic modeling, numerical simulation, formal language analysis, and empirical historical linguistics, pairing analytic derivation with empirical validation. Theoretically, the study establishes rigorous mathematical conditions for the emergence of grammatical structure; methodologically, it bridges formal linguistics and RL; empirically, it replicates multiple diachronic pathways and tests the model's explanatory power against two classic cases from the history of English, namely the decay of the case system and the evolution of auxiliaries. The result is a computationally tractable and empirically testable functionalist framework for language change.
📝 Abstract
The evolution of grammatical systems of syntactic and semantic composition is modeled here with a novel application of reinforcement learning theory. To test the functionalist thesis that speakers' expressive purposes shape their language, we include within the model a probability distribution over different messages that could be expressed in a given context. The proposed learning and production algorithm then breaks down language learning into a sequence of simple steps, such that each step benefits from the message probabilities. The results are presented in the form of numerical simulations of language histories and analytic proofs. The potential for applying these mathematical models to the study of natural language is illustrated with two case studies from the history of English.
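To make the abstract's central idea concrete, here is a minimal toy sketch (not the paper's actual algorithm) of reinforcement learning driven by a probability distribution over messages: a learner acquires message-to-form conventions one simple step at a time, and messages that speakers need to express more often get practiced, and therefore learned, more. All names (the message inventory, candidate word orders, and target conventions) are illustrative assumptions.

```python
import random

random.seed(0)

# Illustrative assumptions: two messages, three candidate word orders,
# and a fixed "community convention" that rewards one form per message.
MESSAGES = ["agent-acts", "agent-acts-on-patient"]
FORMS = ["SV", "SVO", "OVS"]
MESSAGE_PROBS = {"agent-acts": 0.3, "agent-acts-on-patient": 0.7}
TARGET = {"agent-acts": "SV", "agent-acts-on-patient": "SVO"}

# One reinforcement weight per (message, form) rule, all initially equal.
weights = {(m, f): 1.0 for m in MESSAGES for f in FORMS}

def produce(message):
    """Sample a form for a message proportionally to current rule weights."""
    w = [weights[(message, f)] for f in FORMS]
    return random.choices(FORMS, weights=w)[0]

for step in range(5000):
    # Speakers' expressive needs: draw a message from its probability
    # distribution, so frequent messages drive more learning steps.
    msg = random.choices(MESSAGES, weights=[MESSAGE_PROBS[m] for m in MESSAGES])[0]
    form = produce(msg)
    if form == TARGET[msg]:
        # Successful communication reinforces the rule that produced it
        # (simple Roth-Erev style update).
        weights[(msg, form)] += 1.0

for m in MESSAGES:
    best = max(FORMS, key=lambda f: weights[(m, f)])
    print(m, "->", best)
```

In this sketch the more probable message accumulates reinforcement faster, so its convention stabilizes sooner, which is one way to read the functionalist claim that expressive purposes shape the trajectory of grammatical change.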