🤖 AI Summary
This study addresses a gap in our understanding of how systematic relationships between word forms and meanings support efficient communication: existing accounts lack a detailed treatment of the internal formal regularities of words. The authors propose a novel complexity metric grounded in the learnability of meaning–form mappings, combining cross-linguistic corpus analysis with computational modeling in an information-theoretic framework informed by language learnability theory. Their model of form–meaning mapping shows that verb and pronoun systems balance simplicity (reducing the inventory of grammatical distinctions) against accuracy (keeping intended meanings recoverable). The proposed metric gives a finer-grained characterization of formal regularities in language, substantially improves explanatory power over the distribution of natural language systems, and effectively distinguishes attested linguistic systems from artificial or unattested ones.
📝 Abstract
Languages vary widely in how meanings map to word forms. These mappings have been shown to support efficient communication; however, efficient communication theory does not account for systematic relations within word forms. We examine how a restricted set of grammatical meanings (e.g., person, number) is expressed on verbs and pronouns across typologically diverse languages. Consistent with prior work, we find that verb and pronoun forms are shaped by competing communicative pressures for simplicity (minimizing the inventory of grammatical distinctions) and accuracy (enabling recovery of intended meanings). Crucially, our proposed model uses a novel measure of complexity (the inverse of simplicity) based on the learnability of meaning-to-form mappings. This innovation captures fine-grained regularities in linguistic form, allows better discrimination between attested and unattested systems, and establishes a new connection between efficient communication theory and systematicity in natural language.
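The simplicity/accuracy trade-off described in the abstract can be illustrated with a toy paradigm scorer. This is purely a sketch of the general idea, not the paper's metric (which is learnability-based): here a "system" is a hypothetical map from grammatical meanings to surface forms, complexity is simply inventory size, and accuracy is the fraction of meanings uniquely recoverable from their form.

```python
# Toy illustration of a simplicity/accuracy trade-off over
# meaning-to-form mappings (NOT the paper's learnability-based metric).
# A system maps grammatical meanings (person, number) to surface forms;
# fewer distinct forms is simpler, but syncretism (form sharing) can
# make meanings unrecoverable from the form alone.

def complexity(system):
    """Inventory size: number of distinct forms (inverse of simplicity)."""
    return len(set(system.values()))

def accuracy(system):
    """Fraction of meanings whose form maps back to them uniquely."""
    forms = list(system.values())
    return sum(forms.count(f) == 1 for f in system.values()) / len(system)

# Hypothetical English-like pronoun paradigm: "you" syncretizes
# second-person singular and plural.
english_like = {
    ("1", "sg"): "I",   ("1", "pl"): "we",
    ("2", "sg"): "you", ("2", "pl"): "you",
    ("3", "sg"): "she", ("3", "pl"): "they",
}

# Fully distinct paradigm: maximally accurate but more complex.
distinct = {m: f"form{i}" for i, m in enumerate(english_like)}

print(complexity(english_like), round(accuracy(english_like), 2))  # 5 0.67
print(complexity(distinct), round(accuracy(distinct), 2))          # 6 1.0
```

Attested systems, on the abstract's account, tend to sit near the frontier of this trade-off rather than at either extreme; the paper's contribution is replacing a coarse complexity count like the one above with a measure sensitive to how learnable the mapping's internal regularities are.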