Linguistic Generalizations are not Rules: Impacts on Evaluation of LMs

📅 2025-02-18
🤖 AI Summary
This paper identifies a pervasive yet implicit assumption in current language model (LM) evaluation—namely, that natural language adheres to formal symbolic rules—despite robust linguistic evidence that language is constructional, context-dependent, and flexibly generative. Through conceptual analysis and critique grounded in linguistic theory, the authors systematically argue that LMs’ deviations from formal rules do not necessarily indicate failure; rather, such behavior may reflect legitimate generalization over constructional patterns. The study advocates abandoning rule-centrism in favor of an evaluation paradigm rooted in construction grammar and contextualized understanding. Its primary contributions are threefold: (1) the first systematic deconstruction of symbolist presuppositions embedded in LM benchmarks; (2) the proposal of human-like flexible generalization—not rule compliance—as the central evaluative criterion; and (3) a theoretical and methodological foundation for developing more cognitively plausible, empirically grounded NLP evaluation frameworks.

📝 Abstract
Linguistic evaluations of how well LMs generalize to produce or understand novel text often implicitly take for granted that natural languages are generated by symbolic rules. Grammaticality is thought to be determined by whether or not sentences obey such rules. Interpretation is believed to be compositionally generated by syntactic rules operating on meaningful words. Semantic parsing is intended to map sentences into formal logic. Failures of LMs to obey strict rules have been taken to reveal that LMs do not produce or understand language like humans. Here we suggest that LMs' failures to obey symbolic rules may be a feature rather than a bug, because natural languages are not based on rules. New utterances are produced and understood by a combination of flexible interrelated and context-dependent schemata or constructions. We encourage researchers to reimagine appropriate benchmarks and analyses that acknowledge the rich flexible generalizations that comprise natural languages.
Problem

Research questions and friction points this paper is trying to address.

LMs' failure to follow symbolic rules may be beneficial
Natural languages rely on flexible, context-dependent schemata
Reimagining benchmarks to capture language's rich generalizations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic deconstruction of the symbolist assumptions embedded in LM benchmarks
Human-like flexible generalization, rather than rule compliance, as the central evaluative criterion
A theoretical and methodological foundation for construction-grammar-based evaluation frameworks