🤖 AI Summary
This study investigates how the human brain represents linguistic constructions and whether artificial neural language models share similar representational mechanisms. EEG was recorded while participants listened to sentences from four syntactic constructions; time-frequency analysis combined with machine learning classification revealed that construction-specific neural signatures emerge predominantly at sentence-final positions, most prominently in the alpha frequency band, with the clearest distinction between ditransitive and resultative constructions. A direct comparison with the internal representations of recurrent neural networks and Transformer models demonstrates, for the first time with neural evidence, substantial convergence between human and artificial systems in how constructions are represented. These findings support construction grammar theory and suggest that linguistic abstraction is constrained by a “Platonic representational space.”
📝 Abstract
Understanding how the brain processes linguistic constructions is a central challenge in cognitive neuroscience and linguistics. Recent computational studies show that artificial neural language models spontaneously develop differentiated representations of Argument Structure Constructions (ASCs), generating predictions about when and how construction-level information emerges during processing. The present study tests these predictions in human neural activity using electroencephalography (EEG). Ten native English speakers listened to 200 synthetically generated sentences across four construction types (transitive, ditransitive, caused-motion, resultative) while neural responses were recorded. Analyses combining time-frequency decomposition, feature extraction, and machine learning classification revealed construction-specific neural signatures that emerged primarily at sentence-final positions, where argument structure becomes fully disambiguated, with the strongest effects in the alpha band. Pairwise classification showed reliable differentiation, most clearly between ditransitive and resultative constructions, while other pairs overlapped. Crucially, the temporal emergence and similarity structure of these effects mirror patterns in recurrent and transformer-based language models, where constructional representations likewise arise during integrative processing stages. These findings support the view that linguistic constructions are neurally encoded as distinct form-meaning mappings, in line with Construction Grammar, and suggest that biological and artificial systems converge on similar representational solutions. More broadly, this convergence is consistent with the idea that learning systems discover stable regions within an underlying representational landscape (recently termed a Platonic representational space) that constrains the emergence of efficient linguistic abstractions.
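To make the decoding analysis concrete, here is a minimal sketch, not the paper's actual pipeline. It assumes preprocessed EEG epochs in a NumPy array (`X_epochs`), a sampling rate (`sfreq`), and per-epoch construction labels (`labels`), all hypothetical names; it extracts mean alpha-band (8–12 Hz) power per channel as features and runs cross-validated pairwise classification for one construction pair.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def alpha_band_power(X_epochs, sfreq, fmin=8.0, fmax=12.0):
    """Mean alpha-band power per channel for each epoch.

    X_epochs: array of shape (n_epochs, n_channels, n_times).
    Returns features of shape (n_epochs, n_channels).
    """
    # Welch power spectral density along the time axis of every epoch.
    freqs, psd = welch(X_epochs, fs=sfreq,
                       nperseg=min(256, X_epochs.shape[-1]), axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    # Average power within the alpha band.
    return psd[..., band].mean(axis=-1)

def pairwise_decoding(X_epochs, labels, sfreq, cond_a, cond_b):
    """Cross-validated accuracy for one construction pair, e.g.
    pairwise_decoding(X, y, 250, "ditransitive", "resultative")."""
    mask = np.isin(labels, [cond_a, cond_b])
    X = alpha_band_power(X_epochs[mask], sfreq)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, labels[mask], cv=5).mean()
```

With four construction types, running `pairwise_decoding` over all six pairs yields the pairwise-accuracy structure the abstract describes, including the ditransitive–resultative contrast reported as most separable.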
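The brain–model comparison can likewise be sketched as a representational similarity analysis. Everything here is an illustrative assumption: `eeg_rdm` (a dissimilarity structure derived, e.g., from pairwise decoding accuracies) and `mean_embeddings` (condition-averaged model embeddings) are placeholders, and Spearman-correlating the two RDMs is one standard way to implement such a comparison, not necessarily the paper's exact method.

```python
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_rdm(mean_embeddings):
    """Condensed representational dissimilarity matrix (RDM) over
    construction types.

    mean_embeddings: array of shape (n_constructions, n_dims), the
    average model embedding per construction (hypothetical input).
    Uses cosine distance between construction means.
    """
    return pdist(mean_embeddings, metric="cosine")

def brain_model_similarity(eeg_rdm, mean_embeddings):
    """Spearman correlation between an EEG-derived RDM (condensed upper
    triangle, e.g. 1 - pairwise decoding accuracy per construction pair)
    and the model RDM. With four constructions each RDM has only six
    entries, so this is a coarse, illustrative comparison."""
    rho, p = spearmanr(eeg_rdm, model_rdm(mean_embeddings))
    return rho, p
```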