🤖 AI Summary
This study addresses the problem that lexical, syntactic, semantic, and reasoning features in large language model (LLM) representations are highly entangled, causing brain encoding models to over-rely on shallow linguistic cues. To mitigate this, we propose a residual decoupling method: leveraging layer-wise LLM features, we iteratively apply residual regression to construct orthogonal, dedicated embeddings for lexical, syntactic, semantic, and reasoning components. This constitutes the first computational framework enabling separable modeling of linguistic and higher-order reasoning representations. We apply these decoupled embeddings to predict human intracranial electrocorticography (ECoG) signals. Results show that the reasoning-dedicated embedding significantly improves neural response prediction accuracy and selectively engages a distributed cortical network—beyond canonical language areas—during the ~350–400 ms post-stimulus time window, revealing a distinct spatiotemporal neural signature of reasoning.
📝 Abstract
Understanding how the human brain progresses from processing simple linguistic inputs to performing high-level reasoning is a fundamental challenge in neuroscience. While modern large language models (LLMs) are increasingly used to model neural responses to language, their internal representations are highly "entangled," mixing information about lexicon, syntax, meaning, and reasoning. This entanglement biases conventional brain encoding analyses toward linguistically shallow features (e.g., lexicon and syntax), making it difficult to isolate the neural substrates of cognitively deeper processes. Here, we introduce a residual disentanglement method that computationally isolates these components. By first probing an LLM to identify feature-specific layers, our method iteratively regresses out lower-level representations to produce four nearly orthogonal embeddings for lexicon, syntax, meaning, and, critically, reasoning. We used these disentangled embeddings to model intracranial electrocorticography (ECoG) recordings from neurosurgical patients listening to natural speech. We show that: 1) The isolated reasoning embedding exhibits unique predictive power, accounting for variance in neural activity not explained by other linguistic features and even extending to the recruitment of visual regions beyond classical language areas. 2) The neural signature for reasoning is temporally distinct, peaking later (~350–400 ms) than signals related to lexicon, syntax, and meaning, consistent with its position atop a processing hierarchy. 3) Standard, non-disentangled LLM embeddings can be misleading, as their predictive success is primarily attributable to linguistically shallow features, masking the more subtle contributions of deeper cognitive processing.
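The core of the residual disentanglement procedure can be illustrated with a minimal sketch. The idea is to order feature levels hierarchically (lexicon → syntax → meaning → reasoning) and, at each level, keep only the residual of a least-squares regression onto all lower-level (already-residualized) embeddings, yielding mutually near-orthogonal components. The variable names, dimensions, and random toy data below are illustrative assumptions, not the paper's actual features or fitting pipeline:

```python
import numpy as np

def regress_out(target, *predictors):
    """Residual of `target` after removing variance linearly
    explained by the stacked `predictors` (plus an intercept)."""
    X = np.hstack(predictors)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # intercept column
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

# Toy stand-ins for layer-wise LLM embeddings (n_tokens x dim),
# with deliberate leakage across levels to mimic entanglement.
rng = np.random.default_rng(0)
n, d = 200, 32
lex = rng.standard_normal((n, d))
syn = 0.5 * lex + rng.standard_normal((n, d))
sem = 0.4 * syn + rng.standard_normal((n, d))
reas = 0.3 * sem + rng.standard_normal((n, d))

# Iterative residualization: each level retains only variance
# not linearly explained by the levels below it.
syn_d = regress_out(syn, lex)
sem_d = regress_out(sem, lex, syn_d)
reas_d = regress_out(reas, lex, syn_d, sem_d)
# lex, syn_d, sem_d, reas_d are now near-orthogonal by construction:
# least-squares residuals are orthogonal to the predictor column space.
```

The resulting dedicated embeddings can then each serve as a separate feature space in a brain encoding model, so that predictive gains from the reasoning component cannot be attributed to shared lower-level variance.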