🤖 AI Summary
This work investigates whether linguistic structure induces language-specific causal reasoning biases in large language models (LLMs). To this end, we introduce BICAUSE, the first semantically aligned bilingual causal reasoning dataset, and employ attention visualization, inter-layer representational similarity analysis (CKA/RSA), and an evaluation framework inspired by cognitive linguistics to systematically compare LLMs' attention patterns, word-order preferences, and representation dynamics across Chinese and English. Our key findings are threefold: (1) LLMs exhibit typologically consistent attention biases; (2) rigid transfer of source-language word-order preferences significantly impairs causal reasoning performance in Chinese; and (3) successful causal inference is associated with high cross-lingual convergence of hidden-layer representations, indicating language-invariant semantic abstraction. These results provide the first empirical evidence linking linguistic typology to LLM reasoning biases and offer a novel paradigm for studying cognitive plasticity and structural bias in foundation models.
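The summary names CKA among the representational-similarity tools used to compare hidden states across languages. As a concrete reference point, below is a minimal sketch of linear CKA between two sets of layer representations (e.g., the same layer's hidden states for aligned Chinese and English sentences). The function name and the NumPy-based setup are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape (n_samples, hidden_dim).

    Rows of X and Y are assumed to be paired, e.g. one layer's hidden states
    for semantically aligned Chinese and English sentences (an assumption of
    this sketch, not the paper's exact protocol).
    """
    # Center each feature dimension so the induced Gram matrices are centered.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style formulation for linear kernels (Kornblith et al., 2019).
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Toy check: identical representations yield CKA = 1.0 (up to rounding).
rng = np.random.default_rng(0)
H = rng.normal(size=(256, 64))
assert abs(linear_cka(H, H) - 1.0) < 1e-8
```

Higher values indicate more similar representational geometry; finding (3) corresponds to such scores rising between languages when causal inference succeeds.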
📝 Abstract
Language is not only a tool for communication but also a medium for human cognition and reasoning. If, as linguistic relativity suggests, the structure of language shapes cognitive patterns, then large language models (LLMs) trained on human language may also internalize the habitual logical structures embedded in different languages. To examine this hypothesis, we introduce BICAUSE, a structured bilingual dataset for causal reasoning, which includes semantically aligned Chinese and English samples in both forward and reversed causal forms. Our study reveals three key findings: (1) LLMs exhibit typologically aligned attention patterns, focusing more on causes and sentence-initial connectives in Chinese, while showing a more balanced distribution in English. (2) Models internalize language-specific preferences for causal word order and often rigidly apply them to atypical inputs, leading to degraded performance, especially in Chinese. (3) When causal reasoning succeeds, model representations converge toward semantically aligned abstractions across languages, indicating a shared understanding beyond surface form. Overall, these results suggest that LLMs not only mimic surface linguistic forms but also internalize the reasoning biases shaped by language. This phenomenon, rooted in cognitive linguistic theory, is here empirically verified for the first time through structural analysis of model internals.
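Finding (1) concerns how much attention mass lands on cause spans and sentence-initial connectives versus the rest of the sentence. The sketch below shows one plausible way to measure the share of attention a marked span receives in a Hugging Face causal LM; the model name, function name, and the naive span-matching heuristic are assumptions for illustration, not the paper's actual analysis pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-1.5B"  # placeholder; any bilingual causal LM would do
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def span_attention_share(sentence: str, span: str) -> float:
    """Fraction of total attention mass (averaged over layers and heads)
    received by the tokens of `span` within `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_attentions=True)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer.
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]   # -> (seq, seq)
    received = attn.sum(dim=0)                               # attention each token receives
    # Naive subsequence match; real code must handle subword/whitespace mismatches.
    ids = enc["input_ids"][0].tolist()
    span_ids = tok(span, add_special_tokens=False)["input_ids"]
    for i in range(len(ids) - len(span_ids) + 1):
        if ids[i:i + len(span_ids)] == span_ids:
            idx = list(range(i, i + len(span_ids)))
            return float(received[idx].sum() / received.sum())
    raise ValueError("span not found after tokenization")

# Compare attention on the cause clause in aligned Chinese/English sentences.
# (Leading space in the English span keeps subword tokenization consistent.)
print(span_attention_share("因为下雨，比赛取消了。", "因为下雨"))
print(span_attention_share("The match was canceled because it rained.", " because it rained"))
```

Under the paper's findings, one would expect the cause span to attract a larger share of attention in the Chinese sentence than in its English counterpart; attention-mass-received is only one of several reasonable ways to operationalize this.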