AI Summary
This paper investigates whether language constitutes an essential prerequisite for human-like reasoning, testing Dennett's hypothesis that language engenders a novel kind of mind. Method: Rather than training new models, the study adopts a dual philosophical and computational approach to systematically analyze cross-domain reasoning behavior in existing large language models (LLMs) under disembodied conditions, i.e., without grounded perceptual input. Contribution/Results: The paper formally introduces and substantiates the claim that language's capacity for abstract representation and efficient symbolic encoding constitutes a foundational cognitive infrastructure enabling computationally tractable complex reasoning. Empirical analysis reveals that LLMs exhibit limited yet robust cross-domain reasoning capabilities. These findings provide the first AI-behavioral evidence supporting the "language-as-mind-reconstructor" hypothesis, bridging the explanatory gap between AI performance and evolutionary cognitive theory, and demonstrating language's constitutive, rather than merely instrumental, role in human cognition.
Abstract
Daniel Dennett speculated in *Kinds of Minds* (1996): "Perhaps the kind of mind you get when you add language to it is so different from the kind of mind you can have without language that calling them both minds is a mistake." Recent work in AI can be seen as testing Dennett's thesis by exploring the performance of AI systems with and without linguistic training. I argue that the success of Large Language Models at inferential reasoning, limited though it may be, supports Dennett's radical view about the effect of language on thought. I suggest it is the abstractness and efficiency of linguistic encoding that lies behind the capacity of LLMs to perform inferences across a wide range of domains. In a slogan, language makes inference computationally tractable. I assess what these results in AI indicate about the role of language in the workings of our own biological minds.