Language and Thought: The View from LLMs

šŸ“… 2025-05-19
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This paper investigates whether language is an essential prerequisite for human-like reasoning, testing Dennett's hypothesis that language engenders a novel kind of mind. Method: Rather than training new models, the study adopts a dual philosophical and computational approach to systematically analyze cross-domain reasoning behavior in existing large language models (LLMs) under disembodied conditions, i.e., without grounded perceptual input. Contribution/Results: The paper formally introduces and substantiates the claim that language's capacity for abstract representation and efficient symbolic encoding constitutes a foundational cognitive infrastructure, one that makes complex reasoning computationally tractable. Empirical analysis reveals that LLMs exhibit limited yet robust cross-domain reasoning capabilities. These findings provide the first AI-behavioral evidence supporting the "language-as-mind-reconstructor" hypothesis, bridging the explanatory gap between AI performance and evolutionary cognitive theory and demonstrating language's constitutive, rather than merely instrumental, role in human cognition.

šŸ“ Abstract
Daniel Dennett speculated in *Kinds of Minds* (1996): "Perhaps the kind of mind you get when you add language to it is so different from the kind of mind you can have without language that calling them both minds is a mistake." Recent work in AI can be seen as testing Dennett's thesis by exploring the performance of AI systems with and without linguistic training. I argue that the success of Large Language Models at inferential reasoning, limited though it may be, supports Dennett's radical view about the effect of language on thought. I suggest it is the abstractness and efficiency of linguistic encoding that lies behind the capacity of LLMs to perform inferences across a wide range of domains. In a slogan, language makes inference computationally tractable. I assess what these results in AI indicate about the role of language in the workings of our own biological minds.
Problem

Research questions and friction points this paper is trying to address.

- Testing Dennett's thesis on language's impact on mind differentiation
- Exploring LLMs' inferential reasoning success across diverse domains
- Assessing language's role in biological vs. artificial minds
Innovation

Methods, ideas, or system contributions that make the work stand out.

- LLMs test Dennett's thesis on language and thought
- Linguistic encoding enables cross-domain inference in LLMs
- Language makes inference computationally tractable in AI