The Outputs of Large Language Models are Meaningless

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses whether large language model (LLM) outputs possess literal semantic meaning, arguing that LLMs lack intentionality, a necessary condition for semantic content, and that their outputs are therefore devoid of truth-evaluable meaning. Method: Drawing on the philosophy of language (theories of intentionality, semantic externalism, and semantic internalism), the study employs conceptual analysis and thought experiments to argue that LLMs, lacking genuine mental representations and referential intent, cannot generate semantically meaningful utterances. Contribution/Results: The paper explains the illusion of meaning in LLM outputs as arising from statistical regularities in the training data and from human contextual projection, and clarifies how such semantically void outputs can nevertheless retain cognitive utility. The analysis also engages and rebuts objections from both externalist and internalist semantic frameworks, yielding a philosophically rigorous account of LLMs that remains grounded in AI practice.

📝 Abstract
In this paper, we offer a simple argument for the conclusion that the outputs of large language models (LLMs) are meaningless. Our argument is based on two key premises: (a) that certain kinds of intentions are needed in order for LLMs' outputs to have literal meanings, and (b) that LLMs cannot plausibly have the right kinds of intentions. We defend this argument from various types of responses, for example, the semantic externalist argument that deference can be assumed to take the place of intentions and the semantic internalist argument that meanings can be defined purely in terms of intrinsic relations between concepts, such as conceptual roles. We conclude the paper by discussing why, even if our argument is sound, the outputs of LLMs nevertheless seem meaningful and can be used to acquire true beliefs and even knowledge.
Problem

Research questions and friction points the paper addresses.

Argues that LLM outputs lack literal meaning because LLMs lack the requisite intentions
Challenges semantic externalist and internalist counterarguments about meaning
Explains why seemingly meaningful LLM outputs can still enable knowledge acquisition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offers a simple two-premise argument that LLM outputs are meaningless
Rejects semantic externalist and internalist responses to the argument
Explains the perceived meaningfulness of LLM outputs despite their argued meaninglessness