Ambiguity Collapse by LLMs: A Taxonomy of Epistemic Risks

📅 2026-03-06
🤖 AI Summary
This study addresses a critical cognitive risk in large language models (LLMs): their tendency to collapse semantically ambiguous, value-laden terms into single interpretations, thereby neglecting the human practice of meaning-making through negotiation. Introducing the novel concept of “ambiguity collapse,” the paper proposes a three-tiered framework—encompassing process, output, and ecosystem dimensions—to systematically categorize associated cognitive risks. Drawing on philosophy, linguistics, and human-computer interaction theories, the authors employ case studies to demonstrate real-world harms. Challenging the dominant AI paradigm that prioritizes deterministic outputs, the work advocates for preserving ambiguity as a valuable cognitive resource and outlines multi-level mitigation strategies spanning model training, deployment, interface design, and prompt management, thereby establishing a theoretical foundation for responsibly handling semantic indeterminacy.

📝 Abstract
Large language models (LLMs) are increasingly used to make sense of ambiguous, open-textured, value-laden terms. Platforms routinely rely on LLMs for content moderation, asking them to label text based on disputed concepts like "hate speech" or "incitement"; hiring managers may use LLMs to rank who counts as "qualified"; and AI labs increasingly train models to self-regulate under constitutional-style ambiguous principles such as "biased" or "legitimate". This paper introduces ambiguity collapse: a phenomenon that occurs when an LLM encounters a term that genuinely admits multiple legitimate interpretations, yet produces a singular resolution, in ways that bypass the human practices through which meaning is ordinarily negotiated, contested, and justified. Drawing on interdisciplinary accounts of ambiguity as a productive epistemic resource, we develop a taxonomy of the epistemic risks posed by ambiguity collapse at three levels: process (foreclosing opportunities to deliberate, develop cognitive skills, and shape contested terms), output (distorting the concepts and reasons agents act upon), and ecosystem (reshaping shared vocabularies, interpretive norms, and how concepts evolve over time). We illustrate these risks through three case studies, and conclude by sketching multi-layer mitigation principles spanning training, institutional deployment design, interface affordances, and the management of underspecified prompts, with the goal of designing systems that surface, preserve, and responsibly govern ambiguity.
Problem

Research questions and friction points this paper is trying to address.

ambiguity collapse
epistemic risks
large language models
interpretive norms
value-laden terms
Innovation

Methods, ideas, or system contributions that make the work stand out.

ambiguity collapse
epistemic risk
large language models
conceptual negotiation
value-laden ambiguity