Representational Stability of Truth in Large Language Models

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the stability of internal representations in large language models (LLMs) when distinguishing between true, false, and truth-value-ambiguous statements. Method: The authors introduce "representational stability", a metric quantifying the robustness of truth decision boundaries under perturbations to the operational definition of truth, and apply linear probing to hidden-layer activations across 16 open-source LLMs. Boundary shifts are measured via controlled label perturbations, contrasting unfamiliar neither-true-nor-false statements (fact-like claims about entities likely absent from training data) with familiar ones (claims drawn from well-known fiction). Contribution/Results: Unfamiliar neither statements induce up to 40% prediction flips, whereas familiar fictional statements yield ≤ 8.2% flips, indicating that LLM truth judgments rely more on factual familiarity than on linguistic form. These findings suggest that LLM truth representations are grounded in knowledge memorization rather than logical semantics, and they establish an interpretable, representation-based paradigm for evaluating AI trustworthiness.
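A minimal sketch of the flip-rate measure described in the summary: train one linear probe under the original truth labelling and one under a perturbed labelling, then count how many statements change their predicted truth value. The function name, variable names, and synthetic data below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of a flip-rate measure: names (flip_rate, X, y_orig, y_pert)
# and the synthetic data are illustrative, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def flip_rate(X, y_orig, y_pert, X_eval):
    """Train one linear probe per labelling and count how many
    evaluation statements change their predicted truth value."""
    probe_orig = LogisticRegression(max_iter=1000).fit(X, y_orig)
    probe_pert = LogisticRegression(max_iter=1000).fit(X, y_pert)
    return float(np.mean(probe_orig.predict(X_eval) != probe_pert.predict(X_eval)))

# Toy stand-in for hidden-layer activations (n_statements x hidden_dim)
# and binary labels (1 = "true", 0 = "not true").
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))
y_orig = rng.integers(0, 2, size=300)
y_pert = y_orig.copy()
neither_idx = rng.choice(300, size=60, replace=False)   # hypothetical "neither" statements
y_pert[neither_idx] = 1 - y_pert[neither_idx]           # perturb the definition of truth
print(f"flipped truth judgements: {flip_rate(X, y_orig, y_pert, X):.1%}")
```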

📝 Abstract
Large language models (LLMs) are widely used for factual tasks such as "What treats asthma?" or "What is the capital of Latvia?". However, it remains unclear how stably LLMs encode distinctions between true, false, and neither-true-nor-false content in their internal probabilistic representations. We introduce representational stability as the robustness of an LLM's veracity representations to perturbations in the operational definition of truth. We assess representational stability by (i) training a linear probe on an LLM's activations to separate true from not-true statements and (ii) measuring how its learned decision boundary shifts under controlled label changes. Using activations from sixteen open-source models and three factual domains, we compare two types of neither statements. The first are fact-like assertions about entities we believe to be absent from any training data. We call these unfamiliar neither statements. The second are nonfactual claims drawn from well-known fictional contexts. We call these familiar neither statements. The unfamiliar statements induce the largest boundary shifts, producing up to 40% flipped truth judgements in fragile domains (such as word definitions), while familiar fictional statements remain more coherently clustered and yield smaller changes (≤ 8.2%). These results suggest that representational stability stems more from epistemic familiarity than from linguistic form. More broadly, our approach provides a diagnostic for auditing and training LLMs to preserve coherent truth assignments under semantic uncertainty, rather than optimizing for output accuracy alone.
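The probing setup in step (i) of the abstract can be approximated as follows: run each statement through an open-source model, keep one hidden-layer activation (here, the last-token vector at a mid layer), and fit a linear classifier on top. This is a sketch under assumptions; the model name, layer index, and last-token pooling are placeholders rather than the paper's configuration.

```python
# Sketch of activation extraction + linear probing. The model name ("gpt2"),
# layer index, and last-token pooling are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"   # stand-in for any open-source LLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()

def activation(statement: str, layer: int = 6) -> torch.Tensor:
    """Return the last-token hidden state of `statement` at one layer."""
    inputs = tok(statement, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1]   # shape: (hidden_dim,)

statements = ["Riga is the capital of Latvia.",    # true
              "Paris is the capital of Latvia.",   # false
              "Berlin is the capital of Latvia.",  # false
              "Latvia is a country in Europe."]    # true
labels = [1, 0, 0, 1]
X = torch.stack([activation(s) for s in statements]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)  # "true" vs "not true" probe
```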
Problem

Research questions and friction points this paper is trying to address.

Assessing stability of truth representations in LLMs under definitional perturbations
Measuring how decision boundaries shift between true/false/unfamiliar content
Evaluating the influence of epistemic familiarity versus linguistic form on truth judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing model activations with linear classifiers
Measuring decision boundary shifts under label perturbations (see the sketch after this list)
Assessing truth representation stability across domains
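For the boundary-shift measurement in the second bullet above, one simple statistic is the angle between the weight vectors of probes trained under the original and the perturbed labelling. This is an illustrative choice, not necessarily the paper's shift measure; boundary_angle is a hypothetical helper, and the example weights stand in for the probes' coef_ vectors.

```python
# Illustrative boundary-shift measure: angle between the normal vectors
# of two linear probes trained under different labellings.
import numpy as np

def boundary_angle(w_a: np.ndarray, w_b: np.ndarray) -> float:
    """Angle in degrees between two probe weight vectors."""
    cos = np.dot(w_a, w_b) / (np.linalg.norm(w_a) * np.linalg.norm(w_b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g. w_a = probe_orig.coef_[0]; w_b = probe_pert.coef_[0]
print(boundary_angle(np.array([1.0, 0.0]), np.array([0.8, 0.6])))  # ~36.9 degrees
```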
Authors
Samantha Dies (Northeastern University)
Courtney Maynard (Khoury College of Computer Sciences, Northeastern University, 440 Huntington Ave, #202, Boston, MA 02115 USA)
Germans Savcisens (Khoury College of Computer Sciences, Northeastern University, 440 Huntington Ave, #202, Boston, MA 02115 USA)
Tina Eliassi-Rad (Professor & The Inaugural Joseph E. Aoun Chair, Northeastern University)
Research areas: Data Mining, Machine Learning, Network Science, Complex Systems, AI & Society