Truth-value judgment in language models: belief directions are context sensitive

📅 2024-04-29
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether “truth directions”—latent-space representations of sentence truthfulness in large language models (LLMs)—exhibit context sensitivity and elucidates their underlying mechanisms. Methodologically, we employ directional probing, latent representation editing (causal intervention), cross-layer attribution analysis, and truth-consistency evaluation to systematically quantify truth directions’ responsiveness to supporting/contradictory premises, their stability across layers and contexts, and their causal role in inference. Our key contributions are threefold: (1) truth directions exhibit significant inter-layer heterogeneity and model-specificity; (2) they are highly susceptible to irrelevant contextual interference, with error patterns dynamically varying across network depth, architectural design, and data distribution; and (3) causal intervention confirms that truth directions serve as critical, context-modulated mediators governing inference outcomes. These findings establish a novel, interpretable framework for analyzing how LLMs internally represent truth and perform reasoning.

📝 Abstract
Recent work has demonstrated that the latent spaces of large language models (LLMs) contain directions predictive of the truth of sentences. Multiple methods recover such directions and build probes that are described as getting at a model's "knowledge" or "beliefs". We investigate this phenomenon, looking closely at the impact of context on the probes. Our experiments establish where in the LLM the probe's predictions can be described as being conditional on the preceding (related) sentences. Specifically, we quantify the responsiveness of the probes to the presence of (negated) supporting and contradicting sentences, and score the probes on their consistency. We also perform a causal intervention experiment, investigating whether moving the representation of a premise along these belief directions influences the position of the hypothesis along that same direction. We find that the probes we test are generally context sensitive, but that contexts which should not affect the truth often still impact the probe outputs. Our experiments show that the type of error depends on the layer, the (type of) model, and the kind of data. Finally, our results suggest that belief directions are (one of the) causal mediators in the inference process that incorporates in-context information.
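To make the probing setup concrete, below is a minimal sketch of one common way to recover a "truth direction": take hidden states of true and false sentences at some layer and use the difference of class means as the direction. The vectors here are synthetic stand-ins for LLM hidden states, and all names (`true_dir`, `synth_states`, the dimensionality) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # stand-in hidden-state dimensionality

# Ground-truth direction used only to generate synthetic data.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

def synth_states(label, n=200):
    # Hidden states cluster on opposite sides of a hyperplane along true_dir.
    base = rng.normal(size=(n, d))
    return base + (2.0 if label else -2.0) * true_dir

X = np.vstack([synth_states(1), synth_states(0)])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Mass-mean probe: the belief direction is the difference of class means.
direction = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
direction /= np.linalg.norm(direction)

# Probe prediction: which side of the threshold the projection falls on.
scores = X @ direction
preds = (scores > scores.mean()).astype(float)
accuracy = (preds == y).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

On real data, the interesting question the paper asks is how `scores` shift when supporting or contradicting context precedes each sentence, and whether those shifts are consistent with the context's actual relevance.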
Problem

Research questions and friction points this paper is trying to address.

Investigates context sensitivity of truth-value probes in LLMs
Measures consistency errors in truth-value judgments under varying contexts
Examines causal role of truth directions in model inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probes measure truth sensitivity to context
Causal intervention tests truth-value directions
Layer-dependent errors reveal model inconsistencies
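The causal intervention can be sketched as a simple latent edit: project out a premise representation's component along the belief direction and set it to a chosen value, then check whether downstream hypothesis representations move along the same direction. This is a hedged illustration with synthetic vectors; `edit`, `belief_dir`, and `alpha` are assumed names, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
belief_dir = rng.normal(size=d)
belief_dir /= np.linalg.norm(belief_dir)

# Hypothetical premise hidden state at some layer.
premise = rng.normal(size=d)

def edit(h, direction, alpha):
    # Remove the current component along the (unit) direction,
    # then set that component to alpha, leaving the rest unchanged.
    return h - (h @ direction) * direction + alpha * direction

flipped = edit(premise, belief_dir, -3.0)
print(premise @ belief_dir, flipped @ belief_dir)  # second value is -3.0
```

In the actual experiment, such an edit is applied to the premise during the forward pass, and the causal claim is supported if the hypothesis's projection onto the same direction shifts accordingly.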
Stefan F. Schouten
Vrije Universiteit Amsterdam
Peter Bloem
Vrije Universiteit Amsterdam
Machine Learning, Semantic Web, Knowledge Graphs, Kolmogorov Complexity, Minimum Description Length
Ilia Markov
Vrije Universiteit Amsterdam
Piek Vossen
Vrije Universiteit Amsterdam