🤖 AI Summary
This study investigates whether "truth directions", latent-space directions in large language models (LLMs) that predict the truth of sentences, are sensitive to the surrounding context, and where in the model that sensitivity arises. Methodologically, we combine directional probing, causal interventions that move latent representations along the recovered belief directions, layer-wise analysis, and consistency scoring to quantify how probe outputs respond to supporting and contradicting premises. The key findings are threefold: (1) the probes are generally context sensitive, and we localize where in the network their predictions become conditional on preceding sentences; (2) they are nonetheless susceptible to irrelevant context, with the type of error depending on the layer, the model, and the data; and (3) causal interventions suggest that belief directions act as one of the mediators through which in-context information enters the inference process. Together, these results provide an interpretable account of how LLMs internally represent and update the truth of sentences in context.
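The probing step described above can be pictured with a minimal sketch. The mass-mean (difference-of-means) probe, the synthetic activations, and all variable names below are illustrative assumptions rather than the paper's exact recipe; in practice the activations would be hidden states extracted from an LLM at a chosen layer.

```python
# Hypothetical sketch: recovering a "truth direction" from layer activations
# with a difference-of-means probe. Shapes and data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in activations: (n_sentences, hidden_dim) hidden states at one layer,
# with binary truth labels. Real activations would come from an LLM.
hidden_dim = 768
n = 200
labels = rng.integers(0, 2, size=n)              # 1 = true sentence, 0 = false
true_shift = rng.normal(0.0, 1.0, hidden_dim)    # synthetic signal for true items
acts = rng.normal(0.0, 1.0, (n, hidden_dim)) + np.outer(labels, true_shift)

# Mass-mean probe: the truth direction is the vector from the mean "false"
# activation to the mean "true" activation.
direction = acts[labels == 1].mean(0) - acts[labels == 0].mean(0)
direction /= np.linalg.norm(direction)

# Probe prediction: project each activation onto the direction and threshold.
scores = acts @ direction
preds = (scores > scores.mean()).astype(int)
print("probe accuracy:", (preds == labels).mean())
```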
📝 Abstract
Recent work has demonstrated that the latent spaces of large language models (LLMs) contain directions predictive of the truth of sentences. Multiple methods recover such directions and build probes that are described as getting at a model's "knowledge" or "beliefs". We investigate this phenomenon, looking closely at the impact of context on the probes. Our experiments establish where in the LLM the probe's predictions can be described as being conditional on the preceding (related) sentences. Specifically, we quantify the responsiveness of the probes to the presence of (negated) supporting and contradicting sentences, and score the probes on their consistency. We also perform a causal intervention experiment, investigating whether moving the representation of a premise along these belief directions influences the position of the hypothesis along that same direction. We find that the probes we test are generally context sensitive, but that contexts which should not affect the truth often still impact the probe outputs. Our experiments show that the type of error depends on the layer, the (type of) model, and the kind of data. Finally, our results suggest that belief directions are (one of the) causal mediators in the inference process that incorporates in-context information.
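To make the causal-intervention experiment concrete, here is a hedged sketch of the general idea: add a multiple of a truth/belief direction to the premise tokens' hidden states at one layer, then check whether the hypothesis token's projection onto that same direction shifts. The model (`gpt2`), the intervention and readout layers, the random stand-in direction, and the example sentences are all assumptions for illustration, not the paper's setup.

```python
# Hedged sketch of a causal intervention along a truth/belief direction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                              # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6                                    # layer at which to intervene (assumed)
hidden_dim = model.config.hidden_size
direction = torch.randn(hidden_dim)              # stand-in for a learned truth direction
direction = direction / direction.norm()
alpha = 5.0                                      # intervention strength (assumed)

premise = "The city of Paris is in France."
hypothesis = " Therefore, Paris is in Europe."
inputs = tok(premise + hypothesis, return_tensors="pt")
premise_len = tok(premise, return_tensors="pt")["input_ids"].shape[1]

def shift_premise(module, args, output):
    # GPT-2 blocks return a tuple; the hidden states are its first element.
    hidden = output[0]
    hidden[:, :premise_len, :] += alpha * direction
    return (hidden,) + output[1:]

def hypothesis_projection():
    # Projection of the last (hypothesis) token's representation, read out a
    # few layers above the intervention, onto the same direction.
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return float(out.hidden_states[layer_idx + 4][0, -1] @ direction)

baseline = hypothesis_projection()
handle = model.transformer.h[layer_idx].register_forward_hook(shift_premise)
intervened = hypothesis_projection()
handle.remove()

print("hypothesis projection, baseline:   ", baseline)
print("hypothesis projection, intervened: ", intervened)
```

A shift in the intervened projection relative to the baseline would be consistent with the belief direction mediating how premise information reaches the hypothesis, which is the kind of effect the abstract's causal experiment probes for.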