🤖 AI Summary
This study investigates whether truthfulness representations in large language models follow a single unified direction or span a spectrum from general to domain-specific patterns. To test the proposed truthfulness spectrum hypothesis, the authors systematically evaluate five categories of truthful and deceptive behavior using linear probing, Mahalanobis cosine similarity, concept erasure, causal intervention, and post-training geometric analysis. The findings reveal, for the first time, a spectral structure in which general and domain-specific truthfulness directions coexist: linear probes generalize well across most domains but fail specifically on sycophantic and expectation-inverted lies; joint training across all domains restores performance; geometric distances between probe directions strongly predict generalization (R²=0.98); and post-training shifts the representation of sycophantic lies significantly away from other truthfulness types.
📝 Abstract
Large language models (LLMs) have been reported to linearly encode truthfulness, yet recent work questions this finding's generality. We reconcile these views with the truthfulness spectrum hypothesis: the representational space contains directions ranging from broadly domain-general to narrowly domain-specific. To test this hypothesis, we systematically evaluate probe generalization across five truth types (definitional, empirical, logical, fictional, and ethical), sycophantic and expectation-inverted lying, and existing honesty benchmarks. Linear probes generalize well across most domains but fail on sycophantic and expectation-inverted lying. Yet training on all domains jointly recovers strong performance, confirming that domain-general directions exist despite poor pairwise transfer. The geometry of probe directions explains these patterns: Mahalanobis cosine similarity between probes near-perfectly predicts cross-domain generalization (R²=0.98). Concept-erasure methods further isolate truth directions that are (1) domain-general, (2) domain-specific, or (3) shared only across particular domain subsets. Causal interventions reveal that domain-specific directions steer more effectively than domain-general ones. Finally, post-training reshapes truth geometry, pushing sycophantic lying further from other truth types, suggesting a representational basis for chat models' sycophantic tendencies. Together, our results support the truthfulness spectrum hypothesis: truth directions of varying generality coexist in representational space, with post-training reshaping their geometry. Code for all experiments is provided at https://github.com/zfying/truth_spec.
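The core claim that probe-direction geometry predicts cross-domain transfer can be illustrated with a toy sketch. The code below is not from the paper: it trains simple mass-mean probes on synthetic "activations" for two hypothetical domains and uses plain cosine similarity as a simplified stand-in for the paper's Mahalanobis version; all data, dimensions, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(direction, n=500, noise=1.0):
    """Toy 'activations': the label shifts each point along a domain truth direction."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, noise, (n, direction.size)) + np.outer(2 * y - 1, direction)
    return X, y

def mean_diff_probe(X, y):
    """Mass-mean probe: direction from the false-class mean to the true-class mean."""
    return X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def transfer_acc(w, X, y):
    """Classify by the sign of the projection onto a probe trained elsewhere."""
    return float(((X @ w > 0).astype(int) == y).mean())

dim = 16
d_general = np.zeros(dim)
d_general[0] = 2.0                                  # shared truth direction
d_specific = d_general + rng.normal(0.0, 0.3, dim)  # nearby domain-specific variant

Xa, ya = make_domain(d_general)
Xb, yb = make_domain(d_specific)

w_a, w_b = mean_diff_probe(Xa, ya), mean_diff_probe(Xb, yb)
sim = cosine(w_a, w_b)           # probe-direction alignment
acc = transfer_acc(w_a, Xb, yb)  # cross-domain generalization

print(f"probe cosine similarity: {sim:.2f}, cross-domain accuracy: {acc:.2f}")
```

In this toy setup, aligned probe directions (high cosine similarity) come with high cross-domain accuracy; making `d_specific` nearly orthogonal to `d_general` collapses both, mirroring the abstract's pattern of poor pairwise transfer to sycophantic and expectation-inverted lying.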