🤖 AI Summary
This work investigates how the "truth direction" (a linear direction in large language model (LLM) representations along which true and false statements separate) generalizes across diverse dialogue formats, in the context of linear-probe-based lie detection. We find that the truth direction generalizes robustly when lies appear at the end of a dialogue, but degrades substantially when lies appear earlier, revealing a strong sensitivity to dialogue structure. To mitigate this, we propose a simple yet effective intervention: appending a fixed key phrase to the end of each dialogue. This stabilizes the truth direction and improves its cross-format transferability; experiments show the method improves early-lie detection accuracy by up to 32 percentage points (absolute) in long dialogues. This is the first systematic characterization of the format dependence of truth directions in LLMs, and the approach offers an interpretable, lightweight, deployment-friendly path toward robust probe-based lie detectors for LLMs.
📝 Abstract
Several recent works argue that LLMs have a universal truth direction: true and false statements are linearly separable in the model's activation space. Linear probes trained on a single hidden state of the model have been shown to generalize across a range of topics and may even be usable for lie detection in LLM conversations. In this work, we explore how this truth direction generalizes across different conversational formats. We find good generalization between short conversations that end with a lie, but poor generalization to longer formats where the lie appears earlier in the input prompt. We propose a solution that significantly improves this type of generalization: adding a fixed key phrase at the end of each conversation. Our results highlight the challenges in building reliable LLM lie detectors that generalize to new settings.
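The probing setup described above can be sketched in a few lines. The sketch below uses synthetic vectors with a planted "truth direction" in place of real model activations; the dimensionality, noise level, and `KEY_PHRASE` suffix are illustrative assumptions, not the paper's actual model, prompts, or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # stand-in for the hidden-state dimensionality

# Plant a unit-norm "truth direction" in the synthetic activation space.
truth_dir = rng.normal(size=d)
truth_dir /= np.linalg.norm(truth_dir)

def synthetic_hidden_state(is_true: bool) -> np.ndarray:
    # A "hidden state" = isotropic noise + a signed component along truth_dir.
    sign = 1.0 if is_true else -1.0
    return 0.5 * rng.normal(size=d) + sign * truth_dir

labels = rng.integers(0, 2, size=400).astype(bool)
X = np.stack([synthetic_hidden_state(y) for y in labels])

# A linear probe on a single hidden state, as in the probing literature.
probe = LogisticRegression().fit(X[:300], labels[:300])
acc = probe.score(X[300:], labels[300:])
print(f"held-out probe accuracy: {acc:.2f}")

# The proposed fix: append a fixed key phrase before extracting the
# hidden state to probe. The phrase here is a hypothetical placeholder.
KEY_PHRASE = "Is the above statement true?"

def with_suffix(conversation: str) -> str:
    return conversation + "\n" + KEY_PHRASE
```

In the actual setting, `X` would hold hidden states read out at the final token of each (suffixed) conversation, so the fixed key phrase gives the probe a consistent readout position regardless of where the lie occurred in the dialogue.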