On the Universal Truthfulness Hyperplane Inside LLMs

📅 2024-07-11
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently generate factually incorrect responses, and prior probes of their internal representations often fail to generalize beyond their training distribution. This work investigates whether a universal "truthfulness hyperplane"—a linear decision boundary in hidden-state space that separates a model's factually correct outputs from its incorrect ones—exists within LLMs. The authors train a linear classifier jointly on hidden-state representations from a diverse collection of over 40 datasets and evaluate its cross-task, cross-domain, and in-domain generalization. The results indicate that the diversity of the training datasets matters considerably more than the sheer number of samples for learning a generalizable truth-discriminative boundary, supporting the hypothesis that such a universal hyperplane may exist and that LLMs encode a representation-level signal of factual correctness.

📝 Abstract
While large language models (LLMs) have demonstrated remarkable abilities across various fields, hallucination remains a significant challenge. Recent studies have explored hallucinations through the lens of internal representations, proposing mechanisms to decipher LLMs’ adherence to facts. However, these approaches often fail to generalize to out-of-distribution data, leading to concerns about whether internal representation patterns reflect fundamental factual awareness, or only overfit spurious correlations on the specific datasets. In this work, we investigate whether a universal truthfulness hyperplane that distinguishes the model’s factually correct and incorrect outputs exists within the model. To this end, we scale up the number of training datasets and conduct an extensive evaluation – we train the truthfulness hyperplane on a diverse collection of over 40 datasets and examine its cross-task, cross-domain, and in-domain generalization. Our results indicate that increasing the diversity of the training datasets significantly enhances the performance in all scenarios, while the volume of data samples plays a less critical role. This finding supports the optimistic hypothesis that a universal truthfulness hyperplane may indeed exist within the model, offering promising directions for future research.
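The core method described in the abstract—jointly training one linear probe on hidden states pooled from many heterogeneous datasets, then testing zero-shot transfer to an unseen task—can be sketched with synthetic data. This is a toy illustration, not the paper's implementation: the feature dimensionality, dataset generator, and the assumption that task-specific variation is orthogonal to a shared "truth direction" are all illustrative stand-ins for real LLM hidden-layer activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # toy hidden-state dimensionality (real LLM states are far larger)

# Hypothetical shared "truth direction": the normal of the universal hyperplane.
w_true = rng.normal(size=d)

def make_dataset(n_samples):
    """One synthetic 'task': a task-specific offset orthogonal to w_true,
    so the same hyperplane separates true from false on every task."""
    shift = rng.normal(scale=2.0, size=d)
    shift -= (shift @ w_true) / (w_true @ w_true) * w_true  # strip truth component
    X = rng.normal(size=(n_samples, d)) + shift              # stand-in hidden states
    y = (X @ w_true > 0).astype(int)                         # factual-correctness label
    return X, y

# Jointly train a single linear probe on several heterogeneous 'datasets',
# mirroring the paper's training over 40+ diverse datasets.
train_sets = [make_dataset(500) for _ in range(5)]
X_train = np.vstack([X for X, _ in train_sets])
y_train = np.concatenate([y for _, y in train_sets])
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Zero-shot transfer: evaluate on a task never seen during training.
# Accuracy stays high because the truth direction is shared across tasks.
X_test, y_test = make_dataset(500)
acc = probe.score(X_test, y_test)
```

In this toy setup, transfer accuracy remains high because the label-relevant direction is shared while per-task shifts are nuisance variation—the same intuition the paper tests by measuring whether a probe trained on diverse datasets transfers across tasks and domains.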
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Factuality
Novel Situations

Innovation

Methods, ideas, or system contributions that make the work stand out.

Fact Judgement Mechanism
Large Language Models
Credibility Enhancement