Truth Neurons

πŸ“… 2025-05-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Large language models (LLMs) frequently generate hallucinated content, yet the internal mechanisms that represent truthfulness remain poorly understood, which limits reliability and safety. Method: Using the TruthfulQA benchmark, the paper probes truth encoding in several mainstream LLMs through causal mediation analysis, targeted suppression of neuron activations, and layer-wise analysis of neuron distributions across model scales. Contribution/Results: The authors identify and validate β€œtruth neurons”: a distributed set of neurons that encode truthfulness in a subject-agnostic manner, appear consistently across models of varying scales, and are selectively engaged during factual judgment. Selectively suppressing these neurons significantly degrades performance on TruthfulQA, FEVER, and BoolQ, confirming their functional role and showing that the truthfulness mechanism is not tied to a single dataset. These findings lay groundwork for improving model interpretability, trustworthiness, and controllability through truth-aligned, neuron-level interventions.
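A generic sketch of the kind of neuron-level attribution such an analysis relies on is shown below: it scores MLP neurons by gradient Γ— activation toward a truthful answer token using a Hugging Face GPT-2 model. The model name, layer index, prompt, and answer token are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (assumptions, not the paper's exact procedure): score MLP
# neurons in one transformer block by gradient x activation attribution toward
# a truthful answer token, to nominate candidate "truth neurons".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; the paper studies several model scales
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6   # hypothetical layer to inspect
acts = {}

def save_activation(module, inputs, output):
    output.retain_grad()        # keep the gradient of the MLP activation
    acts["mlp"] = output
    return output

hook = model.transformer.h[layer_idx].mlp.act.register_forward_hook(save_activation)

prompt = "Q: Can coughing effectively stop a heart attack? A:"
answer_id = tokenizer(" No").input_ids[0]   # truthful single-token continuation
inputs = tokenizer(prompt, return_tensors="pt")

logits = model(**inputs).logits[0, -1]               # next-token logits
torch.log_softmax(logits, dim=-1)[answer_id].backward()

a = acts["mlp"][0, -1]          # activations at the final prompt position
g = acts["mlp"].grad[0, -1]     # their gradients w.r.t. the answer log-prob
scores = (a * g).abs()
print("candidate neurons:", torch.topk(scores, k=10).indices.tolist())

hook.remove()
```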

πŸ“ Abstract
Despite their remarkable success and deployment across diverse workflows, language models sometimes produce untruthful responses. Our limited understanding of how truthfulness is mechanistically encoded within these models jeopardizes their reliability and safety. In this paper, we propose a method for identifying representations of truthfulness at the neuron level. We show that language models contain truth neurons, which encode truthfulness in a subject-agnostic manner. Experiments conducted across models of varying scales validate the existence of truth neurons, confirming that the encoding of truthfulness at the neuron level is a property shared by many language models. The distribution patterns of truth neurons over layers align with prior findings on the geometry of truthfulness. Selectively suppressing the activations of truth neurons found through the TruthfulQA dataset degrades performance both on TruthfulQA and on other benchmarks, showing that the truthfulness mechanisms are not tied to a specific dataset. Our results offer novel insights into the mechanisms underlying truthfulness in language models and highlight potential directions toward improving their trustworthiness and reliability.
Problem

Research questions and friction points this paper is trying to address.

Identifying neuron-level representations of truthfulness in language models
Validating existence of subject-agnostic truth neurons across model scales
Exploring truthfulness mechanisms to improve model reliability and safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identify truth neurons in language models using the TruthfulQA dataset
Validate truth neurons across models of varying scales
Suppress truth-neuron activations to test their causal role in truthfulness (see the sketch below)
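The suppression step can be illustrated as an activation intervention: zero the candidate neurons during the forward pass and observe how the output (or benchmark accuracy) changes. Below is a minimal sketch using a PyTorch forward hook on a Hugging Face GPT-2 model; the model, layer index, and neuron indices are hypothetical placeholders, not the neurons identified in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's identified neurons):
# suppress candidate truth-neuron activations with a forward hook and compare
# the model's output with and without the intervention.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                        # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6                              # hypothetical layer
neuron_ids = torch.tensor([13, 402, 771])  # hypothetical truth-neuron indices

def suppress_neurons(module, inputs, output):
    # Zero the selected hidden units of the MLP activation at every position.
    output[..., neuron_ids] = 0.0
    return output

prompt = "Q: Can coughing effectively stop a heart attack? A:"
inputs = tokenizer(prompt, return_tensors="pt")

def answer():
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print("baseline  :", answer())

# Attach the suppression hook to the MLP activation of the chosen block (GPT-2 layout).
handle = model.transformer.h[layer_idx].mlp.act.register_forward_hook(suppress_neurons)
print("suppressed:", answer())
handle.remove()                            # restore normal behavior
```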