Prompt-Guided Internal States for Hallucination Detection of Large Language Models

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the weak cross-domain generalization of large language model (LLM) hallucination detection, this paper proposes PRISM: a framework that explicitly guides LLMs to activate structural representations correlated with textual veracity via prompt engineering, and aligns these representations across transformer layers. PRISM introduces the first prompt-driven internal activation modulation mechanism—requiring no cross-domain labeled data—and integrates structure-aware feature enhancement with prompt engineering grounded in hidden-layer activations. It is plug-and-play compatible with mainstream detectors (e.g., logit-difference scoring, hidden-state statistics). Evaluated on multi-domain benchmarks, PRISM achieves an average 12.3% F1-score improvement over baselines. Under zero-shot cross-domain transfer, it retains over 86% of in-domain performance, significantly enhancing detector robustness and generalizability.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across a variety of tasks in different domains. However, they sometimes generate responses that are logically coherent but factually incorrect or misleading, which is known as LLM hallucinations. Data-driven supervised methods train hallucination detectors by leveraging the internal states of LLMs, but detectors trained on specific domains often struggle to generalize well to other domains. In this paper, we aim to enhance the cross-domain performance of supervised detectors with only in-domain data. We propose a novel framework, prompt-guided internal states for hallucination detection of LLMs, namely PRISM. By utilizing appropriate prompts to guide changes to the structure related to text truthfulness in LLMs' internal states, we make this structure more salient and consistent across texts from different domains. We integrated our framework with existing hallucination detection methods and conducted experiments on datasets from different domains. The experimental results indicate that our framework significantly enhances the cross-domain generalization of existing hallucination detection methods.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in Large Language Models (LLMs).
Improve cross-domain generalization of hallucination detectors.
Use prompt-guided internal states to enhance detection accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes prompt-guided internal states for detection
Enhances cross-domain generalization with in-domain data
Integrates with existing hallucination detection methods
🔎 Similar Papers
No similar papers found.
F
Fujie Zhang
School of Mathematical Sciences, Nankai University
P
Peiqi Yu
School of Mathematical Sciences, Nankai University
Biao Yi
Biao Yi
Nankai University
LLM SecurityTrustworthy LLMSteganography
Baolei Zhang
Baolei Zhang
Nankai University
T
Tong Li
College of Computer Science, Nankai University
Z
Zheli Liu
College of Computer Science, Nankai University