Neural Probe-Based Hallucination Detection for Large Language Models

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate factual hallucinations, hindering their deployment in high-stakes applications. Existing detection methods—relying on uncertainty estimation or external retrieval—suffer from overconfident false negatives, knowledge coverage limitations, and retrieval latency. To address these issues, we propose a lightweight, real-time token-level hallucination detection framework: we freeze the LLM backbone and attach a nonlinear MLP-based neural probe to high-level hidden states—replacing conventional linear probes. A multi-objective joint loss function is introduced to improve discriminative stability and semantic separability. Furthermore, we construct a layer-position–performance response model and employ Bayesian optimization to automatically identify the optimal probing layer. Evaluated on LongFact, HealthBench, and TriviaQA, our method achieves state-of-the-art performance, significantly improving accuracy, recall, and robustness under low false-positive constraints.
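The core mechanism — a small nonlinear probe reading the frozen LLM's hidden states — can be sketched as a toy in NumPy. The two-layer ReLU architecture, dimensions, and initialization below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class MLPProbe:
    """Toy nonlinear probe: maps each token's hidden state to P(hallucinated).
    Sizes and init are illustrative assumptions, not the paper's exact setup."""
    def __init__(self, d_model: int, d_hidden: int = 64):
        self.W1 = rng.normal(0.0, 0.02, (d_model, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0.0, 0.02, (d_hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, h: np.ndarray) -> np.ndarray:
        # h: (n_tokens, d_model) activations from one frozen LLM layer
        z = np.maximum(h @ self.W1 + self.b1, 0.0)      # ReLU nonlinearity
        logits = z @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-logits[:, 0]))      # sigmoid, one score per token

# Stand-in for hidden states of 5 tokens from a 768-dim model layer
hidden = rng.normal(size=(5, 768))
probe = MLPProbe(d_model=768)
p_halluc = probe(hidden)
```

Only the probe's small weight matrices would be trained; the LLM backbone itself stays frozen, which is what makes the approach lightweight and real-time.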

📝 Abstract
Large language models (LLMs) excel at text generation and knowledge question answering, but they are prone to generating hallucinated content, which severely limits their application in high-risk domains. Current hallucination detection methods based on uncertainty estimation and external knowledge retrieval share a key limitation: they still produce erroneous content at high confidence levels and depend heavily on retrieval efficiency and knowledge coverage. In contrast, probe methods that leverage the model's hidden-layer states offer real-time, lightweight advantages, but traditional linear probes struggle to capture nonlinear structures in deep semantic spaces. To overcome these limitations, we propose a neural-network-based framework for token-level hallucination detection. Freezing the language model's parameters, we employ lightweight MLP probes to model high-level hidden states nonlinearly. A multi-objective joint loss function is designed to enhance detection stability and semantic separability. Additionally, we establish a layer-position–probe-performance response model and use Bayesian optimization to automatically search for the optimal probe insertion layer, yielding superior training results. Experimental results on LongFact, HealthBench, and TriviaQA demonstrate that MLP probes significantly outperform state-of-the-art methods in accuracy, recall, and detection capability under low false-positive conditions.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinated content in large language models
Improve real-time lightweight detection using neural probes
Enhance detection accuracy and reduce false positives
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLP probes for nonlinear hallucination detection
Multi-objective loss enhances detection stability
Bayesian optimization finds optimal probe layers
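The layer search in the last bullet can be sketched as Bayesian optimization over a layer-position–performance response. Below is a minimal GP-UCB toy in NumPy; `val_auroc` is a hypothetical stand-in for "train a probe at this layer and score it on validation data", and the kernel, lengthscale, and acquisition choices are assumptions rather than the paper's settings:

```python
import numpy as np

layers = np.arange(32, dtype=float)       # candidate probe layers 0..31

def val_auroc(layer: float) -> float:
    """Hypothetical layer->performance response, peaking at layer 24.
    Stands in for: train a probe at `layer`, measure validation AUROC."""
    return 0.9 - 0.001 * (layer - 24.0) ** 2

def rbf(a, b, ls=4.0):
    # Squared-exponential kernel over layer indices
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(xq, x, y):
    # Zero-mean GP on centered targets; tiny jitter keeps the solve stable
    m = y.mean()
    K_inv = np.linalg.inv(rbf(x, x) + 1e-6 * np.eye(len(x)))
    Ks = rbf(xq, x)
    mu = m + Ks @ K_inv @ (y - m)
    var = 1.0 - np.sum((Ks @ K_inv) * Ks, axis=1)
    return mu, np.maximum(var, 1e-12)

# Probe three layers up front, then pick further layers by GP-UCB
x = np.array([4.0, 16.0, 28.0])
y = np.array([val_auroc(l) for l in x])
for _ in range(6):
    mu, var = gp_posterior(layers, x, y)
    cand = layers[np.argmax(mu + 2.0 * np.sqrt(var))]   # UCB acquisition
    if cand in x:                                       # acquisition revisits a point: stop
        break
    x = np.append(x, cand)
    y = np.append(y, val_auroc(cand))

best_layer = int(x[np.argmax(y)])
```

The point of the surrogate is sample efficiency: each "evaluation" here is cheap, but in the real setting it means training and validating a probe, so the search should converge on a near-optimal layer in a handful of trials rather than sweeping all layers.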
Shize Liang
Faculty of Computing, Harbin Institute of Technology
Hongzhi Wang
IBM Almaden Research Center