🤖 AI Summary
This work addresses the critical challenge of hallucinations in large language model (LLM) outputs, which severely hinder their trustworthy deployment. The authors propose a training-free, black-box hallucination detection method that leverages teacher forcing to obtain token embeddings from generated responses and constructs a Wasserstein distance matrix based on optimal transport theory. From this matrix, they derive two novel metrics—Average Wasserstein Distance (AvgWD) and Eigenvalue-based Wasserstein Distance (EigenWD)—as measures of the intrinsic structural complexity of the generation distribution. This approach is the first to utilize distributional complexity as a signal for detecting hallucinations. Evaluated across multiple models and datasets, the method achieves performance comparable to strong uncertainty-based baselines, with AvgWD and EigenWD demonstrating complementary strengths and collectively validating distributional complexity as an effective indicator of response veracity.
📝 Abstract
Hallucinations in large language models (LLMs) remain a central obstacle to trustworthy deployment, motivating detectors that are accurate, lightweight, and broadly applicable. Since an LLM with a prompt defines a conditional distribution, we argue that the complexity of this distribution is an indicator of hallucination. However, the density of the distribution is unknown, and each sample (i.e., a response generated for the prompt) is itself a discrete distribution over tokens, which makes quantifying the distribution's complexity challenging. We propose to compute pairwise optimal-transport distances between the sets of token embeddings of the samples, which yields a Wasserstein distance matrix measuring the cost of transforming one sample into another. This Wasserstein distance matrix provides a means to quantify the complexity of the distribution defined by the LLM with the prompt. From the Wasserstein distance matrix, we derive two complementary signals: AvgWD, measuring the average cost, and EigenWD, measuring the cost complexity. This yields a training-free detector for hallucinations in LLMs. We further extend the framework to black-box LLMs via teacher forcing with an accessible teacher model. Experiments show that AvgWD and EigenWD are competitive with strong uncertainty baselines and exhibit complementary behavior across models and datasets, highlighting distribution complexity as an effective signal for LLM truthfulness.
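The pipeline described above can be sketched in a few lines: compute exact 1-Wasserstein distances between uniformly weighted token-embedding clouds via a small linear program, assemble the pairwise distance matrix, and reduce it to the two scalar signals. Note this is a minimal illustration, not the paper's implementation: the function names and, in particular, the eigenvalue-based reduction used for `EigenWD` (spectral entropy of the distance matrix) are assumptions chosen for concreteness.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein(X, Y):
    """Exact 1-Wasserstein (earth mover's) distance between two uniformly
    weighted point clouds X (n, d) and Y (m, d), solved as a linear program
    over the transport plan. Fine for small clouds; real pipelines would use
    a dedicated OT solver."""
    n, m = len(X), len(Y)
    # Pairwise Euclidean ground-cost matrix between the two embedding sets.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # Marginal constraints: each row of the plan sums to 1/n, each column to 1/m.
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(1.0 / n)
    for j in range(m):
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(1.0 / m)
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

def avg_eigen_wd(samples):
    """samples: list of (num_tokens_i, d) embedding arrays, one per response.
    Returns (AvgWD, EigenWD); the EigenWD definition here (spectral entropy
    of the symmetric distance matrix) is an illustrative assumption."""
    k = len(samples)
    W = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            W[i, j] = W[j, i] = wasserstein(samples[i], samples[j])
    avg_wd = W[np.triu_indices(k, 1)].mean()  # average pairwise transport cost
    # Normalize the eigenvalue spectrum and take its entropy as a
    # complexity measure of the distance structure.
    eig = np.abs(np.linalg.eigvalsh(W))
    p = eig / eig.sum()
    eigen_wd = -np.sum(p * np.log(p + 1e-12))
    return avg_wd, eigen_wd
```

In this sketch a higher AvgWD means sampled responses are, on average, far apart in embedding space, while a higher EigenWD means the distance matrix has a spread-out spectrum, i.e., the responses do not collapse onto a simple low-dimensional arrangement; both are taken as proxies for a more complex generation distribution.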