AI Summary
This paper addresses the challenge of quantifying hallucinations in generative AI under in-context learning (ICL), where ground-truth labels and external verifiers are unavailable. We propose a label-free, verifier-free method for estimating the hallucination rate. Our core contribution is a formalization of hallucination as a generated response with low model likelihood, coupled with a Bayesian posterior predictive framework that models the joint distribution of responses and data, enabling computationally tractable and interpretable hallucination estimation solely from generated text and its log-probabilities. The method integrates Bayesian modeling, posterior predictive analysis, and calibration of LLM output probabilities. Extensive evaluation on synthetic regression and natural language ICL tasks demonstrates consistent, reliable performance across multiple LLMs and diverse tasks, substantially reducing dependence on costly human annotation or external verification.
Abstract
This paper presents a method for estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and a prediction question and asked to generate a response. One interpretation of ICL assumes that the CGM computes the posterior predictive of an unknown Bayesian model, which implicitly defines a joint distribution over observable datasets and latent mechanisms. This joint distribution factorizes into two components: the model prior over mechanisms and the model likelihood of datasets given a mechanism. With this perspective, we define a hallucination as a generated response to the prediction question with low model likelihood given the mechanism. We develop a new method that takes an ICL problem and estimates the probability that a CGM will generate a hallucination. Our method only requires generating prediction questions and responses from the CGM and evaluating its response log probability. We empirically evaluate our method using large language models for synthetic regression and natural language ICL tasks.
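The estimation procedure the abstract describes — generate responses from the CGM, evaluate their log-probabilities, and estimate the probability of a low-likelihood response — can be sketched as a simple Monte Carlo loop. This is a hedged illustration under stated assumptions, not the authors' implementation: `generate_response_with_logprob` is a hypothetical stand-in for any CGM API that returns a sampled response together with its total log-probability, and the cutoff `tau` is an illustrative threshold for "low model likelihood" (the paper derives its criterion from the Bayesian posterior predictive rather than a fixed constant).

```python
import random


def generate_response_with_logprob(prompt, rng):
    """Hypothetical stand-in for a CGM call that samples a response
    to an ICL prompt and returns the model's log-probability for it.
    Here the log-probability is drawn synthetically for illustration."""
    response = f"response-{rng.random():.3f}"
    logprob = rng.gauss(-20.0, 5.0)  # synthetic total log-probability
    return response, logprob


def estimate_hallucination_rate(dataset_prompt, n_samples=200, tau=-25.0, seed=0):
    """Monte Carlo estimate of the probability that a generated
    response falls below the log-likelihood threshold `tau`.

    `tau` is an assumed cutoff: responses the model itself assigns
    log-probability below `tau` are counted as hallucinations."""
    rng = random.Random(seed)
    low_likelihood = 0
    for _ in range(n_samples):
        _, logprob = generate_response_with_logprob(dataset_prompt, rng)
        if logprob < tau:
            low_likelihood += 1
    return low_likelihood / n_samples


rate = estimate_hallucination_rate("ICL dataset + prediction question")
print(f"estimated hallucination rate: {rate:.2f}")
```

Note the estimator only needs the ability to sample responses and score their log-probabilities — no ground-truth labels or external verifier appear anywhere in the loop, which is the practical point of the method.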