Estimating the Hallucination Rate of Generative AI

πŸ“… 2024-06-11
πŸ›οΈ Neural Information Processing Systems
πŸ“ˆ Citations: 18
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
This paper addresses the challenge of quantifying hallucinations in generative AI under in-context learning (ICL), where ground-truth labels and external verifiers are unavailable. The authors propose a label-free, verifier-free method for estimating the hallucination rate. The core contribution is a formalization of hallucination as a generated response with low model likelihood given the latent mechanism, set within a Bayesian posterior predictive framework that treats the model as defining a joint distribution over datasets and mechanisms. This makes hallucination estimation tractable and interpretable using only generated responses and their log probabilities. Evaluation on synthetic regression and natural language ICL tasks shows consistent, reliable performance across multiple LLMs and diverse tasks, reducing dependence on costly human annotation or external verification resources.

πŸ“ Abstract
This paper presents a method for estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and a prediction question and asked to generate a response. One interpretation of ICL assumes that the CGM computes the posterior predictive of an unknown Bayesian model, which implicitly defines a joint distribution over observable datasets and latent mechanisms. This joint distribution factorizes into two components: the model prior over mechanisms and the model likelihood of datasets given a mechanism. With this perspective, we define a hallucination as a generated response to the prediction question with low model likelihood given the mechanism. We develop a new method that takes an ICL problem and estimates the probability that a CGM will generate a hallucination. Our method only requires generating prediction questions and responses from the CGM and evaluating its response log probability. We empirically evaluate our method using large language models for synthetic regression and natural language ICL tasks.
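The abstract describes the estimation procedure at a high level: sample responses from the conditional generative model (CGM), evaluate the model's own log probability of each response, and count how often that likelihood falls below a hallucination threshold. A minimal Monte Carlo sketch of that idea follows; the function names, interface, and fixed threshold are hypothetical simplifications for illustration, not the paper's actual estimator.

```python
def estimate_hallucination_rate(generate_response, log_prob, prompt,
                                n_samples=100, log_prob_threshold=-50.0):
    """Monte Carlo sketch of label-free hallucination-rate estimation.

    `generate_response(prompt)` samples a response from the CGM, and
    `log_prob(prompt, response)` returns the model's log probability of
    that response. Both callables and the fixed `log_prob_threshold` are
    hypothetical stand-ins for this illustration.
    """
    low_likelihood = 0
    for _ in range(n_samples):
        response = generate_response(prompt)    # sample a response from the CGM
        lp = log_prob(prompt, response)         # model's own log probability of it
        if lp < log_prob_threshold:             # treat low likelihood as hallucination
            low_likelihood += 1
    # fraction of sampled responses the model itself deems unlikely
    return low_likelihood / n_samples
```

Note that nothing here requires labels or an external verifier: the estimate is computed entirely from the model's samples and its own log probabilities, which is the property the abstract emphasizes.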
Problem

Research questions and friction points this paper is trying to address.

Estimating hallucination rates for in-context learning without labels or external verifiers
Defining hallucinations as generated responses with low model likelihood
Developing an estimation method that uses only generated responses and their log probabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Estimates the probability that a CGM hallucinates on a given ICL problem
Defines hallucinations via model likelihood under a Bayesian posterior predictive view
Requires only sampled responses and their evaluated log probabilities
πŸ”Ž Similar Papers
No similar papers found.
Authors
A. Jesson (Department of Statistics, Columbia University)
Nicolas Beltran-Velez (Ph.D. Student, Columbia University) — Machine Learning, Deep Learning, Artificial Intelligence
Quentin Chu (Department of Computer Science, Columbia University)
Sweta Karlekar (Department of Computer Science, Columbia University)
Jannik Kossen (FAIR, Meta)
Yarin Gal (Professor of Machine Learning, University of Oxford) — Machine Learning, Artificial Intelligence, Probability Theory, Statistics
John P. Cunningham (Department of Statistics, Columbia University)
David M. Blei (Department of Computer Science, Columbia University)