Towards Statistical Factuality Guarantee for Large Vision-Language Models

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate hallucinated textual outputs inconsistent with input images, hindering their deployment in high-reliability applications. To address this, we propose ConfLVLM—a novel framework that introduces conformal prediction to LVLM factual consistency verification for the first time. ConfLVLM decomposes outputs at the statement level, performs uncertainty-aware hypothesis testing, and applies dynamic filtering to achieve provable hallucination risk control—without distributional assumptions and under few-shot settings. It is model-agnostic, requiring no fine-tuning or retraining of black-box LVLMs. Evaluated on LLaVA-1.5, ConfLVLM reduces statement-level error rates in scene description from 87.8% to 10.0%, achieves a 95.3% true positive rate for hallucination detection, and demonstrates strong generalization across diverse domains including medical report generation and document understanding.

📝 Abstract
Advancements in Large Vision-Language Models (LVLMs) have demonstrated promising performance in a variety of vision-language tasks involving image-conditioned free-form text generation. However, growing concerns about hallucinations in LVLMs, where the generated text is inconsistent with the visual context, are becoming a major impediment to deploying these models in applications that demand guaranteed reliability. In this paper, we introduce a framework to address this challenge, ConfLVLM, which is grounded in conformal prediction to achieve finite-sample, distribution-free statistical guarantees on the factuality of LVLM output. This framework treats an LVLM as a hypothesis generator, where each generated text detail (or claim) is considered an individual hypothesis. It then applies a statistical hypothesis testing procedure to verify each claim using efficient heuristic uncertainty measures, filtering out unreliable claims before returning any response to users. We conduct extensive experiments covering three representative application domains: general scene understanding, medical radiology report generation, and document understanding. Remarkably, ConfLVLM reduces the error rate of claims generated by LLaVA-1.5 for scene descriptions from 87.8% to 10.0% by filtering out erroneous claims with a 95.3% true positive rate. Our results further demonstrate that ConfLVLM is highly flexible and can be applied to any black-box LVLM paired with any uncertainty measure for any image-conditioned free-form text generation task, while providing a rigorous guarantee on controlling the risk of hallucination.
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucinations in Large Vision-Language Models (LVLMs).
Ensures statistical guarantees on LVLM output factuality.
Reduces error rates in image-conditioned text generation tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal prediction ensures statistical factuality guarantees.
Efficient heuristic uncertainty measures filter unreliable claims.
Flexible framework applicable to any black-box LVLMs.
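The calibration step behind these guarantees can be illustrated with a minimal sketch of split-conformal risk control: given a held-out calibration set of responses with per-claim uncertainty scores and factual-correctness labels, choose the most permissive filtering threshold whose adjusted empirical risk stays below the target level. All function and variable names below are illustrative assumptions, not the paper's API, and the exact risk bound used by ConfLVLM may differ.

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_correct, alpha=0.10):
    """Sketch of split-conformal risk calibration for claim filtering.

    cal_scores:  list of arrays, one per calibration response, holding a
                 heuristic uncertainty score for each claim (lower = more
                 confident).
    cal_correct: list of boolean arrays marking which claims are factually
                 correct.
    Returns a threshold tau; at test time, only claims with score <= tau
    are returned to the user.
    """
    n = len(cal_scores)
    # Candidate thresholds: every observed score, tried most permissive first.
    candidates = np.sort(np.concatenate(cal_scores))[::-1]
    for tau in candidates:
        risks = []
        for scores, correct in zip(cal_scores, cal_correct):
            keep = scores <= tau
            if keep.sum() == 0:
                risks.append(0.0)  # an empty (fully filtered) output has no errors
            else:
                # Fraction of retained claims that are hallucinated
                risks.append((keep & ~correct).sum() / keep.sum())
        # Conformal risk-control style adjustment of the empirical mean risk
        if (np.sum(risks) + 1.0) / (n + 1) <= alpha:
            return tau
    return -np.inf  # no threshold satisfies the bound: filter everything
```

At deployment, claims whose uncertainty score exceeds the calibrated `tau` are dropped, which is what trades response completeness for the controlled hallucination rate reported in the results above.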
Zhuohang Li
Vanderbilt University
Chao Yan
Instructor at DBMI, VUMC; CS PhD from Vanderbilt U
AI for medicine, Synthetic health data, Privacy, Fairness
Nicholas J. Jackson
Vanderbilt University
Wendi Cui
Intuit, Carnegie Mellon University
LLM, Machine Learning, Search
Bo Li
University of Illinois Urbana-Champaign
Jiaxin Zhang
Intuit, Intuit AI Research
Bradley A. Malin
Vanderbilt University, Vanderbilt University Medical Center