OVFact: Measuring and Improving Open-Vocabulary Factuality for Long Caption Models

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large vision-language models (VLMs) frequently generate factually inaccurate long descriptions, yet existing hallucination evaluation methods rely heavily on human annotation and struggle to scale to lengthy, heterogeneous outputs. To address this, the paper proposes OVFact, a reference-free framework for assessing factual consistency in long visual descriptions that leverages open-vocabulary visual grounding and tool-augmented verification. Its core contributions are threefold: (1) the first integration of open-vocabulary localization with structured tool-based validation to jointly quantify descriptive richness and factual accuracy; (2) a reference-free metric whose design enables factuality-based data filtering and efficient data curation for model training; and (3) empirical validation showing that training on only the 20–40% of a noisy pretraining set selected by OVFact (a 2.5–5x reduction in data volume) meaningfully improves factual precision while preserving descriptive diversity across multiple long-caption benchmarks.
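The summary above describes a metric that captures both factual precision and descriptiveness (recall) in one score. As a minimal sketch of that idea, assuming entities are already extracted and verified (all names here are hypothetical, and a real OVFact pipeline would obtain these sets from an open-vocabulary detector and tool-based verification rather than from hand-built sets):

```python
def ovfact_style_score(mentioned, grounded, image_entities):
    """Combine factual precision and descriptiveness recall over entity sets.

    mentioned      -- entities extracted from the caption
    grounded       -- caption entities that a verifier confirmed in the image
    image_entities -- entities an open-vocabulary detector found in the image
    """
    # Factual precision: share of caption entities confirmed in the image.
    precision = len(mentioned & grounded) / len(mentioned) if mentioned else 0.0
    # Descriptiveness recall: share of image entities the caption covers.
    recall = len(mentioned & image_entities) / len(image_entities) if image_entities else 0.0
    # Harmonic mean folds both aspects into a single score.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Toy example: the caption hallucinates nothing that was checked as "tree",
# but "tree" fails grounding, so precision drops while recall stays high.
p, r, f = ovfact_style_score(
    mentioned={"dog", "frisbee", "tree"},
    grounded={"dog", "frisbee"},
    image_entities={"dog", "frisbee", "grass", "tree"},
)
```

This is only an illustration of how one metric can reward rich-but-accurate captions; the paper's actual scoring details may differ.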

📝 Abstract
Large vision-language models (VLMs) often struggle to generate long and factual captions. However, traditional measures of hallucination and factuality are not well suited to evaluating longer, more diverse captions, or to settings where ground-truth human-annotated captions are unavailable. We introduce OVFact, a novel method for measuring the factuality of long captions that leverages open-vocabulary visual grounding and tool-based verification without depending on human annotations. Our method improves agreement with human judgments and captures both caption descriptiveness (recall) and factual precision in the same metric. Furthermore, unlike previous metrics, our reference-free design enables new applications in factuality-based data filtering. We observe that models trained on an OVFact-filtered subset (2.5–5x smaller) of a large-scale, noisy (VLM-generated) pretraining set meaningfully improve factual precision without sacrificing caption descriptiveness across a range of downstream long-caption benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Measure factuality in long captions without human annotations
Improve caption descriptiveness and factual precision together
Filter noisy training data to enhance model factuality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-vocabulary visual grounding for factuality measurement
Tool-based verification without human annotations
Reference-free metric enables factuality-based data filtering
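The last bullet points at the filtering application: because the metric needs no reference captions, it can score and rank an entire noisy pretraining set. A minimal sketch of such a filter, assuming a per-sample factuality score is already available (the function name, signature, and retention default are hypothetical, not the paper's implementation):

```python
def filter_by_factuality(samples, score_fn, keep_fraction=0.3):
    """Keep the top `keep_fraction` of samples by factuality score.

    samples       -- iterable of (image, caption) records (any hashable items)
    score_fn      -- callable mapping a sample to its factuality score
    keep_fraction -- share of data to retain; 0.2-0.4 mirrors the 2.5-5x
                     reduction described above
    """
    ranked = sorted(samples, key=score_fn, reverse=True)
    k = max(1, round(len(ranked) * keep_fraction))
    return ranked[:k]


# Toy usage with precomputed (id, score) pairs standing in for real samples.
samples = [("a", 0.9), ("b", 0.1), ("c", 0.5), ("d", 0.8), ("e", 0.3)]
kept = filter_by_factuality(samples, score_fn=lambda s: s[1], keep_fraction=0.4)
```

A rank-based cut like this keeps the retained data volume predictable; the paper may instead use an absolute score threshold or additional curation steps.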