Where is this coming from? Making groundedness count in the evaluation of Document VQA models

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Document VQA evaluation methods overlook semantic plausibility and multimodal grounding (i.e., textual semantics plus visual grounding), treating hallucinations and factual errors equivalently and thus failing to reflect models' true reasoning capabilities. To address this, we propose the first configurable evaluation framework that jointly models semantic consistency and visual spatial localization. Our method integrates OCR text alignment, visual region localization, and joint semantic consistency scoring, with user-controllable weighting parameters validated via human judgments. Experiments demonstrate that our framework significantly improves evaluation discriminability: it achieves human-preferred re-rankings on major benchmarks and effectively identifies hallucinations and reasoning failures. By enabling principled calibration and robust multimodal assessment, our approach establishes a more reliable, interpretable, and human-aligned benchmark for Document VQA evaluation.

📝 Abstract
Document Visual Question Answering (VQA) models have evolved at an impressive rate over the past few years, coming close to or matching human performance on some benchmarks. We argue that common evaluation metrics used by popular benchmarks do not account for the semantic and multimodal groundedness of a model's outputs. As a result, hallucinations and major semantic errors are treated the same way as well-grounded outputs, and the evaluation scores do not reflect the reasoning capabilities of the model. In response, we propose a new evaluation methodology that accounts for the groundedness of predictions with regard to the semantic characteristics of the output as well as the multimodal placement of the output within the input document. Our proposed methodology is parameterized in such a way that users can configure the score according to their preferences. We validate our scoring methodology using human judgment and show its potential impact on existing popular leaderboards. Through extensive analyses, we demonstrate that our proposed method produces scores that are a better indicator of a model's robustness and tends to give higher rewards to better-calibrated answers.
Problem

Research questions and friction points this paper is trying to address.

Evaluating Document VQA models' groundedness in outputs
Addressing hallucinations and semantic errors in model evaluations
Proposing a configurable scoring method for robustness assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

New evaluation methodology for Document VQA
Measures semantic and multimodal groundedness
Configurable scoring based on user preferences
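The configurable scoring idea above can be illustrated with a minimal sketch: a weighted combination of a semantic-similarity term and a visual-localization term (here, intersection-over-union of evidence boxes). The function names, weights, and the IoU choice are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a configurable groundedness score. It assumes the
# paper's general idea (jointly weighting semantic consistency and spatial
# localization); the specific formula and defaults are illustrative only.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def grounded_score(semantic_sim, pred_box, gold_box, w_sem=0.6, w_loc=0.4):
    """Weighted groundedness score.

    semantic_sim: similarity in [0, 1] between predicted and gold answers
                  (e.g., from any text-similarity metric).
    w_sem, w_loc: user-configurable weights; must sum to 1.
    """
    assert abs(w_sem + w_loc - 1.0) < 1e-9, "weights must sum to 1"
    return w_sem * semantic_sim + w_loc * iou(pred_box, gold_box)
```

Under this sketch, a correct answer localized to the right region scores 1.0, while a hallucinated answer with no overlapping evidence region is penalized on both terms; shifting `w_sem`/`w_loc` lets a user emphasize textual correctness or visual grounding as the paper's configurability suggests.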
Armineh Nourbakhsh
Research Director, AI Research, JP Morgan Chase & Co.
Natural Language Processing · Machine Learning · Deep Learning
Siddharth Parekh
Student, Carnegie Mellon University
Pranav Shetty
AI Research Associate Senior, JPMorgan Chase
Zhao Jin
Language Technologies Institute, Carnegie Mellon University
Sameena Shah
J.P. Morgan, New York
Carolyn Rose
Language Technologies Institute, Carnegie Mellon University