Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from OCR hallucinations when processing visually degraded documents, compromising factual fidelity in document understanding. Method: we propose a novel paradigm, "visual-uncertainty-aware faithful reasoning", comprising three components: (1) KIE-HVQA, the first benchmark explicitly designed to evaluate OCR hallucinations in degraded document understanding; (2) a visual uncertainty self-awareness mechanism that lets models abstain from answering under ambiguous visual inputs, enhancing robustness; and (3) a GRPO-based training framework integrating supervised fine-tuning with a novel reward function that jointly optimizes visual uncertainty modeling and vision-language alignment. Results: on KIE-HVQA, our 7B model achieves a 22% absolute improvement in hallucination-free accuracy over GPT-4o while preserving full performance on standard document understanding tasks, demonstrating both effectiveness and generalizability.
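The summary describes a reward that scores answers differently depending on whether the visual evidence is reliable: correct answers are rewarded on legible regions, while abstention is rewarded on degraded ones. A minimal sketch of that idea, not the authors' implementation — the abstain token, threshold, and reward values below are illustrative assumptions:

```python
def uncertainty_aware_reward(answer: str, gold: str,
                             visual_uncertainty: float,
                             abstain_token: str = "[UNCERTAIN]",
                             threshold: float = 0.5) -> float:
    """Score one rollout for GRPO-style policy optimization (hypothetical).

    visual_uncertainty: model's (or an oracle's) estimate in [0, 1] of how
    unreadable the queried region is; values and threshold are assumptions.
    """
    model_abstained = answer.strip() == abstain_token
    evidence_unreliable = visual_uncertainty > threshold

    if evidence_unreliable:
        # Degraded region: abstaining is the faithful behavior;
        # guessing from linguistic priors is the hallucination we penalize.
        return 1.0 if model_abstained else -1.0
    if model_abstained:
        # Legible region: refusing to answer is over-conservative.
        return -0.5
    # Legible region: reward exact-match answers.
    return 1.0 if answer.strip() == gold.strip() else 0.0
```

A reward of this shape makes abstention profitable only where the evidence is genuinely unreliable, which is how the described mechanism would discourage both hallucination and blanket refusal.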

📝 Abstract
Recent advances in multimodal large language models have enhanced document understanding by integrating textual and visual information. However, the response paradigm of existing models remains incomplete in real-world scenarios, particularly under visual degradation. In such conditions, models often fail to perceive visual degradation and ambiguity, leading to overreliance on linguistic priors or misaligned visual-textual reasoning. This inability to recognize uncertainty frequently results in hallucinated content, especially when a precise answer is not feasible. To demonstrate and analyze this problem, we propose KIE-HVQA, the first benchmark dedicated to evaluating OCR hallucination in degraded document understanding. The dataset includes test samples spanning identity cards and invoices, with simulated real-world degradations that affect OCR reliability. This setup evaluates a model's capacity, under degraded input, to distinguish reliable visual information and answer accordingly, highlighting the challenge of avoiding hallucination on uncertain data. To achieve vision-faithful reasoning and thereby avoid these issues, we further introduce a GRPO-based framework featuring a novel reward mechanism. By incorporating self-awareness of visual uncertainty and an analysis method that introduces refusal-to-answer cases to increase task difficulty within our supervised fine-tuning and reinforcement learning framework, we successfully mitigate hallucinations in ambiguous regions. Experiments on Qwen2.5-VL demonstrate that our 7B-parameter model achieves a 22% absolute improvement in hallucination-free accuracy over GPT-4o on KIE-HVQA, with no significant performance drop on standard tasks, highlighting both effectiveness and robustness.
Problem

Research questions and friction points this paper is trying to address.

Mitigating OCR hallucinations in multimodal document understanding
Evaluating models' reliability under visual degradation conditions
Enhancing vision-faithful reasoning to avoid hallucination on uncertain data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed GRPO-based framework for vision-faithful reasoning
Introduced self-awareness of visual uncertainty
Developed KIE-HVQA benchmark for OCR hallucination evaluation
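The GRPO framework named above computes advantages by normalizing rewards within a group of rollouts sampled for the same prompt, rather than learning a separate value baseline. A minimal sketch of that group-relative normalization (the general GRPO recipe, not this paper's specific configuration):

```python
import statistics


def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages for one prompt's sampled rollouts.

    Each rollout's advantage is its reward minus the group mean, divided
    by the group's (population) standard deviation, so rollouts compete
    against their siblings instead of a learned critic.
    """
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    if std == 0:
        # All rollouts scored identically: no learning signal.
        return [0.0 for _ in group_rewards]
    return [(r - mean) / std for r in group_rewards]
```

Under a reward that favors faithful answers and justified abstentions, this normalization pushes probability mass toward the better-scoring behaviors within each group.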