🤖 AI Summary
DocVQA faces challenges including inaccurate answer localization, limited interpretability, and a high risk of hallucination. To address these, we propose the first multimodal large language model (MLLM) framework that supports explicit spatial localization of answers. Our method introduces a novel dual-path architecture—comprising both OCR-dependent and OCR-free branches—and achieves tight end-to-end alignment between textual responses and spatial annotations in the image. By integrating a text detection module with image annotation injection, the model explicitly encodes textual positional relationships during inference. Evaluation employs both Intersection over Union (IoU) and Average Normalized Levenshtein Similarity (ANLS), yielding state-of-the-art performance on the standard DocVQA benchmark. This advancement significantly improves answer traceability, response transparency, and user trust—key requirements for reliable document understanding systems.
📝 Abstract
Document Visual Question Answering (VQA) requires models to interpret textual information within complex visual layouts and comprehend spatial relationships to answer questions based on document images. Existing approaches often lack interpretability and fail to precisely localize answers within the document, hindering users' ability to verify responses and understand the reasoning process. Moreover, standard metrics like Average Normalized Levenshtein Similarity (ANLS) focus on text accuracy but overlook spatial correctness. We introduce DLaVA, a novel method that enhances Multimodal Large Language Models (MLLMs) with answer localization capabilities for Document VQA. Our approach integrates image annotation directly into the MLLM pipeline, improving interpretability by enabling users to trace the model's reasoning. We present both OCR-dependent and OCR-free architectures, with the OCR-free approach eliminating the need for separate text recognition components, thus reducing complexity. To the best of our knowledge, DLaVA is the first approach to introduce answer localization within multimodal QA, marking a significant step forward in enhancing user trust and reducing the risk of AI hallucinations. Our contributions include enhancing interpretability and reliability by grounding responses in spatially annotated visual content, introducing answer localization in MLLMs, proposing a streamlined pipeline that combines an MLLM with a text detection module, and conducting comprehensive evaluations using both textual and spatial accuracy metrics, including Intersection over Union (IoU). Experimental results on standard datasets demonstrate that DLaVA achieves state-of-the-art performance, significantly enhancing model transparency and reliability. Our approach sets a new benchmark for Document VQA, highlighting the critical importance of precise answer localization and model interpretability.
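To make the two evaluation metrics above concrete, here is a minimal sketch of how IoU (spatial correctness of a predicted answer box) and ANLS (text accuracy of a predicted answer string) are typically computed. The `(x1, y1, x2, y2)` box format and the ANLS threshold of 0.5 are the conventional choices from the DocVQA evaluation protocol, assumed here rather than taken from the paper itself:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def levenshtein(s, t):
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (cs != ct)))  # substitution
        prev = curr
    return prev[-1]

def anls(prediction, ground_truths, tau=0.5):
    """Per-question ANLS: best normalized similarity over all ground-truth
    answers, zeroed out when the normalized edit distance exceeds tau."""
    best = 0.0
    for gt in ground_truths:
        p, g = prediction.lower().strip(), gt.lower().strip()
        denom = max(len(p), len(g))
        nl = levenshtein(p, g) / denom if denom > 0 else 0.0
        best = max(best, 1.0 - nl if nl < tau else 0.0)
    return best
```

A dataset-level score averages the per-question ANLS (and IoU) over all questions; evaluating both jointly is what lets the paper measure whether an answer is not only textually correct but also grounded in the right region of the document.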