🤖 AI Summary
Existing VD-RAG methods lack fine-grained visual evidence attribution and traceable reasoning processes, rendering their predictions unverifiable. To address this, we propose the Chain-of-Evidence paradigm and the Look As You Think reinforcement learning framework, which jointly integrate chain-of-thought reasoning with visual evidence localization. At each reasoning step, our method dynamically binds textual reasoning units to image regions (represented by bounding boxes and page indices), enabling process-level self-verification. Experiments building on Qwen2.5-VL-7B-Instruct demonstrate significant improvements: +8.23% in soft exact match and +47.0% in IoU@0.5, alongside strong cross-domain generalization. Our core contribution is the first integration of fine-grained multimodal evidence attribution into progressive reasoning chains, realized via end-to-end reinforcement learning to achieve verifiable visual document question answering.
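The summary describes each reasoning step as a textual unit bound to image regions given by bounding boxes and page indices. The paper's exact output schema is not specified here, so the following is only a hypothetical sketch of what one such evidence-grounded trajectory might look like as a data structure (all field names and values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class EvidenceRegion:
    page_index: int  # which document page the evidence appears on
    bbox: tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates

@dataclass
class CoEStep:
    thought: str                     # one textual reasoning unit
    evidence: list[EvidenceRegion]   # regions this unit is grounded to

# An illustrative two-step Chain-of-Evidence trajectory
trajectory = [
    CoEStep("The table on page 3 reports the 2022 revenue.",
            [EvidenceRegion(page_index=3, bbox=(120.0, 340.0, 480.0, 520.0))]),
    CoEStep("Revenue grew 12% year over year, so the answer is 12%.",
            [EvidenceRegion(page_index=3, bbox=(120.0, 500.0, 480.0, 560.0))]),
]
```

Because every step carries its own regions, each claim in the chain can be checked against the cited part of the document independently, which is what makes process-level self-verification possible.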
📝 Abstract
Visual evidence attribution, which identifies precise evidence sources in visual documents, ensures reliable and verifiable predictions from vision-language models (VLMs) in visual document retrieval-augmented generation (VD-RAG) for multimodal question answering. Most existing methods adopt end-to-end training to facilitate intuitive answer verification. However, they lack fine-grained supervision and progressive traceability throughout the reasoning process. In this paper, we introduce the Chain-of-Evidence (CoE) paradigm for VD-RAG. CoE unifies Chain-of-Thought (CoT) reasoning and visual evidence attribution by grounding reference elements in reasoning steps to specific regions with bounding boxes and page indices. To enable VLMs to generate such evidence-grounded reasoning, we propose Look As You Think (LAT), a reinforcement learning framework that trains models to produce verifiable reasoning paths with consistent attribution. During training, LAT evaluates the attribution consistency of each evidence region and provides rewards only when the CoE trajectory yields the correct answer, encouraging process-level self-verification. Experiments on vanilla Qwen2.5-VL-7B-Instruct with the Paper- and Wiki-VISA benchmarks show that LAT consistently improves over the base model in both single- and multi-image settings, yielding average gains of 8.23% in soft exact match (EM) and 47.0% in IoU@0.5. Moreover, LAT not only outperforms the supervised fine-tuning baseline, which is trained to directly produce answers with attribution, but also exhibits stronger generalization across domains.
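The abstract states that LAT scores the attribution consistency of each evidence region but grants reward only when the trajectory also yields the correct answer. The paper's actual reward design is not reproduced here; the following is a minimal sketch of that gating idea, assuming an IoU-based region match against annotated gold regions (the threshold and aggregation are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def coe_reward(answer_correct, pred_regions, gold_regions, iou_thresh=0.5):
    """Attribution-consistency reward gated on answer correctness:
    zero unless the trajectory's final answer is correct; otherwise the
    fraction of predicted regions that match some gold region at IoU >= thresh."""
    if not answer_correct or not pred_regions:
        return 0.0
    hits = sum(
        any(iou(p, g) >= iou_thresh for g in gold_regions) for p in pred_regions
    )
    return hits / len(pred_regions)
```

Gating on answer correctness means the policy cannot collect reward for plausible-looking but useless groundings; attribution quality only pays off when the reasoning chain actually reaches the right answer.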