AI Summary
Multimodal large language models (MLLMs) face dual challenges in long-document understanding: severe information interference and the high quadratic computational overhead inherent to Transformer architectures. To address these, we propose URaG, a novel framework that, for the first time, explicitly exploits the intrinsic "coarse-to-fine" cross-layer attention mechanism within MLLMs. URaG dynamically repurposes early Transformer layers as lightweight, end-to-end self-retrieval modules that operate during inference to identify and select salient multimodal (textual and visual) evidence in real time, thereby significantly compressing irrelevant content. Crucially, URaG eliminates the need for external retrieval systems while preserving fine-grained semantics, enabling joint optimization of retrieval and generation. Evaluated on multiple long-document understanding benchmarks, URaG achieves state-of-the-art performance while reducing computational cost by 44-56%, demonstrating a principled trade-off between efficiency and accuracy.
Abstract
Recent multimodal large language models (MLLMs) still struggle with long document understanding due to two fundamental challenges: information interference from abundant irrelevant content, and the quadratic computational cost of Transformer-based architectures. Existing approaches primarily fall into two categories: token compression, which sacrifices fine-grained details; and introducing external retrievers, which increase system complexity and prevent end-to-end optimization. To address these issues, we conduct an in-depth analysis and observe that MLLMs exhibit a human-like coarse-to-fine reasoning pattern: early Transformer layers attend broadly across the document, while deeper layers focus on relevant evidence pages. Motivated by this insight, we posit that the inherent evidence localization capabilities of MLLMs can be explicitly leveraged to perform retrieval during the reasoning process, facilitating efficient long document understanding. To this end, we propose URaG, a simple-yet-effective framework that Unifies Retrieval and Generation within a single MLLM. URaG introduces a lightweight cross-modal retrieval module that converts the early Transformer layers into an efficient evidence selector, identifying and preserving the most relevant pages while discarding irrelevant content. This design enables the deeper layers to concentrate computational resources on pertinent information, improving both accuracy and efficiency. Extensive experiments demonstrate that URaG achieves state-of-the-art performance while reducing computational overhead by 44-56%. The code is available at https://github.com/shi-yx/URaG.
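The coarse-to-fine retrieval idea described above can be sketched in a few lines: score each page by the attention mass that query tokens place on its tokens at an early layer, keep the top-scoring pages, and discard the rest before the deeper layers run. This is a minimal illustration under assumed names (`select_evidence_pages`, the attention-mass heuristic, the toy shapes are all hypothetical); URaG's actual retrieval module is a learned, jointly optimized component, not this fixed rule.

```python
import numpy as np

def select_evidence_pages(attn, page_spans, top_k=2):
    """Hypothetical sketch of early-layer evidence selection.

    attn: (num_query_tokens, num_doc_tokens) attention weights taken from
          an early Transformer layer.
    page_spans: list of (start, end) token ranges, one per document page.
    Returns the indices of the top_k pages by total attention mass.
    """
    # Sum the attention that all query tokens place on each page's tokens.
    scores = [attn[:, start:end].sum() for start, end in page_spans]
    # Keep the top_k highest-scoring pages; the rest would be pruned so
    # deeper layers only process relevant content.
    order = np.argsort(scores)[::-1][:top_k]
    return sorted(int(i) for i in order)

# Toy example: 6 document tokens split into 3 pages of 2 tokens each.
attn = np.array([
    [0.05, 0.05, 0.40, 0.30, 0.10, 0.10],
    [0.02, 0.08, 0.35, 0.35, 0.10, 0.10],
])
pages = [(0, 2), (2, 4), (4, 6)]
print(select_evidence_pages(attn, pages, top_k=1))  # -> [1]
```

In this toy input the middle page receives most of the query attention, so it alone survives pruning; in URaG the analogous selection happens inside the model, so retrieval and generation share one forward pass and can be trained end to end.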