URaG: Unified Retrieval and Generation in Multimodal LLMs for Efficient Long Document Understanding

📅 2025-11-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) face two challenges in long-document understanding: information interference from abundant irrelevant content, and the quadratic computational cost of Transformer architectures. To address these, the authors propose URaG, a framework that explicitly exploits the intrinsic coarse-to-fine cross-layer attention pattern of MLLMs. URaG repurposes the early Transformer layers as a lightweight self-retrieval module that identifies and keeps the most relevant multimodal (textual and visual) evidence during inference, discarding irrelevant content before the deeper layers process it. This removes the need for an external retrieval system while preserving fine-grained semantics, and it allows retrieval and generation to be optimized jointly end to end. On multiple long-document understanding benchmarks, URaG achieves state-of-the-art accuracy while reducing computational cost by 44-56%, improving efficiency and accuracy at the same time.
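The early-layer evidence selection described above can be sketched in a few lines: pool the attention mass that query tokens place on each page's token span at an early layer, then keep only the top-scoring pages for the deeper layers. This is an illustrative NumPy sketch; the function name, tensor shapes, and sum-pooling rule are assumptions for exposition, not URaG's actual retrieval module.

```python
import numpy as np

def select_evidence_pages(attn, page_spans, top_k=2):
    """Score each page by the attention mass it receives from the
    query tokens at an early layer, and keep the top_k pages.

    attn:       (num_query_tokens, num_doc_tokens) attention weights.
    page_spans: list of (start, end) token index pairs, one per page.
    Returns (kept page indices in document order, per-page scores).
    """
    # Total attention mass each page's token span receives.
    scores = np.array([attn[:, s:e].sum() for s, e in page_spans])
    # Indices of the top_k highest-scoring pages, restored to page order.
    keep = np.sort(np.argsort(scores)[-top_k:])
    return keep.tolist(), scores

# Toy example: 3 pages of 4 tokens each; the query tokens attend
# mostly to the third page (index 2).
attn = np.full((2, 12), 0.01)
attn[:, 8:12] = 0.2                          # strong attention on page 2
attn /= attn.sum(axis=1, keepdims=True)      # normalize rows
pages, scores = select_evidence_pages(attn, [(0, 4), (4, 8), (8, 12)], top_k=1)
print(pages)  # → [2]
```

Only the tokens of the kept pages would then be forwarded to the deeper layers, which is where the compute savings come from.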

๐Ÿ“ Abstract
Recent multimodal large language models (MLLMs) still struggle with long document understanding due to two fundamental challenges: information interference from abundant irrelevant content, and the quadratic computational cost of Transformer-based architectures. Existing approaches primarily fall into two categories: token compression, which sacrifices fine-grained details; and introducing external retrievers, which increase system complexity and prevent end-to-end optimization. To address these issues, we conduct an in-depth analysis and observe that MLLMs exhibit a human-like coarse-to-fine reasoning pattern: early Transformer layers attend broadly across the document, while deeper layers focus on relevant evidence pages. Motivated by this insight, we posit that the inherent evidence localization capabilities of MLLMs can be explicitly leveraged to perform retrieval during the reasoning process, facilitating efficient long document understanding. To this end, we propose URaG, a simple-yet-effective framework that Unifies Retrieval and Generation within a single MLLM. URaG introduces a lightweight cross-modal retrieval module that converts the early Transformer layers into an efficient evidence selector, identifying and preserving the most relevant pages while discarding irrelevant content. This design enables the deeper layers to concentrate computational resources on pertinent information, improving both accuracy and efficiency. Extensive experiments demonstrate that URaG achieves state-of-the-art performance while reducing computational overhead by 44-56%. The code is available at https://github.com/shi-yx/URaG.
Problem

Research questions and friction points this paper is trying to address.

Addressing information interference from irrelevant content in long documents
Reducing quadratic computational costs in Transformer-based MLLM architectures
Overcoming limitations of token compression and external retriever approaches
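The quadratic-cost point above can be made concrete: self-attention over n tokens does work proportional to n², so shrinking the sequence early makes the remaining layers much cheaper. A minimal arithmetic illustration (the token counts are invented, and this counts only the per-layer quadratic term, not the paper's end-to-end FLOP accounting, which reports 44-56% savings overall):

```python
def attn_cost(num_tokens):
    """Per-layer self-attention work grows quadratically with sequence
    length: every token attends to every other token."""
    return num_tokens ** 2

full = attn_cost(10_000)    # e.g. a 20-page document at ~500 tokens/page
pruned = attn_cost(2_500)   # after keeping only the most relevant pages

print(f"remaining per-layer cost: {pruned / full:.1%}")  # → 6.2%
```

The end-to-end saving is smaller than this per-layer ratio because the early layers still see the full document before pruning happens.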
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies retrieval and generation in single MLLM
Converts early layers into efficient evidence selector
Reduces computational overhead by 44-56% while improving accuracy