MURE: Hierarchical Multi-Resolution Encoding via Vision-Language Models for Visual Document Retrieval

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge in visual document retrieval (VDR): existing methods struggle to balance fine-grained information preservation with computational efficiency when processing high-resolution documents, often incurring excessive indexing overhead and retrieval latency due to the large number of visual tokens they produce. To overcome this, the authors propose X-VisEmb, a new visual encoding paradigm that progresses from multi-resolution sampling and encoding, through cross-granularity feature fusion, to adaptive representation distillation. Building on this paradigm, they develop MURE, a framework that employs a vision-language model as a hierarchical multi-resolution encoder, integrates resolution-level Matryoshka representation learning for feature fusion, and applies a semantic-aware hierarchical clustering mechanism for visual token compression. MURE achieves superior performance on two mainstream VDR benchmarks while using only half the visual token budget of ColPali.

📝 Abstract
Visual Document Retrieval (VDR) requires representations that capture both fine-grained visual details and global document structure to ensure retrieval efficacy while maintaining computational efficiency. Existing VDR models struggle to balance effectiveness and efficiency when processing high-resolution documents: they often either lose fine-grained information or generate an excessive number of visual tokens, resulting in significant indexing overhead and high retrieval latency. In this work, we rethink the visual encoding mechanism and propose a new X-VisEmb paradigm that progresses from multi-resolution sampling and encoding, through cross-granularity feature fusion, to adaptive representation distillation. A preliminary study validates its feasibility and effectiveness in capturing complementary visual cues at varying scales. Building on the insights, we develop MURE, a novel framework that employs VLMs as a hierarchical multi-resolution encoder, integrates resolution-level Matryoshka representation learning (RMRL) for effective feature fusion, and applies a semantic-aware hierarchical clustering mechanism for visual token compression. Experiments on two widely used VDR benchmarks show that our MURE framework consistently beats strong baselines. Furthermore, it significantly outperforms ColPali with only 50% of its visual token budget.
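The abstract mentions resolution-level Matryoshka representation learning (RMRL) for fusing features across granularities. The paper's exact objective is not given on this page; as a rough illustration of the general Matryoshka idea only, the sketch below averages an in-batch InfoNCE loss over several nested embedding prefixes, so that truncated vectors remain usable for retrieval. The prefix dimensions, temperature, and loss form are all illustrative assumptions, not the paper's RMRL formulation.

```python
import numpy as np

def matryoshka_info_nce(query, doc, dims=(64, 128, 256), temperature=0.05):
    """Average an in-batch InfoNCE loss over nested prefix dimensions.

    query, doc: (batch, dim) arrays; row i of each is a matched pair,
    so the diagonal of the similarity matrix holds the positives.
    """
    losses = []
    n = query.shape[0]
    for d in dims:
        # Truncate to the first d dimensions, then re-normalize.
        q = query[:, :d] / np.linalg.norm(query[:, :d], axis=1, keepdims=True)
        k = doc[:, :d] / np.linalg.norm(doc[:, :d], axis=1, keepdims=True)
        logits = q @ k.T / temperature          # in-batch negatives
        # Numerically stable log-softmax cross-entropy on the diagonal.
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        losses.append(-log_probs[np.arange(n), np.arange(n)].mean())
    return float(np.mean(losses))

# Toy usage with random embeddings (illustration only).
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 256))
d = rng.standard_normal((8, 256))
loss = matryoshka_info_nce(q, d)
```

Training against every prefix simultaneously is what lets a single embedding be truncated at index time, trading accuracy for storage without re-encoding.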
Problem

Research questions and friction points this paper addresses.

Visual Document Retrieval
High-resolution documents
Visual token efficiency
Fine-grained details
Computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-resolution encoding
Vision-Language Models
Matryoshka representation learning
Visual token compression
Hierarchical clustering
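The framework compresses visual tokens via semantic-aware hierarchical clustering. The exact mechanism is not described on this page; a minimal stand-in, assuming cosine similarity between token embeddings and mean pooling of merged clusters, is a greedy agglomerative merge of the most similar pair until a target token budget is reached.

```python
import numpy as np

def compress_tokens(tokens, target):
    """Greedily merge the most cosine-similar pair of token embeddings
    (mean pooling) until only `target` tokens remain.

    tokens: iterable of (dim,) arrays; returns a (target, dim) array.
    Illustrative stand-in, not the paper's clustering algorithm.
    """
    toks = [np.asarray(t, dtype=float) for t in tokens]
    while len(toks) > target:
        X = np.stack(toks)
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        sim = Xn @ Xn.T
        np.fill_diagonal(sim, -np.inf)          # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        merged = (toks[i] + toks[j]) / 2.0      # mean-pool the closest pair
        toks = [t for k, t in enumerate(toks) if k not in (i, j)] + [merged]
    return np.stack(toks)

# Toy usage: compress 16 visual tokens down to a budget of 8.
rng = np.random.default_rng(1)
tokens = rng.standard_normal((16, 32))
compressed = compress_tokens(tokens, target=8)
```

Halving the token count this way is the kind of budget reduction the paper reports relative to ColPali, though its semantic-aware variant presumably guides the merges with learned signals rather than raw cosine similarity.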
👥 Authors

Fengbin Zhu, National University of Singapore (NLP, IR, LLMs, Document AI, AI + Finance)
Zijing Cai, University of Science and Technology of China
Yuzhe Wang, University of Science and Technology of China
Pengyang Shao, Hefei University of Technology (Recommender Systems, Cognitive Diagnosis)
Wenjie Wang, University of Science and Technology of China
Fuli Feng, University of Science and Technology of China
Richang Hong, Hefei University of Technology (Multimedia, Pattern Recognition)
Tat-Seng Chua, National University of Singapore (Multimedia Information Retrieval, Live Social Media Analysis)