🤖 AI Summary
This work addresses the high storage overhead of multimodal large language models in visual document retrieval, where representing each page with thousands of visual tokens hinders practical deployment. To overcome this challenge, the authors propose CausalEmbed, an approach that introduces an autoregressive generation paradigm into multi-vector visual document embeddings. CausalEmbed produces compact, well-structured representations in the latent space and is trained with a contrastive objective augmented by an iterative margin loss. It achieves efficient retrieval using only tens of visual tokens, a 30–155× reduction in token count compared to prior methods, while maintaining competitive performance across diverse backbones and benchmarks. The method also supports flexible scaling at inference time, enabling adaptive trade-offs between efficiency and accuracy.
📝 Abstract
Although Multimodal Large Language Models (MLLMs) have shown remarkable potential in Visual Document Retrieval (VDR) by generating high-quality multi-vector embeddings, the substantial storage overhead of representing a page with thousands of visual tokens limits their practicality in real-world applications. To address this challenge, we propose an autoregressive generation approach, CausalEmbed, for constructing multi-vector embeddings. By incorporating an iterative margin loss during contrastive training, CausalEmbed encourages the embedding model to learn compact and well-structured representations. Our method enables efficient VDR using only dozens of visual tokens, achieving a 30-155x reduction in token count while maintaining highly competitive performance across various backbones and benchmarks. Theoretical analysis and empirical results demonstrate the unique advantages of autoregressive embedding generation in terms of training efficiency and test-time scalability. As a result, CausalEmbed introduces a flexible test-time scaling strategy for multi-vector VDR representations and sheds light on the generative paradigm in multimodal document retrieval. Our code is available at https://github.com/Z1zs/Causal-Embed.
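To make the core ideas concrete, here is a minimal toy sketch of (a) generating multi-vector embeddings causally, where each new vector is conditioned on the vectors emitted so far, and (b) a contrastive hinge loss applied at every embedding prefix length with a margin that grows per step. This is only an illustration of the paradigm under stated assumptions: the functions `autoregressive_embeddings`, `maxsim`, and `iterative_margin_loss`, the feedback rule, and the linear margin schedule are all hypothetical stand-ins, not the paper's actual architecture or loss.

```python
import numpy as np

def autoregressive_embeddings(x, W, num_vectors=4):
    """Emit embedding vectors one at a time; each step feeds the previous
    vector back into the state (toy stand-in for the MLLM decoder)."""
    vecs = []
    state = x
    for _ in range(num_vectors):
        v = np.tanh(W @ state)            # toy "decoder step"
        v = v / np.linalg.norm(v)         # unit-normalize each vector
        vecs.append(v)
        state = 0.5 * state + 0.5 * v     # causal feedback of emitted vector
    return np.stack(vecs)                 # shape: (num_vectors, dim)

def maxsim(q, d):
    """Late-interaction score: each query vector matches its best
    document vector; the per-vector maxima are summed."""
    return float(np.sum(np.max(q @ d.T, axis=1)))

def iterative_margin_loss(q, d_pos, d_neg, base_margin=0.1):
    """Hinge loss evaluated at every prefix length k of the generated
    vectors, with margin k * base_margin (assumed schedule), so that
    even short prefixes must separate positives from negatives."""
    loss = 0.0
    for k in range(1, q.shape[0] + 1):
        s_pos = maxsim(q[:k], d_pos[:k])
        s_neg = maxsim(q[:k], d_neg[:k])
        loss += max(0.0, k * base_margin - (s_pos - s_neg))
    return loss
```

Because every prefix of the vector sequence is trained to be discriminative, retrieval can truncate the sequence at inference time, which is one way to realize the test-time trade-off between token count and accuracy described above.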