🤖 AI Summary
Existing vision-based document retrieval (VDR) methods rely on computationally intensive text–image contrastive training of dual encoders, which limits multilingual support and scalability to large document collections. This paper proposes a zero-shot generative VDR framework: a large vision-language model (VLM) generates fine-grained textual descriptions of document images, which are then embedded with off-the-shelf text encoders to achieve cross-modal alignment. The approach requires no contrastive learning or parameter fine-tuning, eliminating training overhead while natively supporting multilingual documents and scaling seamlessly to very large corpora. On the ViDoRe-v2 benchmark, the method achieves 63.4% nDCG@5, surpassing the strongest specialized multi-vector encoder and establishing a strong baseline for zero-shot, generative VDR.
📝 Abstract
Visual Document Retrieval (VDR) typically operates as text-to-image retrieval using specialized bi-encoders trained to directly embed document images. We revisit a zero-shot generate-and-encode pipeline: a vision-language model first produces a detailed textual description of each document image, which is then embedded by a standard text encoder. On the ViDoRe-v2 benchmark, the method reaches 63.4% nDCG@5, surpassing the strongest specialized multi-vector visual document encoder. It also scales better to large collections and offers broader multilingual coverage. Analysis shows that modern vision-language models capture complex textual and visual cues with sufficient granularity to act as a reusable semantic proxy. By offloading modality alignment to pretrained vision-language models, our approach removes the need for computationally intensive text–image contrastive training and establishes a strong zero-shot baseline for future VDR systems.
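The generate-and-encode structure described above can be sketched in a few lines. The sketch below is illustrative only: `describe_image` is a canned stand-in for the paper's VLM captioning step, and a bag-of-words `embed` function stands in for the off-the-shelf text encoder; the function names and sample documents are assumptions, not the paper's actual components.

```python
import math
from collections import Counter

def describe_image(doc_image_id: str) -> str:
    """Stand-in for the VLM step: in the real pipeline a vision-language
    model produces a detailed textual description of each document image.
    The canned strings below are purely illustrative."""
    canned = {
        "doc_a": "quarterly financial report table with revenue and profit figures",
        "doc_b": "architecture diagram of a transformer based retrieval system",
    }
    return canned[doc_image_id]

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for an off-the-shelf text encoder; any dense
    # (multilingual) text embedder would slot in here unchanged.
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(c * v[w] for w, c in u.items())
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def build_index(doc_ids):
    # Offline: describe every document image once, then embed the
    # descriptions. No contrastive training or fine-tuning is involved,
    # so adding documents is just more VLM inference plus text encoding.
    return {d: embed(describe_image(d)) for d in doc_ids}

def retrieve(query: str, index: dict, k: int = 1):
    # Online: embed the text query with the same encoder and rank
    # documents by cosine similarity in the shared text-embedding space.
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)
    return ranked[:k]

index = build_index(["doc_a", "doc_b"])
print(retrieve("revenue and profit report", index))  # → ['doc_a']
```

Because alignment happens entirely in text space, swapping in a stronger multilingual text encoder or a larger corpus requires no retraining of any component.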