🤖 AI Summary
Late-interaction mechanisms in visual document retrieval (VDR) lack systematic evaluation of their reproducibility, replicability, and empirical contribution.
Method: Building on the ColPali architecture, we conduct cross-model ablation studies under OCR-free settings across multiple vision-language models. We complement quantitative evaluation with query–patch attention visualization and robustness testing on text-dense datasets.
Contribution/Results: We demonstrate that late interaction does not rely on explicit text alignment but instead enables implicit semantic matching between query tokens and visually similar image patches. Empirically, it consistently improves mean average precision (mAP) by over 15% on standard benchmarks, while incurring ~3.2× higher inference overhead. It also transfers well to pure-text queries and remains robust at scale, retaining >92% retrieval accuracy as index size increases. These findings challenge conventional text-centric matching paradigms and provide new empirical grounding for efficient, late-interaction-based VDR modeling.
📝 Abstract
Visual Document Retrieval (VDR) is an emerging research area that focuses on encoding and retrieving document images directly, bypassing the dependence on Optical Character Recognition (OCR) for document search. A recent advance in VDR was introduced by ColPali, which significantly improved retrieval effectiveness through a late interaction mechanism, demonstrating substantial gains over baselines without late interaction on an established benchmark. In this study, we investigate the reproducibility and replicability of VDR methods with and without late interaction mechanisms by systematically evaluating their performance across multiple pre-trained vision-language models. Our findings confirm that late interaction yields considerable improvements in retrieval effectiveness; however, it also introduces computational inefficiencies during inference. Additionally, we examine the adaptability of VDR models to textual inputs and assess their robustness across text-intensive datasets within the proposed benchmark, particularly when scaling the indexing mechanism. Furthermore, we investigate the specific contributions of late interaction by examining query–patch matching in the context of visual document retrieval. We find that although query tokens cannot explicitly match image patches as in the text retrieval scenario, they tend to match patches containing visually similar tokens, or the patches surrounding them.
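The late-interaction scoring discussed above follows the MaxSim scheme popularized by ColBERT and adopted by ColPali: each query token embedding is matched against every image-patch embedding, the per-token maximum similarity is taken, and the maxima are summed into one document score. The sketch below illustrates this with NumPy on toy embeddings; the function names and dimensions are illustrative assumptions, not ColPali's actual implementation.

```python
import numpy as np

def late_interaction_score(query_emb: np.ndarray, patch_emb: np.ndarray) -> float:
    """MaxSim late-interaction score (ColBERT/ColPali-style), sketched.

    query_emb: (n_query_tokens, dim) L2-normalized query token embeddings.
    patch_emb: (n_patches, dim)      L2-normalized image-patch embeddings.
    Each query token contributes the similarity of its single best-matching
    patch; the per-token maxima are summed into one document score.
    """
    sim = query_emb @ patch_emb.T        # (n_query_tokens, n_patches) similarities
    return float(sim.max(axis=1).sum())  # max over patches, sum over query tokens

def rank_documents(query_emb: np.ndarray, docs: list[np.ndarray]) -> list[int]:
    """Rank documents (each a patch-embedding matrix) by descending score."""
    scores = [late_interaction_score(query_emb, d) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])
```

This per-token max is what makes the interaction "late": query and document are encoded independently (so patch embeddings can be pre-indexed), and the token-level matching happens only at scoring time, which is also the source of the inference overhead noted above.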