🤖 AI Summary
Slides, as multimodal documents, pose significant challenges for retrieval, particularly modality fragmentation and contextual information loss, thereby limiting the effectiveness of retrieval-augmented generation (RAG). To address this, we propose a lightweight and efficient slide retrieval framework. First, we employ vision-language models to generate high-quality captions, replacing raw image embeddings and substantially reducing storage overhead. Second, we systematically evaluate and compare multiple retrieval paradigms: caption-based retrieval, late-interaction visual models (e.g., ColPali), visual re-ranking, and hybrid retrieval combining BM25 with dense embeddings. Finally, we fuse results using Reciprocal Rank Fusion. Experimental results show that the caption-based approach achieves the best trade-off among retrieval quality (NDCG@10), inference efficiency, and storage cost: it reduces storage by 90% relative to full-image embeddings with less than 2% degradation in retrieval quality. This yields a practical, deployable solution for slide retrieval in RAG applications.
📝 Abstract
Slide decks, serving as digital reports that bridge presentation slides and written documents, are a prevalent medium for conveying information in both academic and corporate settings. Their multimodal nature, combining text, images, and charts, presents challenges for retrieval-augmented generation systems, where retrieval quality directly impacts downstream performance. Traditional approaches to slide retrieval often index each modality separately, which increases complexity and loses contextual information. This paper investigates methodologies for effective slide retrieval, including visual late-interaction embedding models such as ColPali, visual rerankers, and hybrid techniques that combine dense retrieval with BM25, further enhanced by textual rerankers and fusion methods such as Reciprocal Rank Fusion. A novel captioning pipeline based on vision-language models is also evaluated, demonstrating significantly reduced embedding storage requirements compared to visual late-interaction techniques, with comparable retrieval performance. Our analysis extends to the practical aspects of these methods, evaluating runtime performance and storage demands alongside retrieval efficacy, thus offering practical guidance for selecting and building efficient, robust slide retrieval systems for real-world applications.
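The Reciprocal Rank Fusion step mentioned above is simple enough to sketch directly. The snippet below is a minimal, illustrative implementation (the slide IDs and the conventional smoothing constant `k=60` are assumptions, not taken from the paper): each document's fused score is the sum of `1/(k + rank)` over the ranked lists it appears in, here fusing a hypothetical caption-based ranking with a BM25 ranking.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists: each doc scores sum of 1/(k + rank) across lists."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Higher fused score = better; ties keep insertion order.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-3 results from two retrievers for one query.
caption_hits = ["slide_3", "slide_7", "slide_1"]
bm25_hits = ["slide_3", "slide_9", "slide_7"]
fused = reciprocal_rank_fusion([caption_hits, bm25_hits])
# slide_3 ranks first in both lists, so it leads the fused ranking.
```

A key property of RRF is that it needs only ranks, not scores, so it fuses retrievers with incomparable score scales (e.g., BM25 vs. cosine similarity) without calibration.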