🤖 AI Summary
Existing retrieval-augmented generation approaches often treat the text and images in multimodal documents as isolated modalities, overlooking the complementary signal carried by cross-modal semantic alignment and layout consistency. This work proposes a multimodal retrieval framework grounded in Bayesian inference and Dempster-Shafer evidence theory, introducing an evidence fusion mechanism through which retrieved text and images mutually verify one another. By modeling posterior association probabilities sensitive to both semantic content and spatial layout, the method enables confidence-aware collaborative optimization across heterogeneous modalities. Extensive experiments demonstrate that the proposed approach significantly outperforms state-of-the-art methods on multiple multimodal benchmarks, substantially improving the accuracy and robustness of retrieval-augmented generation on text- and image-rich documents.
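The summary does not spell out the probabilistic model, but one minimal reading of a posterior association probability that is "sensitive to both semantic content and spatial layout" is a Bayes update treating a candidate pair's semantic similarity s and layout consistency ℓ as conditionally independent evidence for the association hypothesis. The form below is an illustrative assumption, not the paper's stated formulation:

```latex
% Sketch only: s = semantic similarity, \ell = layout consistency of a
% candidate text-image pair, treated as conditionally independent evidence
% given the association hypothesis.
P(\mathrm{assoc} \mid s, \ell)
  = \frac{P(s \mid \mathrm{assoc})\, P(\ell \mid \mathrm{assoc})\, P(\mathrm{assoc})}
         {\sum_{h \in \{\mathrm{assoc},\, \neg\mathrm{assoc}\}} P(s \mid h)\, P(\ell \mid h)\, P(h)}
```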
📝 Abstract
Retrieval-Augmented Generation (RAG) has become a pivotal paradigm for Large Language Models (LLMs), yet current approaches struggle with visually rich documents because they treat text and images as isolated retrieval targets. Methods that rely solely on cosine similarity often fail to capture the semantic reinforcement provided by cross-modal alignment and layout-induced coherence. To address these limitations, we propose BayesRAG, a novel multimodal retrieval framework grounded in Bayesian inference and Dempster-Shafer evidence theory. Unlike traditional approaches that rank candidates strictly by similarity, BayesRAG models the intrinsic cross-modal consistency of retrieved candidates as probabilistic evidence for refining retrieval confidence. Specifically, our method computes a posterior association probability for each combination of multimodal retrieval results, prioritizing text-image pairs that corroborate each other in both semantics and layout. Extensive experiments demonstrate that BayesRAG significantly outperforms state-of-the-art (SOTA) methods on challenging multimodal benchmarks. This study establishes a new paradigm for multimodal retrieval fusion: an evidence fusion mechanism that resolves the isolation of heterogeneous modalities and enhances the robustness of retrieval outcomes. Our code is available at https://github.com/TioeAre/BayesRAG.
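To make the evidence-fusion step concrete, the sketch below applies Dempster's rule of combination to one candidate text-image pair, treating its semantic similarity and layout consistency as two discounted evidence sources over the frame {associated, not associated}. This is a minimal illustration under stated assumptions: `Mass`, `evidence_from_score`, `pair_confidence`, and the reliability discounts are hypothetical names and values, not the formulation used in BayesRAG.

```python
from dataclasses import dataclass

@dataclass
class Mass:
    """Basic probability assignment over the frame {associated, not associated, Theta}."""
    a: float      # belief that the text-image pair is truly associated
    n: float      # belief that it is not associated
    theta: float  # mass left on the whole frame (ignorance)

def dempster_combine(m1: Mass, m2: Mass) -> Mass:
    """Dempster's rule of combination for the two-hypothesis frame."""
    # Conflict mass: one source supports the association, the other rejects it.
    k = m1.a * m2.n + m1.n * m2.a
    norm = 1.0 - k
    return Mass(
        a=(m1.a * m2.a + m1.a * m2.theta + m1.theta * m2.a) / norm,
        n=(m1.n * m2.n + m1.n * m2.theta + m1.theta * m2.n) / norm,
        theta=(m1.theta * m2.theta) / norm,
    )

def evidence_from_score(score: float, reliability: float) -> Mass:
    """Discount a consistency score in [0, 1] into a mass function.

    `reliability` caps how much weight the source gets; the remainder
    stays on Theta as explicit ignorance (standard Shafer discounting).
    """
    return Mass(a=reliability * score,
                n=reliability * (1.0 - score),
                theta=1.0 - reliability)

def pair_confidence(semantic_sim: float, layout_consistency: float) -> float:
    """Fused belief that a candidate text-image pair is truly associated."""
    m_sem = evidence_from_score(semantic_sim, reliability=0.9)        # assumed weight
    m_lay = evidence_from_score(layout_consistency, reliability=0.7)  # assumed weight
    return dempster_combine(m_sem, m_lay).a

# Agreeing evidence raises confidence; conflicting evidence suppresses it.
print(pair_confidence(0.85, 0.90))  # ~0.90
print(pair_confidence(0.85, 0.20))  # ~0.63, despite the high semantic score
```

Ranking candidates by the fused belief rather than raw cosine similarity is what lets conflicting layout evidence demote a pair even when its semantic score alone is high.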