BayesRAG: Probabilistic Mutual Evidence Corroboration for Multimodal Retrieval-Augmented Generation

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing retrieval-augmented generation approaches often treat text and images in multimodal documents as isolated modalities, neglecting the synergistic information embedded in cross-modal semantic alignment and layout consistency. This work proposes a novel multimodal retrieval framework grounded in Bayesian inference and Dempster-Shafer evidence theory, introducing for the first time an evidence fusion mechanism to establish cross-modal mutual verification. By modeling posterior association probabilities that are sensitive to both semantic content and spatial layout, the method enables confidence-aware collaborative optimization across heterogeneous modalities. Extensive experiments demonstrate that the proposed approach significantly outperforms state-of-the-art methods on multiple multimodal benchmarks, substantially enhancing the accuracy and robustness of retrieval-augmented generation in text-and-image-rich scenarios.

📝 Abstract
Retrieval-Augmented Generation (RAG) has become a pivotal paradigm for Large Language Models (LLMs), yet current approaches struggle with visually rich documents by treating text and images as isolated retrieval targets. Existing methods relying solely on cosine similarity often fail to capture the semantic reinforcement provided by cross-modal alignment and layout-induced coherence. To address these limitations, we propose BayesRAG, a novel multimodal retrieval framework grounded in Bayesian inference and Dempster-Shafer evidence theory. Unlike traditional approaches that rank candidates strictly by similarity, BayesRAG models the intrinsic consistency of retrieved candidates across modalities as probabilistic evidence to refine retrieval confidence. Specifically, our method computes the posterior association probability for combinations of multimodal retrieval results, prioritizing text-image pairs that mutually corroborate each other in terms of both semantics and layout. Extensive experiments demonstrate that BayesRAG significantly outperforms state-of-the-art (SOTA) methods on challenging multimodal benchmarks. This study establishes a new paradigm for multimodal retrieval fusion that effectively resolves the isolation of heterogeneous modalities through an evidence fusion mechanism and enhances the robustness of retrieval outcomes. Our code is available at https://github.com/TioeAre/BayesRAG.
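To make the evidence-fusion idea in the abstract concrete, here is a minimal, generic sketch of Dempster's rule of combination over a two-element frame {relevant, irrelevant}: two mass functions (one from text-side similarity, one from image/layout-side consistency) are fused into a refined belief that a candidate is relevant. All mass values below are invented for illustration, and this is not the paper's actual BayesRAG implementation.

```python
# Generic illustration of Dempster's rule of combination, the evidence-fusion
# mechanism family the abstract refers to. Mass values are hypothetical.

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:      # contradictory evidence accumulates as conflict mass
                conflict += v1 * v2
    norm = 1.0 - conflict  # renormalize after discarding conflicting mass
    return {h: v / norm for h, v in combined.items()}

R, I = frozenset({"rel"}), frozenset({"irr"})
RI = R | I  # total uncertainty (mass the source cannot assign)

# Hypothetical evidence from two modalities for one retrieved candidate:
m_text  = {R: 0.6, I: 0.1, RI: 0.3}   # text-side semantic similarity
m_image = {R: 0.5, I: 0.2, RI: 0.3}   # image/layout-side consistency

fused = dempster_combine(m_text, m_image)
print(round(fused[R], 3))  # fused belief in relevance
```

When both modalities independently lean toward relevance, the fused belief in {rel} exceeds either source's individual mass, which is the mutual-corroboration effect the paper exploits for ranking.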
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
Multimodal Retrieval
Cross-modal Alignment
Evidence Fusion
Layout Coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian inference
Dempster-Shafer evidence theory
multimodal retrieval
mutual corroboration
retrieval-augmented generation
Xuan Li
University of Science and Technology of China
Yining Wang
NLP Researcher, Unisound
Natural Language Processing, Machine Translation
Haocai Luo
University of Science and Technology of China
Shengping Liu
Unisound AI Technology Co., Ltd.
Jerry Liang
Unisound AI Technology Co., Ltd.
Ying Fu
Unisound AI Technology Co., Ltd.
Weihuang
Unisound AI Technology Co., Ltd.
Jun Yu
Unisound AI Technology Co., Ltd.
Junnan Zhu
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing