🤖 AI Summary
In real-world applications, cross-modal and multi-granularity knowledge demands cannot be adequately addressed by unimodal or homogeneous RAG systems. To bridge this gap, we propose the first general-purpose RAG framework tailored for heterogeneous multimodal and multi-granularity knowledge sources. Our method introduces two core innovations: (1) a modality-aware routing mechanism that enables query-driven, dynamic selection among diverse modalities; and (2) a hierarchical granularity indexing scheme coupled with cross-modal alignment training, which jointly mitigate modality gaps and granularity mismatches. The framework integrates modality-specific encoders with a dynamic routing decision module to support fine-grained, cross-modal knowledge retrieval and fusion. Evaluated on eight cross-modal benchmarks, our approach outperforms both unimodal and unified-embedding baselines, with substantial gains in factual accuracy and response adaptability.
📝 Abstract
Retrieval-Augmented Generation (RAG) has shown substantial promise in improving factual accuracy by grounding model responses in external knowledge relevant to the query. However, most existing RAG approaches are limited to a text-only corpus, and while recent efforts have extended RAG to other modalities such as images and videos, they typically operate over a single modality-specific corpus. In contrast, real-world queries vary widely in the type of knowledge they require, which no single type of knowledge source can address. To address this, we introduce UniversalRAG, a novel RAG framework designed to retrieve and integrate knowledge from heterogeneous sources with diverse modalities and granularities. Specifically, motivated by the observation that forcing all modalities into a unified representation space derived from a single combined corpus causes a modality gap, where retrieval tends to favor items from the same modality as the query, we propose a modality-aware routing mechanism that dynamically identifies the most appropriate modality-specific corpus and performs targeted retrieval within it. Beyond modality, we also organize each modality into multiple granularity levels, enabling fine-grained retrieval tailored to the complexity and scope of the query. We validate UniversalRAG on 8 benchmarks spanning multiple modalities, showing its superiority over modality-specific and unified baselines.
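The route-then-retrieve flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual router and retrievers are learned models with modality-specific encoders, whereas the keyword rules, granularity heuristic, and token-overlap scorer below are stand-in assumptions, and all function and corpus names are hypothetical.

```python
# Illustrative sketch of modality-aware routing with granularity levels.
# Assumption: the real system uses a trained router and trained encoders;
# here, keyword cues and token overlap stand in for both.

MODALITY_CUES = {
    "image": ("image", "photo", "picture"),
    "video": ("video", "scene", "clip"),
}

def route(query):
    """Pick a modality-specific corpus and, where the modality has
    multiple granularity levels, a level suited to the query's scope.
    (Toy heuristic: short queries -> fine granularity.)"""
    q = query.lower()
    for modality, cues in MODALITY_CUES.items():
        if any(cue in q for cue in cues):
            if modality == "video":
                return "video", "clip" if len(q.split()) < 10 else "full"
            return modality, None
    return "text", "paragraph" if len(q.split()) < 10 else "document"

def retrieve(query, corpora, k=1):
    """Targeted retrieval within the routed corpus only, avoiding the
    modality gap of searching one unified embedding space."""
    modality, gran = route(query)
    pool = corpora[modality][gran] if gran else corpora[modality]
    # Stand-in scorer: token overlap instead of a trained dual encoder.
    q_tokens = set(query.lower().split())
    ranked = sorted(
        pool,
        key=lambda doc: len(q_tokens & set(doc.lower().split())),
        reverse=True,
    )
    return modality, ranked[:k]
```

A short query like "who wrote Hamlet" would be routed to the paragraph-level text corpus, while "show me a photo of the Eiffel Tower" would be routed to the image corpus; the retrieved items would then be passed to the generator alongside the query.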