🤖 AI Summary
To address insufficient information utilization and the difficulty of adaptively selecting optimal retrieval granularity in RAG systems—caused by structural heterogeneity across multi-source knowledge bases—this paper proposes Mix-of-Granularity (MoG). MoG introduces a query-driven granularity router and, for the first time, employs a soft-label loss function to enable end-to-end trainable, adaptive text chunking. The authors further extend MoG into the MoG-Graph framework, which models documents as graphs and leverages graph neural networks to support joint retrieval across granularities and across distantly located segments. Evaluated on multiple RAG benchmarks, both MoG and MoG-Graph achieve significant improvements in answer accuracy, empirically validating the effectiveness of dynamic granularity selection and graph-structured retrieval. The implementation is publicly available.
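The query-driven router described above can be pictured as a small classifier over granularity levels, trained with a cross-entropy loss against soft (rather than one-hot) targets. The sketch below is purely illustrative: the granularity names, `route`, and `soft_label_loss` are hypothetical stand-ins, not the paper's actual API or architecture.

```python
import math

# Illustrative granularity levels a router might choose among
# (hypothetical names, not from the paper).
GRANULARITIES = ["sentence", "paragraph", "section", "document"]

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(query_features, weights):
    """Score each granularity from query features (linear layer stand-in)
    and return a probability distribution over granularity levels."""
    scores = [sum(f * w for f, w in zip(query_features, ws)) for ws in weights]
    return softmax(scores)

def soft_label_loss(pred_probs, soft_labels, eps=1e-12):
    """Cross-entropy against soft labels: each granularity gets a target
    weight instead of a single hard 'correct' level, which gives the
    router smoother training signal."""
    return -sum(t * math.log(p + eps) for t, p in zip(soft_labels, pred_probs))
```

For example, `route([1.0, 0.5], weights)` with one weight row per granularity yields a distribution over the four levels, and the loss rewards putting mass on all levels that retrieval found useful, not just the single best one.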
📝 Abstract
Integrating information from various reference databases is a major challenge for Retrieval-Augmented Generation (RAG) systems because each knowledge source adopts a unique data structure and follows different conventions. Retrieving from multiple knowledge sources with one fixed strategy usually leads to under-exploitation of information. To mitigate this drawback, inspired by Mixture-of-Experts, we introduce Mix-of-Granularity (MoG), a method that uses a router to dynamically determine the optimal retrieval granularity of a knowledge source based on the input query. The router is efficiently trained with a newly proposed loss function employing soft labels. We further extend MoG to MoG-Graph (MoGG), where reference documents are pre-processed into graphs, enabling the retrieval of distantly situated snippets. Experiments demonstrate that MoG and MoGG effectively predict optimal granularity levels, significantly enhancing the performance of the RAG system in downstream tasks. The code for both MoG and MoGG is released at https://github.com/ZGChung/Mix-of-Granularity.
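The MoGG idea of pre-processing documents into graphs so that distantly situated snippets become reachable can be sketched as a neighborhood expansion: snippets are nodes, edges connect related snippets, and retrieval grows outward from the best-matching node. Everything below is a hypothetical illustration, not the paper's implementation (which uses graph neural networks rather than plain traversal).

```python
from collections import deque

def expand_retrieval(graph, seed, hops=2):
    """Collect all snippet IDs within `hops` edges of a seed snippet.

    `graph` maps a snippet ID to a list of related snippet IDs; `seed`
    would in practice be the top hit from a similarity search. Expanding
    along edges lets retrieval reach snippets that are far apart in the
    original document but semantically linked.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen
```

With `hops=2`, a seed snippet pulls in its neighbors and their neighbors, so two related passages separated by many pages can be returned together as retrieval context.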