🤖 AI Summary
Large language models (LLMs) suffer from inaccurate and stale knowledge in domain-specific question answering. Method: This paper proposes a hybrid retrieval-augmented generation (RAG) framework that jointly leverages relational databases and unstructured documents, departing from conventional RAG systems that rely solely on unstructured text. It systematically characterizes the complementary patterns between these two knowledge sources and introduces a rule-driven, interpretable routing mechanism that integrates query-type analysis, semantic-similarity matching, and path-level meta-caching to enable dynamic, efficient, and evolvable knowledge-source selection. Contribution/Results: Through expert-rule evolution and feedback-driven optimization, the framework significantly outperforms both static and learned routing baselines across three domain QA benchmarks, improving accuracy and response efficiency at moderate computational overhead.
📝 Abstract
Large Language Models (LLMs) have shown remarkable performance on general Question Answering (QA), yet they often struggle in domain-specific scenarios where accurate and up-to-date information is required. Retrieval-Augmented Generation (RAG) addresses this limitation by enriching LLMs with external knowledge, but existing systems rely primarily on unstructured documents and largely overlook relational databases, which provide precise, timely, and efficiently queryable factual information and serve as indispensable infrastructure in domains such as finance, healthcare, and scientific research. Motivated by this gap, we conduct a systematic analysis that reveals three central observations: (i) databases and documents offer complementary strengths across queries; (ii) naively combining both sources introduces noise and cost without consistent accuracy gains; and (iii) selecting the most suitable source for each query is crucial for balancing effectiveness and efficiency. We further observe that query types align with retrieval paths in consistent, regular ways, suggesting that routing decisions can be effectively guided by systematic rules that capture these patterns. Building on these insights, we propose a rule-driven routing framework: a routing agent scores candidate augmentation paths against explicit rules and selects the most suitable one; a rule-making expert agent refines the rules over time using QA feedback to maintain adaptability; and a path-level meta-cache reuses past routing decisions for semantically similar queries to reduce latency and cost. Experiments on three QA benchmarks demonstrate that our framework consistently outperforms static strategies and learned routing baselines, achieving higher accuracy at moderate computational cost.
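The routing loop described above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's implementation: the rule contents, path names, cache threshold, and the token-overlap similarity (a stand-in for the semantic matching the abstract mentions) are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Candidate augmentation paths: relational database, documents, or both.
PATHS = ("db", "docs", "hybrid")

# Hypothetical explicit rules: (predicate over the query, path voted for, weight).
# In the paper these rules are refined over time by a rule-making expert agent.
RULES = [
    (lambda q: any(k in q for k in ("average", "count", "latest price")), "db", 2.0),
    (lambda q: any(k in q for k in ("why", "explain", "summarize")), "docs", 2.0),
    (lambda q: "compare" in q, "hybrid", 1.5),
]

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity; a cheap stand-in for embedding similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

@dataclass
class Router:
    threshold: float = 0.8                      # cache-hit similarity cutoff
    cache: list = field(default_factory=list)   # path-level meta-cache: (query, path)

    def route(self, query: str) -> str:
        # 1) Meta-cache lookup: reuse the path chosen for a similar past query.
        for past_query, path in self.cache:
            if jaccard(query, past_query) >= self.threshold:
                return path
        # 2) Rule-based scoring: each fired rule adds its weight to one path.
        scores = {p: 0.0 for p in PATHS}
        for predicate, path, weight in RULES:
            if predicate(query.lower()):
                scores[path] += weight
        # Fall back to document retrieval when no rule fires.
        best = max(PATHS, key=scores.get) if any(scores.values()) else "docs"
        self.cache.append((query, best))
        return best
```

A repeated or near-duplicate query skips rule evaluation entirely via the cache, which is the latency/cost saving the abstract attributes to path-level meta-caching.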