🤖 AI Summary
This work addresses the limitations of existing retrieval-augmented generation (RAG) approaches, which often retrieve contextually irrelevant content and thereby compromise answer quality. To overcome this, the authors propose Structured-Semantic RAG (SSRAG), a novel framework that integrates query augmentation, an agentic routing mechanism, and a hybrid vector-graph retrieval strategy. SSRAG further incorporates a context unification step to enhance both semantic coherence and structural relevance in the retrieved results. Evaluated on three benchmark datasets — TruthfulQA, SQuAD, and WikiQA — the method consistently outperforms standard RAG across five prominent large language models, improving both the accuracy and the informativeness of generated answers.
📝 Abstract
Retrieval-Augmented Generation (RAG) has emerged as a powerful technique for enhancing the quality of responses in Question-Answering (QA) tasks. However, existing approaches often struggle to retrieve contextually relevant information, leading to incomplete or suboptimal answers. In this paper, we introduce Structured-Semantic RAG (SSRAG), a hybrid architecture that enhances QA quality by integrating query augmentation, agentic routing, and a structured retrieval mechanism that combines vector- and graph-based techniques with context unification. By refining the retrieval process and improving contextual grounding, our approach improves both answer accuracy and informativeness. We conduct extensive evaluations on three popular QA datasets — TruthfulQA, SQuAD, and WikiQA — across five Large Language Models (LLMs), demonstrating that our proposed approach consistently improves response quality over standard RAG implementations.
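The pipeline the abstract describes — query augmentation, agentic routing, hybrid vector/graph retrieval, and context unification — can be sketched end to end. This is a minimal illustrative sketch, not the authors' implementation: every function name, the keyword-based router, the bag-of-words scorer standing in for dense vector search, and the tiny dictionary standing in for a knowledge graph are all assumptions made here for clarity.

```python
# Toy document store and knowledge graph (illustrative stand-ins; the paper
# does not specify its storage backends).
CORPUS = {
    "d1": "Paris is the capital of France.",
    "d2": "The Eiffel Tower is located in Paris.",
    "d3": "France borders Spain and Germany.",
}
GRAPH = {  # entity -> documents linked to it (stands in for graph retrieval)
    "paris": ["d1", "d2"],
    "france": ["d1", "d3"],
}

def augment_query(query: str) -> list[str]:
    # Query augmentation: add a trivial normalized variant.
    # A real system would likely generate paraphrases with an LLM.
    return [query, query.lower().replace("?", "")]

def route(query: str) -> str:
    # Agentic routing stand-in: relation-style questions go to the graph,
    # everything else to the vector store. A real router would be an LLM agent.
    relational = ("related", "borders", "connected")
    return "graph" if any(w in query.lower() for w in relational) else "vector"

def vector_retrieve(queries: list[str], k: int = 2) -> list[str]:
    # Bag-of-words overlap as a crude stand-in for dense vector similarity.
    def score(doc: str) -> int:
        tokens = set(doc.lower().split())
        return max(len(tokens & set(q.lower().split())) for q in queries)
    return sorted(CORPUS, key=lambda d: score(CORPUS[d]), reverse=True)[:k]

def graph_retrieve(queries: list[str]) -> list[str]:
    # Match entities mentioned in any query variant and follow their edges.
    hits: list[str] = []
    for q in queries:
        for entity, docs in GRAPH.items():
            if entity in q.lower():
                hits.extend(docs)
    return hits

def unify_context(doc_ids: list[str]) -> str:
    # Context unification: de-duplicate while preserving retrieval order,
    # then join into a single grounded context string for the LLM prompt.
    seen: set[str] = set()
    ordered = [d for d in doc_ids if not (d in seen or seen.add(d))]
    return " ".join(CORPUS[d] for d in ordered)

def ssrag_context(query: str) -> str:
    queries = augment_query(query)
    if route(query) == "graph":
        doc_ids = graph_retrieve(queries)
    else:
        doc_ids = vector_retrieve(queries)
    return unify_context(doc_ids)  # this context would be passed to the LLM
```

A relational question such as "What borders France?" is routed to the graph retriever, while "What is the capital of France?" falls through to the vector path; both end in the same unification step, which is the point of the hybrid design.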