🤖 AI Summary
This work addresses the challenge posed by multi-source heterogeneous retrieved documents, whose inconsistencies in style, format, and granularity often introduce redundant or irrelevant context, thereby compromising the factual consistency of generated answers. To mitigate this issue, the paper proposes a concept-oriented context reconstruction approach built on Abstract Meaning Representation (AMR): a concept distillation algorithm extracts and integrates core semantic concepts from multiple documents, reconstructing them into a knowledge-dense and structurally unified context. Linguistically grounded and modular by design, the method preserves essential knowledge while supplying only the syntactic structure needed to form coherent text. Empirical results demonstrate significant performance gains over existing approaches on the PopQA and EntityQuestions benchmarks, with consistent robustness and generalizability across diverse large language model backbones.
📝 Abstract
Retrieval-augmented generation (RAG) has shown promising results in enhancing question answering (Q&A) by incorporating information from the web and other external sources. However, supporting documents retrieved from the heterogeneous web often originate from multiple sources with diverse writing styles, varying formats, and inconsistent granularity. Fusing such multi-source documents into a coherent and knowledge-intensive context remains a significant challenge, as irrelevant and redundant information can compromise the factual consistency of the inferred answers. This paper proposes Concept-oriented Context Reconstruction RAG (CoCR-RAG), a framework that addresses the multi-source information fusion problem in RAG through linguistically grounded concept-level integration. Specifically, we introduce a concept distillation algorithm that extracts essential concepts from Abstract Meaning Representation (AMR), a stable semantic representation that structures the meaning of texts as logical graphs. The distilled concepts from multiple retrieved documents are then fused and reconstructed into a unified, information-intensive context by large language models, which supplement only the sentence elements necessary to highlight the core knowledge. Experiments on the PopQA and EntityQuestions datasets demonstrate that CoCR-RAG significantly outperforms existing context-reconstruction methods on these Web Q&A benchmarks. Furthermore, CoCR-RAG remains robust across various backbone LLMs, establishing itself as a flexible, plug-and-play component adaptable to different RAG frameworks.
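The distill-then-fuse pipeline described above can be sketched in miniature. This is an illustrative sketch only, not the paper's algorithm: real AMR parses come from an AMR parser, whereas here each "AMR graph" is a toy dictionary, and the concept filter, function names, and skip list are all hypothetical stand-ins.

```python
def distill_concepts(amr_graph):
    """Collect concept nodes, skipping AMR connective/role labels
    (the skip list here is an assumed, simplified filter)."""
    skip = {"and", "or", "amr-unknown"}
    return [node["concept"] for node in amr_graph["nodes"]
            if node["concept"] not in skip]

def fuse_documents(amr_graphs):
    """Merge distilled concepts from several documents, deduplicating
    while preserving first-seen order (a stand-in for the fusion step)."""
    seen, fused = set(), []
    for graph in amr_graphs:
        for concept in distill_concepts(graph):
            if concept not in seen:
                seen.add(concept)
                fused.append(concept)
    return fused

# Two toy "parses" of retrieved documents about the same entity.
doc1 = {"nodes": [{"concept": "city"}, {"concept": "Paris"},
                  {"concept": "capital"}]}
doc2 = {"nodes": [{"concept": "Paris"}, {"concept": "France"},
                  {"concept": "and"}]}

fused = fuse_documents([doc1, doc2])
print(fused)  # ['city', 'Paris', 'capital', 'France']
```

In CoCR-RAG, the fused concept set would then be handed to an LLM, which adds only the minimal sentence elements needed to rewrite the concepts into one compact, coherent context.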