AI Summary
RAG systems frequently suffer from hallucinations and factual inaccuracies due to retrieval of irrelevant or weakly relevant passages; existing approaches are limited to coarse-grained, document-level filtering, resulting in low precision. This paper proposes the first LLM-driven, fine-grained block-level filtering framework, integrating semantic chunking, query-block alignment scoring, and LLM-based re-ranking to enable precise noise suppression prior to generation. Its core innovations are: (1) dynamic relevance assessment and filtering at the semantic block level, departing from conventional document-level paradigms; and (2) a lightweight LLM agent for efficient, interpretable block re-ranking and adaptive threshold-based pruning. Experiments demonstrate substantial hallucination reduction and consistent superiority over mainstream RAG baselines across multi-hop reasoning and fact-checking tasks, with an average 12.7% improvement in factual accuracy.
Abstract
Retrieval-Augmented Generation (RAG) systems using large language models (LLMs) often generate inaccurate responses due to the retrieval of irrelevant or loosely related information. Existing methods, which operate at the document level, fail to effectively filter out such content. We propose ChunkRAG, an LLM-driven chunk-filtering framework that enhances RAG systems by evaluating and filtering retrieved information at the chunk level. Our approach employs semantic chunking to divide documents into coherent sections and utilizes LLM-based relevance scoring to assess each chunk's alignment with the user's query. By filtering out less pertinent chunks before the generation phase, we significantly reduce hallucinations and improve factual accuracy. Experiments show that our method outperforms existing RAG models, achieving higher accuracy on tasks requiring precise information retrieval. This advancement enhances the reliability of RAG systems, making them particularly beneficial for applications like fact-checking and multi-hop reasoning.
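The pipeline the abstract describes — chunk documents into coherent pieces, score each chunk against the query, and drop low-scoring chunks before generation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: real semantic chunking uses embeddings or an LLM rather than fixed sentence windows, and the relevance scorer here is a toy lexical-overlap stand-in for the paper's LLM-based scoring; all function names and the threshold value are illustrative assumptions.

```python
import re


def chunk_document(text: str, max_sentences: int = 3) -> list[str]:
    """Split text into small sentence groups.

    A simple stand-in for semantic chunking: the paper groups
    semantically coherent content, not fixed-size windows.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]


def relevance_score(query: str, chunk: str) -> float:
    """Toy Jaccard word-overlap score.

    Stands in for LLM-based relevance scoring of each chunk
    against the user's query.
    """
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0


def filter_chunks(query: str, chunks: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only chunks whose relevance clears the threshold,
    so loosely related text never reaches the generator."""
    scored = sorted(((relevance_score(query, ch), ch) for ch in chunks), reverse=True)
    return [ch for score, ch in scored if score >= threshold]


if __name__ == "__main__":
    query = "who invented the telephone"
    chunks = [
        "Alexander Graham Bell invented the telephone in 1876.",
        "Bananas are rich in potassium.",
    ]
    print(filter_chunks(query, chunks))  # only the Bell chunk survives
```

Swapping `relevance_score` for a call to an LLM (prompted to rate query-chunk alignment) recovers the shape of the actual method, with the threshold tuned rather than fixed.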