🤖 AI Summary
Existing RAG evaluation benchmarks suffer from evidence sparsity, hindering effective assessment of document chunking quality. To address this, we propose HiChunk—a hierarchical chunking framework that leverages fine-tuned large language models for fine-grained structural parsing—together with Auto-Merge, a novel retrieval algorithm that dynamically fuses multi-granularity chunks. Concurrently, we introduce HiCBench, the first dedicated benchmark supporting multi-level evaluation, constructed from expert-annotated chunking points and synthetically generated, evidence-dense QA pairs spanning paragraph-, sentence-, and phrase-level granularity. Experiments demonstrate that HiCBench precisely characterizes the impact of diverse chunking strategies across the entire RAG pipeline, while HiChunk achieves significant improvements in chunk coherence and end-to-end generation performance with reasonable computational overhead.
📝 Abstract
Retrieval-Augmented Generation (RAG) enhances the response capabilities of language models by integrating external knowledge sources. However, document chunking, an important component of RAG systems, often lacks effective evaluation tools. This paper first analyzes why existing RAG evaluation benchmarks are inadequate for assessing document chunking quality, specifically due to evidence sparsity. Based on this conclusion, we propose HiCBench, which includes manually annotated multi-level document chunking points, synthesized evidence-dense question-answer (QA) pairs, and their corresponding evidence sources. Additionally, we introduce the HiChunk framework, a multi-level document structuring framework based on fine-tuned LLMs, combined with the Auto-Merge retrieval algorithm to improve retrieval quality. Experiments demonstrate that HiCBench effectively evaluates the impact of different chunking methods across the entire RAG pipeline. Moreover, HiChunk achieves better chunking quality within reasonable time consumption, thereby enhancing the overall performance of RAG systems.
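The abstract does not spell out how Auto-Merge fuses multi-granularity chunks, but the core idea of merging retrieved fine-grained chunks up a document hierarchy can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Node` tree, the coverage `threshold`, and the bottom-up merge rule (promote retrieved children to their parent when enough siblings were retrieved) are all assumptions for exposition.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a hypothetical hierarchical chunk tree (document -> sections -> leaf chunks)."""
    text: str
    children: list = field(default_factory=list)

def auto_merge(root: Node, retrieved: set, threshold: float = 0.6) -> set:
    """Illustrative merge rule (NOT the paper's exact algorithm):
    walking the tree bottom-up, if at least `threshold` of a node's
    children were retrieved, replace those children with the parent,
    so the generator sees one coherent coarser chunk."""
    merged = set(retrieved)

    def walk(node: Node) -> None:
        for child in node.children:
            walk(child)  # merge deeper levels first
        if node.children:
            hits = [c for c in node.children if c.text in merged]
            if len(hits) / len(node.children) >= threshold:
                merged.difference_update(c.text for c in hits)
                merged.add(node.text)

    walk(root)
    return merged

# Toy hierarchy: a document with two sections and five leaf chunks.
doc = Node("DOC", [
    Node("S1", [Node("a"), Node("b"), Node("c")]),
    Node("S2", [Node("d"), Node("e")]),
])

# Two of S1's three leaves were retrieved, so they merge into S1;
# the lone hit in S2 stays at leaf granularity.
print(auto_merge(doc, {"a", "b", "d"}))
```

Under this toy rule the retrieval result mixes granularities (a whole section plus a single leaf), which is the kind of multi-granularity fusion the abstract attributes to Auto-Merge.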