🤖 AI Summary
Existing RAG research lacks a quantifiable, cross-domain evaluation framework for text chunking strategies, particularly overlooking systematic analysis of LLMs’ sensitivity to chunk structure.
Method: We propose HOPE (Holistic Passage Evaluation), the first domain-agnostic, fully automated framework for assessing chunk quality without human annotation. HOPE characterizes passages at three levels (intrinsic passage properties, extrinsic passage properties, and passage-document coherence) and quantifies them through LLM-based multi-dimensional semantic analysis with hierarchical metric aggregation.
Results: Validated across seven domains, HOPE correlates significantly (ρ > 0.13) with RAG performance indicators. Semantic independence between passages proves decisive, yielding gains of up to 56.2% in factual correctness and 21.1% in answer correctness, while concept unity within passages has minimal impact, challenging conventional chunking assumptions.
📝 Abstract
Document chunking fundamentally impacts Retrieval-Augmented Generation (RAG) by determining how source materials are segmented before indexing. Despite evidence that Large Language Models (LLMs) are sensitive to the layout and structure of retrieved data, there is currently no framework to analyze the impact of different chunking methods. In this paper, we introduce a novel methodology that defines essential characteristics of the chunking process at three levels: intrinsic passage properties, extrinsic passage properties, and passage-document coherence. We propose HOPE (Holistic Passage Evaluation), a domain-agnostic, automatic evaluation metric that quantifies and aggregates these characteristics. Our empirical evaluations across seven domains demonstrate that the HOPE metric correlates significantly (ρ > 0.13) with various RAG performance indicators, revealing contrasts between the importance of extrinsic and intrinsic properties of passages. Semantic independence between passages proves essential for system performance, with gains of up to 56.2% in factual correctness and 21.1% in answer correctness. By contrast, traditional assumptions about maintaining concept unity within passages show minimal impact. These findings provide actionable insights for optimizing chunking strategies, thus improving RAG system design to produce more factually correct responses.
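The three-level evaluation described above can be sketched as a toy scoring function. This is a minimal illustration, not the paper's method: bag-of-words cosine similarity stands in for HOPE's LLM-based semantic analysis, and the aggregation (a plain mean of dimension averages) is an assumed placeholder for its hierarchical metric aggregation.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    # Bag-of-words vector; a crude stand-in for an LLM embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hope_like_score(chunks: list[str]) -> float:
    """Toy aggregate over three assumed dimensions:
    extrinsic: semantic independence from the neighboring passage,
    passage-document coherence: similarity to the whole document.
    (Intrinsic properties are omitted here; they would need an LLM judge.)
    Returns a value in [0, 1]."""
    doc_vec = bow(" ".join(chunks))
    vecs = [bow(c) for c in chunks]
    # Independence between adjacent passages: 1 - similarity.
    indep = [1 - cosine(vecs[i], vecs[i + 1])
             for i in range(len(vecs) - 1)] or [1.0]
    # Coherence of each passage with the full document.
    coher = [cosine(v, doc_vec) for v in vecs]
    dims = [sum(indep) / len(indep), sum(coher) / len(coher)]
    return sum(dims) / len(dims)  # assumed aggregation: simple mean
```

For example, `hope_like_score(["cats purr softly", "dogs bark loudly", "cats and dogs are pets"])` returns a single quality score in [0, 1]; a real HOPE-style implementation would replace the bag-of-words proxies with LLM-based judgments per dimension.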