🤖 AI Summary
Traditional fixed-length and recursive chunking methods in retrieval-augmented generation (RAG) often disrupt semantic coherence, while the impact of semantic chunking on generation quality remains systematically unassessed. This paper introduces two domain-aware semantic chunking methods—Projected Similarity Chunking (PSC) and Metric Fusion Chunking (MFC)—and presents the first systematic investigation into how semantic chunking jointly affects retrieval accuracy and generation quality, including its cross-domain generalizability. We establish a multi-dimensional evaluation framework using PubMedQA and full-text PMC documents, integrating diverse embedding models. Experiments demonstrate that our methods achieve up to a 24× improvement in Mean Reciprocal Rank (MRR), significant gains in Hits@k, faster inference than mainstream chunking libraries, and superior generation quality across multiple benchmarks. The proposed approaches provide a reproducible, generalizable technical pathway for semantic chunking in RAG.
📝 Abstract
Document chunking is a crucial component of Retrieval-Augmented Generation (RAG), as it directly affects the retrieval of relevant and precise context. Conventional fixed-length and recursive splitters often produce arbitrary, incoherent segments that fail to preserve semantic structure. Although semantic chunking has gained traction, its influence on generation quality remains underexplored. This paper introduces two efficient semantic chunking methods, Projected Similarity Chunking (PSC) and Metric Fusion Chunking (MFC), trained on PubMed data using three different embedding models. We further present an evaluation framework that measures the effect of chunking on both retrieval and generation by augmenting PubMedQA with full-text PubMed Central articles. Our results show substantial retrieval improvements (24x with PSC) in MRR and higher Hits@k on PubMedQA. We provide a comprehensive analysis, including statistical significance and response-time comparisons with common chunking libraries. Despite being trained on a single domain, PSC and MFC also generalize well, achieving strong out-of-domain generation performance across multiple datasets. Overall, our findings confirm that our semantic chunkers, especially PSC, consistently deliver superior performance.