QChunker: Learning Question-Aware Text Chunking for Domain RAG via Multi-Agent Debate

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of traditional RAG systems: text chunking often yields poor semantic coherence and suboptimal information granularity, which degrades generation quality. The authors propose a question-driven multi-agent debate framework that formulates chunking as a joint task of segmentation and knowledge completion. The framework integrates a question outline generator, a text segmenter, an integrity reviewer, and a knowledge completer, augmented by document-outline-guided multi-path sampling and a small-model capability transfer mechanism. To evaluate chunk quality directly, the authors introduce a novel metric, ChunkScore, and release a high-quality dataset of 45K annotated chunks. Extensive experiments across four heterogeneous domains show that the proposed method significantly improves the logical coherence and informational richness of the generated chunks and generalizes well.

📝 Abstract
The effectiveness upper bound of retrieval-augmented generation (RAG) is fundamentally constrained by the semantic integrity and information granularity of the text chunks in its knowledge base. To address these challenges, this paper proposes QChunker, which restructures the RAG paradigm from retrieval-augmentation to understanding-retrieval-augmentation. First, QChunker models text chunking as a composite task of text segmentation and knowledge completion to ensure the logical coherence and integrity of text chunks. Drawing inspiration from Hal Gregersen's "Questions Are the Answer" theory, we design a multi-agent debate framework comprising four specialized components: a question outline generator, a text segmenter, an integrity reviewer, and a knowledge completer. This framework operates on the principle that questions serve as catalysts for profound insights. Through this pipeline, we construct a high-quality dataset of 45K entries and transfer this capability to small language models. Additionally, to address the long evaluation chains and low efficiency of existing chunking evaluation methods, which rely heavily on downstream QA tasks, we introduce a novel direct evaluation metric, ChunkScore. Both theoretical and experimental validation demonstrates that ChunkScore can directly and efficiently discriminate the quality of text chunks. Furthermore, during the text segmentation phase, we use document outlines for multi-path sampling to generate multiple candidate chunkings and select the optimal one using ChunkScore. Extensive experimental results across four heterogeneous domains show that QChunker resolves the aforementioned issues by providing RAG with more logically coherent and information-rich text chunks.
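The abstract's final step — generate multiple candidate chunkings via outline-guided multi-path sampling, then keep the one that ChunkScore rates highest — can be sketched as a simple selection loop. This is a minimal illustration, not the paper's implementation: `segmenter` and `chunk_score` are hypothetical stand-ins for the outline-guided segmenter and the ChunkScore metric, and the toy components below exist only to make the sketch runnable.

```python
from typing import Callable, List

def select_best_chunking(
    document: str,
    outline_paths: List[List[str]],
    segmenter: Callable[[str, List[str]], List[str]],  # stand-in for the outline-guided segmenter
    chunk_score: Callable[[List[str]], float],         # stand-in for the ChunkScore metric
) -> List[str]:
    """Multi-path sampling: segment the document once per outline path,
    then keep the candidate chunking with the highest score."""
    best_chunks: List[str] = []
    best_score = float("-inf")
    for path in outline_paths:
        candidate = segmenter(document, path)
        score = chunk_score(candidate)
        if score > best_score:
            best_chunks, best_score = candidate, score
    return best_chunks

# Toy demo: each "outline path" is just a delimiter to split on, and the
# score penalizes variance in chunk length (a crude uniformity proxy).
def toy_segmenter(doc: str, path: List[str]) -> List[str]:
    return [c for c in doc.split(path[0]) if c]

def toy_score(chunks: List[str]) -> float:
    mean = sum(len(c) for c in chunks) / len(chunks)
    return -sum((len(c) - mean) ** 2 for c in chunks)

doc = "intro.methods.results|all-in-one"
paths = [["."], ["|"]]
best = select_best_chunking(doc, paths, toy_segmenter, toy_score)
```

In the toy run, splitting on `"|"` yields more evenly sized chunks and is selected; the paper's actual pipeline would substitute an LLM-based segmenter and the learned ChunkScore in place of these stand-ins.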
Problem

Research questions and friction points this paper is trying to address.

retrieval-augmented generation
text chunking
semantic integrity
information granularity
RAG
Innovation

Methods, ideas, or system contributions that make the work stand out.

question-aware chunking
multi-agent debate
retrieval-augmented generation
ChunkScore
knowledge completion
Jihao Zhao
School of Information, Renmin University of China
Daixuan Li
School of Smart Governance, Renmin University of China
Pengfei Li
School of Information, Renmin University of China
Shuaishuai Zu
School of Information, Renmin University of China
Biao Qin
School of Information, BRAIN, Renmin University of China
Hongyan Liu
Zhejiang University