🤖 AI Summary
This work addresses a limitation of existing Retrieval-Augmented Generation (RAG) approaches to academic papers: they often disregard the papers' inherent hierarchical structure, leading to fragmented retrieval contexts, imprecise evidence localization, and increased reasoning burden. To overcome this, the authors propose PT-RAG, a framework that leverages the native paper hierarchy as a low-entropy retrieval prior. PT-RAG constructs a structure-preserving PaperTree index and introduces a path-guided retrieval mechanism that selects highly relevant root-to-leaf paths within a fixed token budget, yielding compact and coherent contexts. The study also introduces entropy-based structural diagnostic metrics to quantify retrieval quality. Experimental results on three academic question-answering benchmarks show that PT-RAG substantially reduces section entropy and evidence-alignment cross-entropy while improving answer accuracy, demonstrating its advantages in contextual coherence and precise evidence localization.
📝 Abstract
Retrieval-augmented generation (RAG) is increasingly applied to question answering over long academic papers, where accurate evidence allocation under a fixed token budget is critical. Existing approaches typically flatten academic papers into unstructured chunks during preprocessing, which destroys the native hierarchical structure. This loss forces retrieval to operate in a disordered space, producing fragmented contexts, misallocating tokens to non-evidential regions under finite token budgets, and increasing the reasoning burden for downstream language models. To address these issues, we propose PT-RAG, a RAG framework that treats the native hierarchical structure of academic papers as a low-entropy retrieval prior. PT-RAG first inherits the native hierarchy to construct a structure-preserving PaperTree index, which prevents entropy increase at the source. It then applies a path-guided retrieval mechanism that aligns query semantics to relevant sections and selects high-relevance root-to-leaf paths under a fixed token budget, yielding compact, coherent, and low-entropy retrieval contexts. In contrast to existing RAG approaches, PT-RAG avoids the entropy increase caused by destructive preprocessing and provides a native low-entropy structural basis for subsequent retrieval. To assess this design, we introduce entropy-based structural diagnostics that quantify retrieval fragmentation and evidence allocation accuracy. On three academic question-answering benchmarks, PT-RAG achieves consistently lower section entropy and evidence-alignment cross-entropy than strong baselines, indicating reduced context fragmentation and more precise allocation to evidential regions. These structural advantages translate directly into higher answer quality.
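The abstract's core ideas (a hierarchy-preserving tree index, greedy selection of relevant root-to-leaf paths under a token budget, and a section-entropy diagnostic over retrieved chunks) can be illustrated with a minimal sketch. This is a hypothetical reconstruction under stated assumptions, not the paper's actual algorithm: the `Node` layout, the leaf-level relevance `score`, and the greedy budget rule are all illustrative choices.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of an assumed PaperTree (section or paragraph)."""
    title: str
    text: str = ""          # leaf content (empty for internal nodes)
    tokens: int = 0         # token count charged by this node
    score: float = 0.0      # query-relevance score (e.g., cosine similarity)
    children: list = field(default_factory=list)

def leaf_paths(node, prefix=()):
    """Enumerate all root-to-leaf paths in the tree."""
    path = prefix + (node,)
    if not node.children:
        yield path
    for child in node.children:
        yield from leaf_paths(child, path)

def select_paths(root, budget):
    """Greedily pick high-relevance root-to-leaf paths under a token budget.

    Paths are ranked by leaf relevance; a path's cost counts each node once,
    so shared ancestors are not double-charged across selected paths.
    """
    paths = sorted(leaf_paths(root), key=lambda p: p[-1].score, reverse=True)
    chosen, seen, used = [], set(), 0
    for path in paths:
        new = [n for n in path if id(n) not in seen]
        cost = sum(n.tokens for n in new)
        if used + cost <= budget:
            chosen.append(path)
            seen.update(id(n) for n in new)
            used += cost
    return chosen, used

def section_entropy(section_counts):
    """Shannon entropy (bits) of retrieved chunks over sections.

    Lower entropy means retrieval concentrates on fewer sections,
    i.e., a less fragmented context.
    """
    total = sum(section_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in section_counts.values() if c)
```

Under this sketch, a flat chunker that scatters retrieval across many sections yields high section entropy, while path-guided selection keeps each retrieved passage attached to its ancestor headings, concentrating the budget on coherent subtrees.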