🤖 AI Summary
Limited context windows in large language models (LLMs) degrade long-text reasoning performance. Existing retrieval-augmented generation (RAG) and divide-and-conquer framework (DCF) approaches struggle to preserve logical coherence and to model long-range dependencies at the same time. This paper proposes ToM, a document-structure-aware, Tree-oriented MapReduce framework. ToM first constructs a semantic document tree (DocTree) via hierarchical semantic parsing; it then recursively applies Map operations, in which leaf nodes generate local reasoning chains, and Reduce operations, in which internal nodes aggregate logically consistent conclusions from their children. Crucially, ToM explicitly incorporates document hierarchy into the reasoning process, mitigating the logical conflicts and dependency fragmentation inherent in conventional DCF. Experiments on 70B+ LLMs demonstrate that ToM significantly improves logical consistency and accuracy in long-text reasoning, outperforming state-of-the-art RAG and DCF methods. The implementation is publicly available.
📝 Abstract
Large Language Models (LLMs), constrained by limited context windows, often face significant performance degradation when reasoning over long contexts. To address this, Retrieval-Augmented Generation (RAG) retrieves and reasons over chunks but frequently sacrifices logical coherence due to its reliance on similarity-based rankings. Similarly, divide-and-conquer frameworks (DCF) split documents into small chunks for independent reasoning and aggregation. While effective for local reasoning, DCF struggles to capture long-range dependencies and risks inducing conflicts by processing chunks in isolation. To overcome these limitations, we propose ToM, a novel Tree-oriented MapReduce framework for long-context reasoning. ToM leverages the inherent hierarchical structure of long documents (e.g., main headings and subheadings) by constructing a DocTree through hierarchical semantic parsing and performing bottom-up aggregation. Using a Tree MapReduce approach, ToM enables recursive reasoning: in the Map step, rationales are generated at child nodes; in the Reduce step, these rationales are aggregated across sibling nodes to resolve conflicts or reach consensus at parent nodes. Experimental results on 70B+ LLMs show that ToM significantly outperforms existing divide-and-conquer frameworks and retrieval-augmented generation methods, achieving better logical coherence and stronger long-context reasoning. Our code is available at https://github.com/gjn12-31/ToM.
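The recursive Map/Reduce traversal over the DocTree can be sketched as follows. This is a minimal illustration, not the paper's implementation: `DocNode`, `map_step`, and `reduce_step` are hypothetical names, and the two step functions are stubs standing in for the LLM calls that would, respectively, generate a rationale at a leaf node and aggregate sibling rationales into a consensus at a parent node.

```python
from dataclasses import dataclass, field

@dataclass
class DocNode:
    """A node in the DocTree: a heading plus its local text and subsections."""
    title: str
    text: str = ""
    children: list["DocNode"] = field(default_factory=list)

def map_step(node: DocNode, question: str) -> str:
    # Stub for the Map step: in ToM this would prompt an LLM to produce
    # a local reasoning chain from the leaf's text for the given question.
    return f"[{node.title}] evidence: {node.text}"

def reduce_step(node: DocNode, child_rationales: list[str], question: str) -> str:
    # Stub for the Reduce step: in ToM this would prompt an LLM to resolve
    # conflicts among sibling rationales and emit a consensus conclusion.
    return " | ".join(child_rationales)

def tree_mapreduce(node: DocNode, question: str) -> str:
    """Bottom-up reasoning: Map at leaves, Reduce at each internal node."""
    if not node.children:
        return map_step(node, question)
    rationales = [tree_mapreduce(child, question) for child in node.children]
    return reduce_step(node, rationales, question)
```

For example, calling `tree_mapreduce` on a root with two leaf sections first maps each leaf to a local rationale, then reduces the pair at the root; deeper trees simply repeat this pattern level by level.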