🤖 AI Summary
To address insufficient reasoning depth and inefficient clarification in long-context, multi-hop question answering with large language models (LLMs), this paper proposes Agentic Long-Context Understanding (AgenticLU), built around a Chain-of-Clarifications (CoC) mechanism: the LLM autonomously generates targeted clarification questions and anchors them to critical context segments, dynamically refining its understanding within a tree-search framework. Scaling inference as this tree search yields a 97.8% answer recall rate on NarrativeQA. A two-stage self-teaching paradigm, supervised fine-tuning (SFT) followed by direct preference optimization (DPO) on preference pairs harvested from the search, then distills this capability into a single efficient inference pass. By coupling context-aware retrieval with self-reflective generation, the finetuned models consistently outperform state-of-the-art prompt engineering techniques and specialized long-context LLMs across seven long-context benchmark tasks. Crucially, performance remains stable as input length increases.
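The tree search sketched above can be made concrete. The following is a minimal illustrative sketch, not the paper's implementation: `propose` and `answers_correctly` are hypothetical stand-ins for the LLM's clarification generator and an answer checker, while the depth (3) and branching factor (8) match the search setup reported for NarrativeQA.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class CoCNode:
    clarification: str          # self-generated clarification question
    grounding: str              # context span retrieved to support it
    children: List["CoCNode"] = field(default_factory=list)

def coc_tree_search(
    question: str,
    context: str,
    propose: Callable[[str, str, List[CoCNode]], List[CoCNode]],
    answers_correctly: Callable[[str, List[CoCNode]], bool],
    depth: int = 3,
    branch: int = 8,
) -> Optional[List[CoCNode]]:
    """Return the first CoC chain (root-to-leaf path) that yields a correct
    answer, exploring up to `branch` clarifications per step and at most
    `depth` steps. Depth-first order; a real system might rank branches."""
    def search(path: List[CoCNode], d: int) -> Optional[List[CoCNode]]:
        if answers_correctly(question, path):
            return path
        if d == 0:
            return None
        for child in propose(question, context, path)[:branch]:
            found = search(path + [child], d - 1)
            if found is not None:
                return found
        return None
    return search([], depth)

# Toy usage with stubbed components (no LLM calls):
def propose(q, ctx, path):
    return [CoCNode(f"clarify-{len(path)}-{i}", ctx) for i in range(8)]

def check(q, path):
    # pretend only one specific two-step chain answers the question
    return len(path) == 2 and path[-1].clarification.endswith("-3")
```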
📝 Abstract
Answering complex, long-context questions remains a major challenge for large language models (LLMs), as it requires effective question clarification and context retrieval. We propose Agentic Long-Context Understanding (AgenticLU), a framework designed to enhance an LLM's understanding of such queries by integrating targeted self-clarification with contextual grounding within an agentic workflow. At the core of AgenticLU is Chain-of-Clarifications (CoC), where models refine their understanding through self-generated clarification questions and corresponding contextual groundings. By scaling inference as a tree search where each node represents a CoC step, we achieve 97.8% answer recall on NarrativeQA with a search depth of up to three and a branching factor of eight. To amortize the high cost of this search process into training, we leverage the preference pairs obtained at each step of the CoC workflow and perform two-stage model finetuning: (1) supervised finetuning to learn effective decomposition strategies, and (2) direct preference optimization to enhance reasoning quality. This enables AgenticLU models to generate clarifications and retrieve relevant context effectively and efficiently in a single inference pass. Extensive experiments across seven long-context tasks demonstrate that AgenticLU significantly outperforms state-of-the-art prompting methods and specialized long-context LLMs, achieving robust multi-hop reasoning while sustaining consistent performance as context length grows.
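The second training stage described above can be sketched as follows. This is a hedged illustration, not the paper's code: `preference_pairs` assumes sibling branches at one search step can be labeled by whether they eventually led to a correct answer, and `dpo_loss` is the standard DPO objective on per-pair log-probabilities (the `beta` value is an illustrative default, not the paper's setting).

```python
import math

def preference_pairs(step_outcomes):
    """step_outcomes: list of (clarification, led_to_correct_answer) tuples
    for sibling branches at one tree-search step. Pair every successful
    clarification (chosen) with every failed sibling (rejected)."""
    chosen = [c for c, ok in step_outcomes if ok]
    rejected = [c for c, ok in step_outcomes if not ok]
    return [(c, r) for c in chosen for r in rejected]

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """Direct preference optimization loss for one (chosen, rejected) pair,
    given log-probabilities under the policy and a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

In this sketch, SFT would train on the chosen chains directly, while DPO pushes the policy to assign relatively higher probability to chosen clarifications than rejected ones; at a zero margin the loss is log 2 and it decreases as the policy separates the pair.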