🤖 AI Summary
This paper investigates the root causes of hallucination in reasoning models for multi-hop question answering. To address the limitation of conventional accuracy-based evaluation—which obscures fine-grained failure modes—we propose the first three-dimensional error analysis framework, characterizing failures along the dimensions of hop diversity, information coverage completeness, and cognitive inefficiency (i.e., “over-reasoning”). Our methodology integrates human-curated fine-grained annotations with automated metrics to enable joint qualitative and quantitative analysis of reasoning paths. Empirical findings reveal two pervasive deficiencies across state-of-the-art models: cross-hop information omission and inefficient, redundant reasoning steps. These results uncover the cognitive origins of hallucination in multi-step reasoning and establish a reusable, fine-grained evaluation paradigm. Moreover, the study provides empirically grounded pathways toward enhancing reasoning fidelity and transparency.
📝 Abstract
The emergence of reasoning models and their integration into practical AI chatbots has led to breakthroughs in solving advanced math, deep search, and extractive question answering problems that require a complex, multi-step thought process. Yet a complete understanding of why these models hallucinate more than general-purpose language models is missing. In this investigative study, we systematically explore reasoning failures of contemporary language models on multi-hop question answering tasks. We introduce a novel, nuanced error categorization framework that examines failures across three critical dimensions: the diversity and uniqueness of the source documents involved ("hops"), completeness in capturing relevant information ("coverage"), and cognitive inefficiency ("overthinking"). Through rigorous human annotation, supported by complementary automated metrics, our exploration uncovers intricate error patterns often hidden by accuracy-centric evaluations. This investigative approach provides deeper insights into the cognitive limitations of current models and offers actionable guidance toward enhancing reasoning fidelity, transparency, and robustness in future language modeling efforts.
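To make the three error dimensions concrete, here is a minimal sketch of what a per-answer annotation record along these lines might look like. This is an illustration only: the field names, scoring formulas, and example values are assumptions, not the authors' actual annotation schema or metrics.

```python
from dataclasses import dataclass

@dataclass
class ReasoningErrorAnnotation:
    """Hypothetical record for one model answer, organized around the
    paper's three dimensions (names here are illustrative, not the authors')."""
    hops_required: set      # gold source documents the answer needs
    hops_used: set          # documents the model's reasoning actually draws on
    reasoning_steps: int    # steps in the model's chain of thought
    minimal_steps: int      # steps a minimal correct chain would need

    @property
    def missed_hops(self) -> set:
        """'Hops' dimension: required documents the model never touched."""
        return self.hops_required - self.hops_used

    @property
    def coverage(self) -> float:
        """'Coverage' dimension: fraction of required documents actually used."""
        if not self.hops_required:
            return 1.0
        return len(self.hops_used & self.hops_required) / len(self.hops_required)

    @property
    def overthinking_ratio(self) -> float:
        """'Overthinking' dimension: redundancy relative to a minimal chain."""
        return self.reasoning_steps / max(self.minimal_steps, 1)


# Example: a model that skips one required document and reasons redundantly.
ann = ReasoningErrorAnnotation(
    hops_required={"doc_A", "doc_B", "doc_C"},
    hops_used={"doc_A", "doc_B"},
    reasoning_steps=9,
    minimal_steps=3,
)
print(ann.missed_hops)         # {'doc_C'} — a cross-hop omission
print(round(ann.coverage, 3))  # 0.667
print(ann.overthinking_ratio)  # 3.0 — three times the minimal chain length
```

Under this sketch, the two deficiencies highlighted in the summary would surface as a non-empty `missed_hops` set (cross-hop omission) and an `overthinking_ratio` well above 1 (redundant reasoning).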