Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the root causes of hallucination in reasoning models for multi-hop question answering. To address a limitation of conventional accuracy-based evaluation, which obscures fine-grained failure modes, the authors propose a three-dimensional error analysis framework that characterizes failures along hop diversity, information coverage completeness, and cognitive efficiency (i.e., "over-reasoning"). The methodology integrates human-curated fine-grained annotations with automated metrics to enable joint qualitative and quantitative analysis of reasoning paths. Empirical findings reveal two pervasive deficiencies across state-of-the-art models: cross-hop information omission and inefficient, redundant reasoning steps. These results uncover the cognitive origins of hallucination in multi-step reasoning and establish a reusable, fine-grained evaluation paradigm. Moreover, the study provides empirically grounded pathways to enhance reasoning fidelity and transparency.

📝 Abstract
The emergence of reasoning models and their integration into practical AI chatbots has led to breakthroughs in solving advanced math, deep search, and extractive question answering problems that require a complex and multi-step thought process. Yet, a complete understanding of why these models hallucinate more than general-purpose language models is missing. In this investigative study, we systematically explore reasoning failures of contemporary language models on multi-hop question answering tasks. We introduce a novel, nuanced error categorization framework that examines failures across three critical dimensions: the diversity and uniqueness of source documents involved ("hops"), completeness in capturing relevant information ("coverage"), and cognitive inefficiency ("overthinking"). Through rigorous human annotation, supported by complementary automated metrics, our exploration uncovers intricate error patterns often hidden by accuracy-centric evaluations. This investigative approach provides deeper insights into the cognitive limitations of current models and offers actionable guidance toward enhancing reasoning fidelity, transparency, and robustness in future language modeling efforts.
Problem

Research questions and friction points this paper is trying to address.

Diagnosing reasoning failures in multi-hop question answering tasks
Understanding why reasoning models hallucinate more than general-purpose language models
Exploring cognitive limitations in current reasoning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel error categorization framework for reasoning failures
Examines hops, coverage, and overthinking dimensions
Combines human annotation with automated metrics
Anushka Yadav
University of Massachusetts, Amherst
Isha Nalawade
University of Massachusetts, Amherst
Srujana Pillarichety
University of Massachusetts, Amherst
Yashwanth Babu
University of Massachusetts, Amherst
Reshmi Ghosh
Microsoft
Samyadeep Basu
Research Scientist at Adobe Research | Prev: UMD, MSR
Machine Learning, Influence Functions, Interpretability, Few-shot Learning
Wenlong Zhao
University of Massachusetts Amherst
Machine Learning, Natural Language Processing, Computational Sustainability
Ali Nasaeh
University of Massachusetts, Amherst
Sriram Balasubramaniam
University of Maryland, College Park
Soundararajan Srinivasan
Microsoft Corporation
AI, ML, Data Science