🤖 AI Summary
Real-world multi-hop question answering (MHQA) suffers from pervasive ambiguity in reasoning paths, requiring parallel exploration of multiple sub-paths for a single query; existing large language models (LLMs) frequently commit to incorrect paths or produce incomplete answers. Method: We systematically define three types of multi-hop ambiguity (syntactic, general, and semantic) and introduce MIRAGE, the first dedicated benchmark for evaluating ambiguity-aware MHQA. We propose CLARION, a multi-agent collaborative disambiguation framework that dynamically identifies ambiguity, decomposes queries into independent sub-questions, and performs cross-verification across multiple LLMs via coordinated prompting and agent orchestration. Contribution/Results: Experiments show that state-of-the-art models achieve only 42.1% accuracy on MIRAGE, while CLARION attains 76.3%, establishing a new interpretable and verifiable paradigm for high-ambiguity, multi-step reasoning.
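To make the control flow described above concrete, here is a minimal Python sketch of an ambiguity-aware multi-agent pipeline of the kind the summary outlines: one agent enumerates interpretations, another decomposes each interpretation into sub-questions, and several solver models cross-verify each sub-answer. All function names, prompts, and the simple majority-vote rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; prompts, names, and the voting rule are hypothetical,
# not CLARION's actual implementation.
from collections import Counter
from typing import Callable, Dict, List

LLM = Callable[[str], str]  # any text-in/text-out model client


def detect_ambiguity(question: str, llm: LLM) -> List[str]:
    """Ask a planner agent to list the distinct interpretations of an ambiguous question."""
    raw = llm(f"List each distinct interpretation of this question, one per line:\n{question}")
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]


def decompose(interpretation: str, llm: LLM) -> List[str]:
    """Ask the planner to break one interpretation into single-hop sub-questions."""
    raw = llm(f"Decompose into single-hop sub-questions, one per line:\n{interpretation}")
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]


def cross_verify(sub_question: str, llms: List[LLM]) -> str:
    """Answer one sub-question with several solver models and keep the majority answer."""
    answers = [llm(f"Answer concisely: {sub_question}") for llm in llms]
    return Counter(answers).most_common(1)[0][0]


def answer_ambiguous_question(question: str, planner: LLM, solvers: List[LLM]) -> Dict[str, List[str]]:
    """Return one verified answer chain per interpretation of the original question."""
    results = {}
    for interpretation in detect_ambiguity(question, planner):
        chain = [cross_verify(sq, solvers) for sq in decompose(interpretation, planner)]
        results[interpretation] = chain
    return results


if __name__ == "__main__":
    # Stub model so the sketch runs without any API; swap in real clients in practice.
    def stub(prompt: str) -> str:
        if prompt.startswith("List each"):
            return "- interpretation A\n- interpretation B"
        if prompt.startswith("Decompose"):
            return "- sub-question 1\n- sub-question 2"
        return "stub answer"

    print(answer_ambiguous_question("Who directed the film by the author of the novel?", stub, [stub, stub]))
```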
📝 Abstract
Real-world Multi-hop Question Answering (QA) often involves ambiguity that is inseparable from the reasoning process itself. This ambiguity creates a distinct challenge: multiple reasoning paths emerge from a single question, each requiring independent resolution, and because individual sub-questions can themselves be ambiguous, the model must resolve ambiguity at every step of the reasoning chain. We find that current Large Language Models (LLMs) struggle in this setting, typically committing to incorrect reasoning paths and producing incomplete answers. To facilitate research on multi-hop ambiguity, we introduce MultI-hop Reasoning with AmbiGuity Evaluation for Illusory Questions (MIRAGE), a benchmark designed to analyze and evaluate this challenging intersection of ambiguity interpretation and multi-hop reasoning. MIRAGE contains 1,142 high-quality ambiguous multi-hop questions, categorized under a taxonomy of syntactic, general, and semantic ambiguity and curated through a rigorous multi-LLM verification pipeline. Our experiments reveal that even state-of-the-art models struggle on MIRAGE, confirming that resolving ambiguity while performing multi-step inference is a distinct and significant challenge. To establish a robust baseline, we propose CLarifying Ambiguity with a Reasoning and InstructiON (CLARION), a multi-agent framework that significantly outperforms existing approaches on MIRAGE, paving the way for more adaptive and robust reasoning systems.