🤖 AI Summary
This work addresses the challenge of evaluating large language models' (LLMs) deductive reasoning capabilities—specifically their ability to detect logical inconsistencies between testimonies and evidence—in long-form, multi-threaded detective games (e.g., *Phoenix Wright: Ace Attorney*, *Danganronpa*). It introduces the first structured evaluation framework and benchmark dataset tailored to interactive detective scenarios. Methodologically, the game mechanics are formalized as a multi-step contradiction detection task requiring narrative comprehension, logical validation, and long-range contextual reasoning; the work further proposes systematic prompting strategies and a performance attribution analysis paradigm. Experiments across 12 state-of-the-art LLMs reveal that existing chain-of-thought and extended reasoning techniques yield limited gains, with context length, reasoning depth, and answer space size emerging as critical bottlenecks. This work pioneers the translation of detective game mechanics into a rigorous, structured reasoning benchmark, establishing a novel paradigm for evaluating complex narrative reasoning in LLMs.
📝 Abstract
This paper introduces TurnaboutLLM, a novel framework and dataset for evaluating the deductive reasoning abilities of Large Language Models (LLMs) by leveraging the interactive gameplay of the detective games Ace Attorney and Danganronpa. The framework tasks LLMs with identifying contradictions between testimonies and evidence within long narrative contexts, a challenging task due to the large answer space and diverse reasoning types presented by its questions. We evaluate twelve state-of-the-art LLMs on the dataset, revealing limitations of popular strategies for enhancing deductive reasoning, such as extensive thinking and Chain-of-Thought prompting. The results also suggest varying effects of context size, the number of reasoning steps, and answer space size on model performance. Overall, TurnaboutLLM presents a substantial challenge for LLMs' deductive reasoning abilities in complex, narrative-rich environments.