TurnaboutLLM: A Deductive Reasoning Benchmark from Detective Games

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of evaluating large language models’ (LLMs) deductive reasoning capabilities—specifically their ability to detect logical inconsistencies between testimonies and evidence—in long-form, multi-threaded detective games (e.g., *Phoenix Wright: Ace Attorney*, *Danganronpa*). We introduce the first structured evaluation framework and benchmark dataset tailored to interactive detective scenarios. Methodologically, we formalize game mechanics as a multi-step contradiction detection task requiring narrative comprehension, logical validation, and long-range contextual reasoning; we further propose systematic prompting strategies and a performance attribution analysis paradigm. Experiments across 12 state-of-the-art LLMs reveal that existing chain-of-thought and extended reasoning techniques yield limited gains, with context length, reasoning depth, and answer space size emerging as critical bottlenecks. This work pioneers the translation of detective game mechanics into a rigorous, structured reasoning benchmark, establishing a novel paradigm for evaluating complex narrative reasoning in LLMs.
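The summary above frames the benchmark as selecting the testimony–evidence pair that conflicts. A minimal sketch of what such an item and its exact-match scoring might look like follows; the schema, field names, and example content here are illustrative assumptions, not the paper's actual data format:

```python
# Hypothetical sketch of a TurnaboutLLM-style evaluation item: the schema
# below is an assumption for illustration, not the paper's released format.
from dataclasses import dataclass


@dataclass
class ContradictionItem:
    """One contradiction-detection question over a long narrative context."""
    context: str                # case background the model must read
    testimonies: list[str]      # witness statements, indexed by position
    evidences: list[str]        # evidence descriptions, indexed by position
    answer: tuple[int, int]     # (testimony index, evidence index) that conflict


def exact_match(item: ContradictionItem, prediction: tuple[int, int]) -> bool:
    """Score a model's predicted (testimony, evidence) pair by exact match."""
    return prediction == item.answer


item = ContradictionItem(
    context="The victim was found at 9 PM in a locked office.",
    testimonies=[
        "I saw the defendant leave at 8 PM.",
        "The office door was open all evening.",
    ],
    evidences=[
        "Security log: door locked from 6 PM onward.",
        "Train ticket stamped 7:30 PM.",
    ],
    answer=(1, 0),  # testimony 1 contradicts evidence 0
)

print(exact_match(item, (1, 0)))  # a correct prediction scores True
```

Note how the answer space grows with the product of testimony and evidence counts, which is one of the bottlenecks the experiments identify.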

📝 Abstract
This paper introduces TurnaboutLLM, a novel framework and dataset for evaluating the deductive reasoning abilities of Large Language Models (LLMs) by leveraging the interactive gameplay of the detective games Ace Attorney and Danganronpa. The framework tasks LLMs with identifying contradictions between testimonies and evidence within long narrative contexts, a challenging task due to the large answer space and diverse reasoning types presented by its questions. We evaluate twelve state-of-the-art LLMs on the dataset, hinting at limitations of popular strategies for enhancing deductive reasoning such as extensive thinking and Chain-of-Thought prompting. The results also suggest varying effects of context size, the number of reasoning steps, and answer space size on model performance. Overall, TurnaboutLLM presents a substantial challenge for LLMs' deductive reasoning abilities in complex, narrative-rich environments.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' deductive reasoning via detective games
Identifying contradictions in long narrative contexts
Assessing impact of context size on reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging detective games for deductive reasoning evaluation
Identifying contradictions in long narrative contexts
Evaluating LLMs with diverse reasoning types
👥 Authors
Yuan Yuan — University of Pennsylvania
Muyu He — University of Pennsylvania
Muhammad Adil Shahid
Jiani Huang — The Hong Kong Polytechnic University
Ziyang Li — Johns Hopkins University
Li Zhang