Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Evaluating the deep language understanding of large language models (LLMs), particularly long-range narrative reasoning—spanning narrative coherence, entity tracking, commonsense inference, and theory of mind—remains an open challenge. Method: We introduce plot hole detection as a novel benchmark task and propose FlawedFictionsMaker, a controllable algorithm that synthesizes plot holes in human-written stories, yielding high-quality, human-verified, contamination-resistant benchmark items. We further design a multi-turn LLM reasoning evaluation protocol and a systematic flaw-injection analysis framework. Contribution/Results: Experiments reveal that state-of-the-art LLMs underperform humans significantly on this benchmark, with performance deteriorating sharply as story length increases. Moreover, LLM-summarized and LLM-generated stories raise plot hole rates by more than 50% and 100%, respectively, relative to human-written originals. This work is the first to systematically expose fundamental limitations of LLMs in long-context narrative reasoning, establishing a new paradigm and a reliable toolkit for assessing and advancing narrative intelligence.
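For intuition, here is a minimal sketch of what a controllable flaw-injection loop of this kind could look like. The paper's actual FlawedFictionsMaker algorithm is not reproduced here; all function names, prompt wording, and the `chat` helper are illustrative assumptions only.

```python
# Hypothetical sketch of a FlawedFictionsMaker-style flaw-injection loop.
# Everything below (prompts, helpers, control flow) is an assumption for
# illustration, not the paper's algorithm.

def chat(prompt: str) -> str:
    """Placeholder for any LLM completion call (e.g. an API or local model)."""
    raise NotImplementedError

def extract_facts(story: str) -> list[str]:
    # Ask the model to list discrete, checkable story facts (entities, events, rules).
    return chat(
        f"List the key facts asserted in this story, one per line:\n\n{story}"
    ).splitlines()

def inject_plot_hole(story: str, fact: str) -> str:
    # Rewrite a minimal span so it contradicts the chosen fact while leaving
    # the rest of the story untouched (a controlled, local edit).
    return chat(
        f"Rewrite the smallest possible span of this story so that it contradicts "
        f"the fact '{fact}', changing nothing else:\n\n{story}"
    )

def verify_flaw(original: str, flawed: str, fact: str) -> bool:
    # A second pass (model here; human verification in the benchmark) confirms the
    # contradiction is real and no unrelated changes were introduced.
    verdict = chat(
        f"Does the edited story contradict the fact '{fact}' while otherwise "
        f"matching the original? Answer YES or NO.\n\nOriginal:\n{original}\n\nEdited:\n{flawed}"
    )
    return verdict.strip().upper().startswith("YES")

def make_flawed_fiction(story: str) -> str | None:
    # Try candidate facts until a verified plot hole is produced.
    for fact in extract_facts(story):
        candidate = inject_plot_hole(story, fact)
        if verify_flaw(story, candidate, fact):
            return candidate
    return None
```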

📝 Abstract
Stories are a fundamental aspect of human experience. Engaging deeply with stories and spotting plot holes -- inconsistencies in a storyline that break the internal logic or rules of a story's world -- requires nuanced reasoning skills, including tracking entities and events and their interplay, abstract thinking, pragmatic narrative understanding, commonsense and social reasoning, and theory of mind. As Large Language Models (LLMs) increasingly generate, interpret, and modify text, rigorously assessing their narrative consistency and deeper language understanding becomes critical. However, existing benchmarks focus mainly on surface-level comprehension. In this work, we propose plot hole detection in stories as a proxy to evaluate language understanding and reasoning in LLMs. We introduce FlawedFictionsMaker, a novel algorithm to controllably and carefully synthesize plot holes in human-written stories. Using this algorithm, we construct FlawedFictions, a benchmark for evaluating LLMs' plot hole detection abilities in stories, which is robust to contamination, with human filtering ensuring high quality. We find that state-of-the-art LLMs struggle to accurately solve FlawedFictions regardless of the reasoning effort allowed, with performance significantly degrading as story length increases. Finally, we show that LLM-based story summarization and story generation are prone to introducing plot holes, with more than 50% and 100% increases in plot hole detection rates with respect to human-written originals.
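As a rough illustration of how such a detection benchmark could be scored, the sketch below assumes paired original/flawed stories and reuses a generic `chat` LLM call. The prompt wording, data format, and metric names are assumptions, not the paper's actual evaluation protocol.

```python
# Minimal sketch of scoring plot hole detection on paired (original, flawed)
# stories, in the spirit of the FlawedFictions benchmark. Prompts, data format,
# and metric names are illustrative assumptions.

def chat(prompt: str) -> str:
    """Placeholder for any LLM completion call (same assumption as above)."""
    raise NotImplementedError

def flags_plot_hole(story: str) -> bool:
    # Ask the model under test whether the story breaks its own internal logic.
    answer = chat(
        "Read the story below. Does it contain a plot hole, i.e. an internal "
        "inconsistency that breaks the story's own logic or rules? "
        f"Answer YES or NO.\n\n{story}"
    )
    return answer.strip().upper().startswith("YES")

def detection_scores(pairs: list[tuple[str, str]]) -> dict[str, float]:
    """pairs: (original_story, flawed_story) tuples; returns simple rates."""
    n = len(pairs)
    flagged_flawed = sum(flags_plot_hole(flawed) for _, flawed in pairs)
    flagged_clean = sum(flags_plot_hole(original) for original, _ in pairs)
    return {
        "detection_rate": flagged_flawed / n,      # higher is better
        "false_positive_rate": flagged_clean / n,  # lower is better
    }
```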
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' narrative consistency via plot hole detection
Assessing deep language understanding beyond surface-level comprehension
Measuring LLMs' reasoning skills in tracking story logic
Innovation

Methods, ideas, or system contributions that make the work stand out.

FlawedFictionsMaker algorithm synthesizes plot holes
Benchmark evaluates LLMs' plot hole detection
Evaluation shows LLMs struggle to detect inconsistencies as stories grow longer
🔎 Similar Papers
No similar papers found.