🤖 AI Summary
This work addresses the challenge that large language models often fail to maintain logical dependencies between foreshadowing and payoff in long-form narrative generation, leading to inconsistent story worlds. To remedy this, the paper formalizes the foreshadowing–trigger–payoff mechanism as structured causal predicates suitable for supervision, and introduces an explicit causal framework to guide logically coherent storytelling. Leveraging the BookSum corpus, the authors automatically extract foreshadowing–trigger–payoff triplets and integrate them as structured supervision signals for controllable generation. Experimental results demonstrate that the proposed approach significantly outperforms standard prompting strategies in both foreshadowing resolution accuracy and overall narrative consistency, marking a shift from surface-level fluency toward deeper narrative reasoning capabilities.
📝 Abstract
Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. However, despite advances in story generation, large language models (LLMs) frequently fail to bridge these long-range narrative dependencies, often leaving "Chekhov's guns" unfired even when the necessary context is present. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. In this paper, we introduce Codified Foreshadowing-Payoff Generation (CFPG), a novel framework that reframes narrative quality through the lens of payoff realization. Recognizing that LLMs struggle to intuitively grasp the "triggering mechanism" of a foreshadowed event, CFPG transforms narrative continuity into a set of executable causal predicates. By mining and encoding Foreshadow-Trigger-Payoff triples from the BookSum corpus, we provide structured supervision that ensures foreshadowed commitments are not only mentioned but also temporally and logically fulfilled. Experiments demonstrate that CFPG significantly outperforms standard prompting baselines in payoff accuracy and narrative alignment. Our findings suggest that explicitly codifying narrative mechanics is essential for moving LLMs from surface-level fluency to genuine narrative competence.
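To make the idea of "executable causal predicates" over Foreshadow-Trigger-Payoff triples concrete, here is a minimal sketch. The class, field names, and fulfillment check below are illustrative assumptions, not the paper's actual schema or implementation:

```python
from dataclasses import dataclass

@dataclass
class FTPTriple:
    """Hypothetical representation of one Foreshadow-Trigger-Payoff triple."""
    foreshadow: str      # the early narrative commitment (the "Chekhov's gun")
    trigger: str         # the condition that should activate the payoff
    payoff: str          # the concrete, observable resolution (empty if absent)
    foreshadow_pos: int  # position (e.g., paragraph index) of the setup
    payoff_pos: int      # position of the resolution; -1 if never resolved

def payoff_fulfilled(t: FTPTriple) -> bool:
    """A payoff counts only if it exists and occurs after its foreshadowing."""
    return t.payoff_pos >= 0 and t.payoff_pos > t.foreshadow_pos

def payoff_accuracy(triples: list[FTPTriple]) -> float:
    """Fraction of foreshadowed commitments that are temporally fulfilled."""
    if not triples:
        return 0.0
    return sum(payoff_fulfilled(t) for t in triples) / len(triples)

# A fired gun and an unresolved setup, as a toy illustration.
gun = FTPTriple("A rifle hangs on the wall.", "the conflict escalates",
                "The rifle is fired.", foreshadow_pos=1, payoff_pos=42)
dud = FTPTriple("A letter is hidden in a drawer.", "the heir returns",
                "", foreshadow_pos=3, payoff_pos=-1)
print(payoff_accuracy([gun, dud]))  # 0.5
```

Such a predicate-style encoding suggests how narrative continuity could become machine-checkable: a generated story can be scored not on fluency, but on whether each mined commitment is eventually and correctly discharged.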