Enhancing Structural Mapping with LLM-derived Abstractions for Analogical Reasoning in Narratives

📅 2026-03-31
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing approaches to narrative analogical reasoning struggle to model structural similarity effectively: structural mapping relies on pre-extracted entities, while large language models (LLMs) are sensitive to prompt formatting and to surface-level similarity. This work proposes YARN, a modular framework that systematically integrates LLM-driven multi-granularity abstraction with structural mapping for the first time. YARN decomposes narratives into units and defines four abstraction levels grounded in narrative roles and semantics, enabling a dedicated mapping component to align elements across stories. Experiments demonstrate that YARN matches or outperforms end-to-end LLM baselines across multiple narrative analogy tasks. Moreover, the framework enables controlled analysis of individual component contributions and reveals open challenges such as selecting the right abstraction level and modeling implicit causality.
📝 Abstract
Analogical reasoning is a key driver of human generalization in problem-solving and argumentation. Yet, analogies between narrative structures remain challenging for machines. Cognitive engines for structural mapping are not directly applicable, as they assume pre-extracted entities, whereas LLMs' performance is sensitive to prompt format and to the degree of surface similarity between narratives. This gap motivates a key question: What is the impact of enhancing structural mapping with LLM-derived abstractions on analogical reasoning ability in narratives? To that end, we propose a modular framework named YARN (Yielding Abstractions for Reasoning in Narratives), which uses LLMs to decompose narratives into units and abstract those units, then passes them to a mapping component that aligns elements across stories to perform analogical reasoning. We define and operationalize four levels of abstraction that capture both the general meaning of units and their roles in the story, grounded in prior work on framing. Our experiments show that abstractions consistently improve model performance, yielding results competitive with or better than end-to-end LLM baselines. Closer error analysis reveals remaining challenges in abstracting at the right level and in incorporating implicit causality, and surfaces an emerging categorization of analogical patterns in narratives. YARN enables systematic variation of experimental settings to analyze component contributions, and to support future work, we make the code for YARN openly available.
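The decompose → abstract → map pipeline described in the abstract can be sketched in code. The sketch below is illustrative only, not the authors' implementation: the function names, the four level names, and the token-overlap alignment are all assumptions, and the `abstractor` callable stands in for the LLM calls the paper describes.

```python
# Minimal sketch of a YARN-style pipeline (hypothetical; names and
# abstraction levels are illustrative, not the paper's interface).
from dataclasses import dataclass
from typing import Callable

# Four illustrative abstraction levels, from surface text to narrative role.
LEVELS = ["surface", "event", "semantic_frame", "narrative_role"]

@dataclass
class Unit:
    text: str
    abstractions: dict  # level name -> abstracted string

def decompose(narrative: str) -> list[Unit]:
    """Split a narrative into clause-like units (stand-in for an LLM call)."""
    return [Unit(s.strip(), {}) for s in narrative.split(".") if s.strip()]

def abstract(unit: Unit, abstractor: Callable[[str, str], str]) -> Unit:
    """Attach one abstraction per level; `abstractor` stands in for the LLM."""
    unit.abstractions = {lvl: abstractor(unit.text, lvl) for lvl in LEVELS}
    return unit

def map_units(a: list[Unit], b: list[Unit], level: str):
    """Greedily align units across stories by shared abstraction tokens."""
    pairs, used = [], set()
    for ua in a:
        best, score = None, 0
        tokens_a = set(ua.abstractions[level].split())
        for j, ub in enumerate(b):
            if j in used:
                continue
            overlap = len(tokens_a & set(ub.abstractions[level].split()))
            if overlap > score:
                best, score = j, overlap
        if best is not None:
            used.add(best)
            pairs.append((ua.text, b[best].text))
    return pairs
```

With a toy `abstractor` such as `lambda text, level: text.lower()`, two stories like "The fox flatters the crow. The crow drops the cheese." and "The salesman flatters the client. The client signs the deal." align unit-by-unit at the `"surface"` level; in the paper's setting, higher levels would abstract away the surface vocabulary entirely so that mapping depends on roles rather than shared words.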
Problem

Research questions and friction points this paper is trying to address.

analogical reasoning
narratives
structural mapping
abstraction
LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

analogical reasoning
structural mapping
LLM-derived abstractions
narrative understanding
YARN framework