Explainable AI Components for Narrative Map Extraction

📅 2025-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Narrative extraction systems suffer from low user trust due to the inherent complexity of abstraction levels in narrative representation. Method: This paper proposes a cross-level eXplainable AI (XAI) framework for narrative map extraction that generates interpretable outputs at three granularities: document-level (topic clustering), event-level (event relation modeling), and narrative-level (structural pattern induction). The framework integrates topic modeling, event graph reasoning, and narrative structure learning, augmented by a human-in-the-loop evaluation to validate explanation fidelity. Results: A user study on narratives from the 2021 Cuban protests shows that multi-level explanations enhance user trust and foster "calibrated trust" (a well-calibrated, context-aware confidence in AI outputs), enabling reliable human-AI collaboration in narrative analysis.

📝 Abstract
As narrative extraction systems grow in complexity, establishing user trust through interpretable and explainable outputs becomes increasingly critical. This paper presents an evaluation of an Explainable Artificial Intelligence (XAI) system for narrative map extraction that provides meaningful explanations across multiple levels of abstraction. Our system integrates explanations based on topical clusters for low-level document relationships, connection explanations for event relationships, and high-level structure explanations for overall narrative patterns. In particular, we evaluate the XAI system through a user study with 10 participants examining narratives from the 2021 Cuban protests. The results demonstrate that the explanations increased participants' trust in the system's decisions, with connection explanations and important event detection proving particularly effective at building user confidence. Survey responses indicate that the multi-level explanation approach helped users develop appropriate trust in the system's narrative extraction capabilities. This work advances the state of the art in explainable narrative extraction while providing practical insights for developing reliable narrative extraction systems that support effective human-AI collaboration.
Problem

Research questions and friction points this paper is trying to address.

Evaluating XAI system for interpretable narrative map extraction
Assessing multi-level explanations to enhance user trust
Improving human-AI collaboration in narrative extraction systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level XAI for narrative extraction
Topical clusters for document relationships
Connection explanations for event relationships
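The document-level idea above is that documents are grouped by lexical similarity and the terms a cluster shares become its human-readable topical explanation. A minimal, stdlib-only sketch of that pattern (the function names, the whitespace tokenizer, and the example documents are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def tf_vector(doc):
    """Term-frequency vector for a document (naive whitespace tokenizer)."""
    return Counter(doc.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def explain_cluster(docs, top_k=3):
    """Terms shared by every document in a cluster, used as the
    cluster's topical explanation (sorted for determinism)."""
    term_sets = [set(d.lower().split()) for d in docs]
    shared = set.intersection(*term_sets)
    return sorted(shared)[:top_k]

docs = ["protest in havana streets", "havana protest crowds grow"]
sim = cosine(tf_vector(docs[0]), tf_vector(docs[1]))
print(round(sim, 2))          # similarity that groups the two documents -> 0.5
print(explain_cluster(docs))  # shared terms shown to the user -> ['havana', 'protest']
```

A real system would use richer representations (e.g. TF-IDF or embeddings) and a proper clustering step, but the explanation surface is the same: the shared vocabulary that justifies why two documents sit in one topical cluster.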
Brian Keith
Universidad Católica del Norte, Av. Angamos 0610, Antofagasta, 1270709, Chile
Fausto German
Virginia Tech, 620 Drillfield Drive, Blacksburg, VA, 24061, USA
Eric Krokos
U.S. Government, Washington, D.C. 20500, USA
Sarah Joseph
U.S. Government, Washington, D.C. 20500, USA
Chris North
Computer Science, Virginia Tech
Visual Analytics, Information Visualization, HCI, Large Displays