🤖 AI Summary
Narrative character relation understanding faces two key bottlenecks: the high cost and limited coverage of manual annotation, and the hallucination and logical inconsistency of large language model (LLM) outputs.
Method: We propose a human-in-the-loop framework that integrates LLM-based information extraction, symbolic logical reasoning, and interactive graph editing, governed by seven formally specified logical constraints that enforce inference consistency.
Contribution/Results: We introduce the first annotated dataset comprising 160 fine-grained character relations with corresponding logical structures, enabling real-time graph validation and conflict resolution. Experiments demonstrate substantial improvements in relation identification accuracy and cross-instance consistency, alongside a reduction of over 70% in manual annotation effort. Our approach establishes a new paradigm for interpretable narrative understanding, socially aware AI, and robustness evaluation of LLMs, accompanied by a practical open-source toolkit.
📄 Abstract
Understanding character relationships is essential for interpreting complex narratives and conducting socially grounded AI research. However, manual annotation is time-consuming and low in coverage, while large language models (LLMs) often produce hallucinated or logically inconsistent outputs. We present SymbolicThought, a human-in-the-loop framework that combines LLM-based extraction with symbolic reasoning. The system constructs editable character relationship graphs, refines them using seven types of logical constraints, and enables real-time validation and conflict resolution through an interactive interface. To support logical supervision and explainable social analysis, we release a dataset of 160 interpersonal relationships with corresponding logical structures. Experiments show that SymbolicThought improves annotation accuracy and consistency while significantly reducing time cost, offering a practical tool for narrative understanding, explainable AI, and LLM evaluation.
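To make the constraint-checking idea concrete, here is a minimal sketch of how logical constraints can validate a relation graph and surface conflicts. The triple format, the `SYMMETRIC` and `INVERSES` constraint tables, and the character names are illustrative assumptions, not SymbolicThought's actual seven constraints or data format.

```python
# Illustrative sketch (not the paper's implementation): validate a
# character relation graph against symmetry and inverse constraints.

# Relations stored as (source, relation, target) triples.
relations = {
    ("Alice", "parent_of", "Bob"),
    ("Bob", "child_of", "Alice"),
    ("Alice", "sibling_of", "Carol"),
    # Missing counterpart ("Carol", "sibling_of", "Alice") -> conflict.
}

# Hypothetical constraint tables: symmetric relations must hold in both
# directions; inverse pairs must co-occur.
SYMMETRIC = {"sibling_of", "spouse_of", "friend_of"}
INVERSES = {"parent_of": "child_of", "child_of": "parent_of"}

def find_conflicts(triples):
    """Return triples whose required symmetric/inverse counterpart is absent."""
    conflicts = []
    for (a, rel, b) in triples:
        if rel in SYMMETRIC and (b, rel, a) not in triples:
            conflicts.append((a, rel, b, "missing symmetric counterpart"))
        if rel in INVERSES and (b, INVERSES[rel], a) not in triples:
            conflicts.append((a, rel, b, "missing inverse"))
    return conflicts

for conflict in find_conflicts(relations):
    print(conflict)  # flags the one-sided sibling relation
```

In an interactive setting like the one the abstract describes, each flagged conflict would be presented to the annotator for resolution (e.g., adding the missing counterpart triple or deleting the erroneous one), rather than being fixed automatically.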