SymbolicThought: Integrating Language Models and Symbolic Reasoning for Consistent and Interpretable Human Relationship Understanding

πŸ“… 2025-07-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Problem: Narrative character relation understanding faces two key bottlenecks: manual annotation is costly and low in coverage, while large language models (LLMs) hallucinate and produce logically inconsistent outputs. Method: We propose a human-in-the-loop framework that integrates LLM-based information extraction, symbolic logical reasoning, and interactive graph editing, with seven formally specified logical constraints enforcing inference consistency. Contribution/Results: We introduce the first annotated dataset of 160 fine-grained character relations with corresponding logical structures, and a system that supports real-time graph validation and conflict resolution. Experiments demonstrate substantial improvements in relation identification accuracy and cross-instance consistency, along with an over-70% reduction in manual annotation effort. The approach establishes a new paradigm for interpretable narrative understanding, socially aware AI, and robustness evaluation of LLMs, accompanied by a practical open-source toolkit.

πŸ“ Abstract
Understanding character relationships is essential for interpreting complex narratives and conducting socially grounded AI research. However, manual annotation is time-consuming and low in coverage, while large language models (LLMs) often produce hallucinated or logically inconsistent outputs. We present SymbolicThought, a human-in-the-loop framework that combines LLM-based extraction with symbolic reasoning. The system constructs editable character relationship graphs, refines them using seven types of logical constraints, and enables real-time validation and conflict resolution through an interactive interface. To support logical supervision and explainable social analysis, we release a dataset of 160 interpersonal relationships with corresponding logical structures. Experiments show that SymbolicThought improves annotation accuracy and consistency while significantly reducing time cost, offering a practical tool for narrative understanding, explainable AI, and LLM evaluation.
Problem

Research questions and friction points this paper is trying to address.

Improving accuracy in understanding human relationships using AI
Reducing inconsistencies in large language model outputs
Combining symbolic reasoning with LLMs for better interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLM extraction with symbolic reasoning
Uses logical constraints for relationship refinement
Interactive interface for real-time validation
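The constraint-based refinement the bullets describe can be sketched in a few lines. This is a minimal illustration only: the two example constraints below (symmetry and mutual exclusion) and all relation names are assumptions for demonstration, not the paper's actual seven constraint types or its data model.

```python
# Hypothetical sketch of constraint checking over a character relation
# graph. Relations are directed labeled edges: (source, label, target).
edges = {
    ("Alice", "parent_of", "Bob"),
    ("Bob", "sibling_of", "Carol"),
}

# Example constraint definitions (assumed, for illustration).
SYMMETRIC = {"sibling_of", "friend_of", "married_to"}
MUTUALLY_EXCLUSIVE = {frozenset({"parent_of", "sibling_of"})}

def violations(edges):
    """Return human-readable descriptions of constraint violations."""
    problems = []
    for (a, rel, b) in edges:
        # Symmetry: a symmetric relation must hold in both directions.
        if rel in SYMMETRIC and (b, rel, a) not in edges:
            problems.append(f"missing symmetric edge {b} -{rel}-> {a}")
    for (a, r1, b) in edges:
        for (c, r2, d) in edges:
            # Mutual exclusion: incompatible labels on the same pair.
            if (a, b) == (c, d) and frozenset({r1, r2}) in MUTUALLY_EXCLUSIVE:
                problems.append(f"conflicting labels {r1}/{r2} between {a} and {b}")
    return problems

def repair_symmetry(edges):
    """One refinement pass: auto-complete missing symmetric edges."""
    return edges | {(b, rel, a) for (a, rel, b) in edges if rel in SYMMETRIC}
```

In an interactive setting, `violations` would drive real-time conflict highlighting in the graph editor, while passes like `repair_symmetry` propose fixes for the annotator to accept or reject.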