🤖 AI Summary
This work addresses the identification of unreliable narrators in first-person texts, i.e., narrators who unintentionally distort information. We propose the first computationally grounded, narratology-driven classification framework. To support it, we introduce TUNa, a cross-domain, expert-annotated dataset, and define a fine-grained three-class unreliability classification task covering intra-narrational, inter-narrational, and inter-textual unreliability. For the first time, we formalize core narratological concepts as structured features and integrate them into large language models (LLaMA/GPT) via few-shot prompting, supervised fine-tuning, and curriculum learning. Experiments reveal the task's high difficulty: models achieve stronger performance on literary texts while showing promising generalization to real-world texts. We publicly release the TUNa dataset and implementation code to advance interdisciplinary research at the intersection of trustworthy AI and computational humanities.
📝 Abstract
Often when we interact with a first-person account of events, we consider whether the narrator, the primary speaker of the text, is reliable. In this paper, we propose using computational methods to identify unreliable narrators, i.e., those who unintentionally misrepresent information. Borrowing literary theory from narratology to define different types of unreliable narrators based on a variety of textual phenomena, we present TUNa, a human-annotated dataset of narratives from multiple domains, including blog posts, subreddit posts, hotel reviews, and works of literature. We define classification tasks for intra-narrational, inter-narrational, and inter-textual unreliability and analyze the performance of popular open-weight and proprietary LLMs on each. We propose learning from literature to perform unreliable narrator classification on real-world text data. To this end, we experiment with few-shot, fine-tuning, and curriculum learning settings. Our results show that this task is highly challenging, yet LLMs show promise for identifying unreliable narrators. We release our expert-annotated dataset and code and invite future research in this area.