KRISTEVA: Close Reading as a Novel Task for Benchmarking Interpretive Reasoning

📅 2025-05-14
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
This work addresses a gap in LLM evaluation: literary close reading, the nuanced, interpretive analysis of texts, has no standardized benchmark for large language models (LLMs). We introduce KRISTEVA, the first benchmark explicitly designed to evaluate interpretive reasoning through close reading. Built from 1,331 multiple-choice questions adapted from authentic classroom materials, KRISTEVA spans three progressively more difficult task categories: stylistic feature extraction, retrieval of relevant contextual knowledge from parametric memory, and multi-hop reasoning between style and external context. In doing so, it formalizes the humanities' core competency of close reading as a quantifiable, task-based assessment. Baseline evaluation across 11 subtasks shows that, while state-of-the-art LLMs exhibit some college-level close reading competency (accuracy 49.7%–69.7%), they still trail experienced human evaluators on 10 of the 11 tasks, exposing persistent limitations in literary interpretive reasoning. KRISTEVA thus fills a gap left by existing NLP benchmarks, which do not assess deep textual understanding or literary interpretation.

📝 Abstract
Each year, tens of millions of essays are written and graded in college-level English courses. Students are asked to analyze literary and cultural texts through a process known as close reading, in which they gather textual details to formulate evidence-based arguments. Despite being viewed as a basis for critical thinking and widely adopted as a required element of university coursework, close reading has never been evaluated on large language models (LLMs), and multi-discipline benchmarks like MMLU do not include literature as a subject. To fill this gap, we present KRISTEVA, the first close reading benchmark for evaluating interpretive reasoning, consisting of 1,331 multiple-choice questions adapted from classroom data. With KRISTEVA, we propose three progressively more difficult sets of tasks to approximate different elements of the close reading process, which we use to test how well LLMs appear to understand and reason about literary works: 1) extracting stylistic features, 2) retrieving relevant contextual information from parametric knowledge, and 3) multi-hop reasoning between style and external contexts. Our baseline results find that, while state-of-the-art LLMs possess some college-level close reading competency (accuracy 49.7%–69.7%), their performances still trail those of experienced human evaluators on 10 out of our 11 tasks.
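Since the benchmark is scored as multiple-choice accuracy per subtask, the evaluation loop is straightforward to sketch. The snippet below is a minimal illustration only, not the authors' released code: the item schema (`passage`, `question`, `choices`, `answer`, `task`) and the `ask_model` callable are hypothetical placeholders for whatever data format and model interface one actually uses.

```python
# Illustrative sketch of a multiple-choice evaluation loop for a benchmark
# like KRISTEVA. The item schema (passage/question/choices/answer/task) and
# the ask_model callable are hypothetical placeholders, not the authors' code.
from collections import defaultdict
from typing import Callable

LETTERS = "ABCD"  # assumes four answer options per question

def format_prompt(item: dict) -> str:
    """Render one item as a zero-shot multiple-choice prompt."""
    options = "\n".join(f"{LETTERS[i]}. {c}" for i, c in enumerate(item["choices"]))
    return (
        f"Passage:\n{item['passage']}\n\n"
        f"Question: {item['question']}\n{options}\n"
        "Answer with a single letter (A-D)."
    )

def evaluate(items: list[dict], ask_model: Callable[[str], str]) -> dict[str, float]:
    """Return accuracy per task category (e.g. the benchmark's 11 subtasks)."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        reply = ask_model(format_prompt(item)).strip().upper()
        predicted = next((ch for ch in reply if ch in LETTERS), None)
        total[item["task"]] += 1
        if predicted == item["answer"]:
            correct[item["task"]] += 1
    return {task: correct[task] / total[task] for task in total}
```

Per-subtask accuracies computed this way correspond to the kind of numbers behind the reported 49.7%–69.7% range, though the paper's exact prompting and answer-extraction choices may differ from this sketch.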
Problem

Research questions and friction points this paper is trying to address.

Evaluating large language models on close reading tasks
Assessing interpretive reasoning in literary analysis
Benchmarking LLMs against human performance in literature
Innovation

Methods, ideas, or system contributions that make the work stand out.

First close reading benchmark for evaluating interpretive reasoning in LLMs
Three progressively more difficult task sets adapted from classroom data
Tests stylistic feature extraction, contextual knowledge retrieval, and style–context multi-hop reasoning