BOOKCOREF: Coreference Resolution at Book Scale

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing coreference resolution benchmarks (e.g., LitBank) contain only short- to medium-length documents, preventing evaluation on book-scale narratives spanning hundreds of thousands of tokens. To close this gap, the authors introduce BOOKCOREF, the first high-quality whole-book coreference benchmark, with documents averaging more than 200K tokens. Annotations are produced by an automatic pipeline combining contextual matching, coreference-chain consistency verification, and post-processing, enabling end-to-end book-level coreference annotation for the first time. Experiments demonstrate the robustness of the pipeline and the value of the resource: it enables current long-document coreference systems to gain up to +20 CoNLL-F1 points when evaluated on full books, while also revealing that models fail to match their short-document performance in this ultra-long-context setting. BOOKCOREF thus fills a fundamental gap in long-document coreference evaluation and supports research on long-range language modeling and discourse comprehension.

📝 Abstract
Coreference Resolution systems are typically evaluated on benchmarks containing small- to medium-scale documents. When it comes to evaluating long texts, however, existing benchmarks, such as LitBank, remain limited in length and do not adequately assess system capabilities at the book scale, i.e., when co-referring mentions span hundreds of thousands of tokens. To fill this gap, we first put forward a novel automatic pipeline that produces high-quality Coreference Resolution annotations on full narrative texts. Then, we adopt this pipeline to create the first book-scale coreference benchmark, BOOKCOREF, with an average document length of more than 200,000 tokens. We carry out a series of experiments showing the robustness of our automatic procedure and demonstrating the value of our resource, which enables current long-document coreference systems to gain up to +20 CoNLL-F1 points when evaluated on full books. Moreover, we report on the new challenges introduced by this unprecedented book-scale setting, highlighting that current models fail to deliver the same performance they achieve on smaller documents. We release our data and code to encourage research and development of new book-scale Coreference Resolution systems at https://github.com/sapienzanlp/bookcoref.
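The +20-point gain reported above is in CoNLL-F1, the standard coreference metric: the unweighted average of the F1 scores of three link-based and entity-based measures (MUC, B³, and CEAF_e). A minimal sketch of how the aggregate score is computed (function names are illustrative, not taken from the paper's codebase):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def conll_f1(muc_pr, b3_pr, ceafe_pr):
    """CoNLL score: unweighted mean of the MUC, B-cubed, and CEAF_e F1 scores.

    Each argument is a (precision, recall) pair for the respective metric.
    """
    return sum(f1(p, r) for p, r in (muc_pr, b3_pr, ceafe_pr)) / 3

# Example: a system with F1 = 0.8 on each sub-metric scores 80.0 CoNLL-F1 points
score = conll_f1((0.8, 0.8), (0.8, 0.8), (0.8, 0.8)) * 100
```

A "+20 CoNLL-F1" improvement therefore means the three-way average, reported on a 0-100 scale, rises by 20 points.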
Problem

Research questions and friction points this paper is trying to address.

Existing benchmarks, such as LitBank, are too short to adequately evaluate Coreference Resolution on long texts
A book-scale benchmark is needed to assess systems when co-referring mentions span hundreds of thousands of tokens
Current models fail to match their short-document performance on book-length documents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic pipeline producing high-quality coreference annotations on full narrative texts
BOOKCOREF, the first book-scale coreference benchmark, with an average document length of over 200,000 tokens
Enables current long-document systems to gain up to +20 CoNLL-F1 points when evaluated on full books