🤖 AI Summary
Existing coreference resolution benchmarks (e.g., LitBank) contain only short- to medium-length documents, hindering evaluation of model performance on book-scale narratives spanning hundreds of thousands of tokens. To address this gap, we introduce BOOKCOREF, the first high-quality whole-book narrative coreference benchmark, with documents averaging more than 200K tokens. To enable large-scale, high-accuracy annotation, we design an automated pipeline that integrates contextual matching, coreference-chain consistency verification, and post-processing optimization, enabling end-to-end book-level coreference annotation for the first time. Experiments show that current long-document coreference systems gain up to +20 CoNLL-F1 points when evaluated on full books, yet they still fall short of the performance they achieve on smaller documents, exposing open challenges in ultra-long-context understanding. BOOKCOREF thus fills a fundamental gap in long-document coreference evaluation and supports research on long-range language modeling and discourse comprehension.
📝 Abstract
Coreference Resolution systems are typically evaluated on benchmarks containing small- to medium-scale documents. When it comes to evaluating long texts, however, existing benchmarks, such as LitBank, remain limited in length and do not adequately assess system capabilities at the book scale, i.e., when co-referring mentions span hundreds of thousands of tokens. To fill this gap, we first put forward a novel automatic pipeline that produces high-quality Coreference Resolution annotations on full narrative texts. Then, we adopt this pipeline to create the first book-scale coreference benchmark, BOOKCOREF, with an average document length of more than 200,000 tokens. We carry out a series of experiments showing the robustness of our automatic procedure and demonstrating the value of our resource, which enables current long-document coreference systems to gain up to +20 CoNLL-F1 points when evaluated on full books. Moreover, we report on the new challenges introduced by this unprecedented book-scale setting, highlighting that current models fail to deliver the same performance they achieve on smaller documents. We release our data and code to encourage research and development of new book-scale Coreference Resolution systems at https://github.com/sapienzanlp/bookcoref.
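Both the summary and the abstract report gains in CoNLL-F1, which is the average of the F1 scores of three standard coreference metrics (MUC, B³, and CEAF_e). As background, here is a minimal sketch of the first two; the function names and the cluster representation are illustrative and not taken from the BOOKCOREF codebase, and CEAF_e is omitted for brevity.

```python
def _mention_to_cluster(clusters):
    """Map each mention to the cluster (as a frozenset) that contains it."""
    return {m: frozenset(c) for c in clusters for m in c}

def muc(key, response):
    """MUC link-based (precision, recall, F1) between gold and predicted clusterings."""
    def _recall(gold, pred):
        pred_map = _mention_to_cluster(pred)
        num = den = 0
        for cluster in gold:
            # p = number of partitions of this gold cluster induced by the
            # predicted clusters; mentions absent from the prediction each
            # count as their own singleton partition.
            parts = {pred_map[m] for m in cluster if m in pred_map}
            p = len(parts) + sum(1 for m in cluster if m not in pred_map)
            num += len(cluster) - p
            den += len(cluster) - 1
        return num / den if den else 0.0

    r = _recall(key, response)        # partition gold by prediction
    p = _recall(response, key)        # symmetric: partition prediction by gold
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def b_cubed(key, response):
    """B-cubed mention-based (precision, recall, F1)."""
    def _recall(gold, pred):
        pred_map = _mention_to_cluster(pred)
        total, score = 0, 0.0
        for cluster in gold:
            for m in cluster:
                # Mentions missing from the other side are treated as singletons
                # (one simple convention for "twinless" mentions).
                other = pred_map.get(m, frozenset({m}))
                score += len(frozenset(cluster) & other) / len(cluster)
                total += 1
        return score / total if total else 0.0

    r = _recall(key, response)
    p = _recall(response, key)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, with gold clusters `{a,b,c}, {d}` and predicted clusters `{a,b}, {c,d}`, MUC yields precision = recall = 0.5, while B³ yields recall 2/3 and precision 0.75; the official CoNLL score would average the three metrics' F1 values.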