🤖 AI Summary
Existing seq2seq coreference resolution models achieve strong performance but suffer from poor adaptability to incremental settings (e.g., dialogue), exhibiting limited flexibility and low inference efficiency. To address this, we propose an efficient, entity-aware incremental seq2seq approach: we introduce entity-level tokens to construct compressed input sequences and selectively mask redundant spans—demonstrating, for the first time, the feasibility of such token-level pruning for incremental coreference resolution. Furthermore, we integrate entity representation extraction with prefix recombination to enable efficient, streaming-compatible prefix compression. Experiments show that our method achieves only a 0.6-point CoNLL F1 drop versus the full-prefix baseline on OntoNotes while attaining a 1.8× compression ratio; on LitBank, it sets a new state-of-the-art. This work establishes a lightweight, real-time paradigm for incremental coreference resolution.
📝 Abstract
Seq2seq coreference models have introduced a new paradigm for coreference resolution by learning to generate text corresponding to coreference labels, without requiring task-specific parameters. While these models achieve new state-of-the-art performance, they do so at the cost of flexibility and efficiency. In particular, they do not efficiently handle incremental settings such as dialogue, where text must be processed sequentially. We propose a compressed representation in order to improve the efficiency of these methods in incremental settings. Our method works by extracting and reorganizing entity-level tokens, and discarding the majority of other input tokens. On OntoNotes, our best model achieves just 0.6 CoNLL F1 points below a full-prefix, incremental baseline while achieving a compression ratio of 1.8. On LitBank, where singleton mentions are annotated, it surpasses state-of-the-art performance. Our results indicate that discarding a large portion of tokens in seq2seq resolvers is a feasible strategy for incremental coreference resolution.
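To make the core idea concrete, here is a minimal toy sketch of entity-aware prefix compression: tokens inside previously identified entity spans (plus a small window of the most recent tokens) are kept, and the rest of the prefix is discarded. The function name, the `keep_window` parameter, and the span format are hypothetical illustrations, not the paper's actual token-selection or recombination procedure.

```python
# Hypothetical sketch: keep entity-level tokens, drop most of the prefix.
def compress_prefix(tokens, entity_spans, keep_window=2):
    """Keep tokens inside annotated entity spans plus the last
    `keep_window` tokens; discard the rest of the input prefix."""
    keep = set()
    for start, end in entity_spans:       # entity-level tokens survive
        keep.update(range(start, end + 1))
    # retain a small recent-context window for the incremental decoder
    keep.update(range(max(0, len(tokens) - keep_window), len(tokens)))
    return [t for i, t in enumerate(tokens) if i in keep]

tokens = "Alice met Bob at the park and she greeted him warmly".split()
entity_spans = [(0, 0), (2, 2), (7, 7), (9, 9)]  # Alice, Bob, she, him
compressed = compress_prefix(tokens, entity_spans)
ratio = len(tokens) / len(compressed)
# compressed -> ['Alice', 'Bob', 'she', 'him', 'warmly'], ratio -> 2.2
```

In this toy example the 11-token prefix shrinks to 5 tokens, a 2.2× compression; the paper reports an average ratio of 1.8 on OntoNotes with only a 0.6 CoNLL F1 drop.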