🤖 AI Summary
This study addresses multilingual coreference resolution, aiming to improve models' recognition of coreferential expressions across languages. Building on CorefUD 1.1 (12 languages, 17 datasets), we propose an end-to-end neural framework featuring: (i) a novel “heads-only” headword modeling strategy coupled with collaborative singleton detection to strengthen awareness of referential structure; (ii) syntactic enhancement via dependency-parsing features and a Span2Head architecture; (iii) an overlapping sliding-window segmentation scheme for effective long-document modeling; and (iv) a dynamic singleton classification head, together with a systematic evaluation of zero-shot cross-lingual transfer. Experiments show that our model significantly outperforms the best comparably sized CRAC 2023 system on CorefUD 1.1, with substantial gains in average multilingual CoNLL-F1, and generalizes robustly to low-resource languages.
📝 Abstract
Coreference resolution, the task of identifying expressions in text that refer to the same entity, is a critical component of many natural language processing applications. This paper presents a novel end-to-end neural coreference resolution system built on the CorefUD 1.1 collection, which spans 17 datasets across 12 languages. The model extends the standard end-to-end neural coreference architecture. We first establish baseline models, including monolingual and cross-lingual variants, and then propose several extensions to improve performance across diverse linguistic contexts: cross-lingual training, incorporation of syntactic information, a Span2Head model for optimized headword prediction, and advanced singleton modeling. We also experiment with headword span representation and with long-document modeling through overlapping segments. The proposed extensions, particularly the heads-only approach, singleton modeling, and long-document prediction, significantly improve performance on most datasets. Zero-shot cross-lingual experiments further highlight the potential and limitations of cross-lingual transfer in coreference resolution. These findings contribute to the development of robust and scalable systems for multilingual coreference resolution. Finally, we evaluate our model on the CorefUD 1.1 test set and surpass the best comparably sized model from the CRAC 2023 shared task by a large margin.
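To make the long-document strategy concrete: overlapping sliding-window segmentation splits a document into fixed-size windows that share a stretch of tokens, so mentions near a segment boundary still appear with full context in at least one window. The sketch below is illustrative only, not the paper's implementation; the function name and the window/overlap sizes are assumptions for demonstration.

```python
def overlapping_segments(tokens, window=512, overlap=128):
    """Split a token sequence into overlapping fixed-size windows.

    Consecutive windows share `overlap` tokens, so a mention that
    straddles one window's edge is fully contained in its neighbor.
    Sizes here are illustrative, not the paper's actual settings.
    """
    if overlap >= window:
        raise ValueError("overlap must be smaller than the window size")
    stride = window - overlap  # how far each new window advances
    segments = []
    start = 0
    while start < len(tokens):
        segments.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # the final window already covers the document tail
        start += stride
    return segments

# Toy example: 10 tokens, window 4, overlap 2 -> windows advance by 2
segs = overlapping_segments(list(range(10)), window=4, overlap=2)
# segs == [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Predictions from the overlapping regions can then be merged (e.g. by preferring the window where a mention is farther from the boundary), which is what lets the model score coreference links in documents far longer than the encoder's context size.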