🤖 AI Summary
Document-level relation extraction (DocRE) requires modeling interactions between entities across sentences, and existing approaches, largely built on the ATLOP architecture, remain underexplored in zero-shot and few-shot settings. Motivated by GLiNER, which showed that a compact NER model can outperform much larger Large Language Models, the authors introduce GLiDRE, a new document-level relation extraction model built on GLiNER's key ideas. Benchmarked against state-of-the-art models on the Re-DocRED dataset across various data settings, GLiDRE achieves state-of-the-art performance in few-shot scenarios. The code is publicly available.
📝 Abstract
Relation Extraction (RE) is a fundamental task in Natural Language Processing, and its document-level variant poses significant challenges due to the need to model complex interactions between entities across sentences. Current approaches, largely based on the ATLOP architecture, are commonly evaluated on benchmarks like DocRED and Re-DocRED. However, their performance in zero-shot or few-shot settings remains largely underexplored due to the task's complexity. Recently, the GLiNER model has shown that a compact NER model can outperform much larger Large Language Models. With a similar motivation, we introduce GLiDRE, a new model for document-level relation extraction that builds on the key ideas of GLiNER. We benchmark GLiDRE against state-of-the-art models across various data settings on the Re-DocRED dataset. Our results demonstrate that GLiDRE achieves state-of-the-art performance in few-shot scenarios. Our code is publicly available.