DocIE@XLLM25: In-Context Learning for Information Extraction using Fully Synthetic Demonstrations

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-quality annotated data is scarce for zero- and few-shot document-level joint entity and relation extraction. Method: This paper proposes the first fully automated, human-annotation-free framework for synthetic data generation and retrieval-augmented in-context learning. It leverages large language models (LLMs) to build a synthetic dataset of over 5K Wikipedia abstracts containing roughly 59K entities and 30K relation triples. The framework integrates reasoning-optimized LLMs, dynamic similarity-based retrieval, and an embedding-driven demonstration library to enable retrieval-enhanced zero-shot joint extraction. Contribution/Results: Experiments on the DocIE shared task demonstrate substantial improvements in zero-shot performance on long documents, while also empirically validating the effectiveness of synthetic data for structured information extraction and exposing its key limitations.
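The retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the library entries, embeddings, and demonstration strings are toy stand-ins, and the paper uses real document embeddings over its synthetic Wikipedia abstracts.

```python
import math

# Hypothetical demonstration library: each entry pairs a document
# embedding with its annotated demonstration (entities + relations).
# The 3-d vectors are toy values for illustration only.
LIBRARY = [
    ([1.0, 0.0, 0.0], "demo A: entities/relations for doc A"),
    ([0.0, 1.0, 0.0], "demo B: entities/relations for doc B"),
    ([0.7, 0.7, 0.0], "demo C: entities/relations for doc C"),
]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_demonstrations(query_embedding, library, k=2):
    """Return the k demonstrations most similar to the query document,
    to be prepended to the extraction prompt as in-context examples."""
    ranked = sorted(library,
                    key=lambda entry: cosine(query_embedding, entry[0]),
                    reverse=True)
    return [demo for _, demo in ranked[:k]]

# A new document whose embedding lies closest to doc A, then doc C.
demos = retrieve_demonstrations([0.9, 0.1, 0.0], LIBRARY, k=2)
print(demos)
```

In a real pipeline the query embedding would come from the same encoder used to index the demonstration library, so that similarity scores are comparable.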

📝 Abstract
Large, high-quality annotated corpora remain scarce for document-level entity and relation extraction in zero-shot or few-shot settings. In this paper, we present a fully automatic, LLM-based pipeline for synthetic data generation and in-context learning for document-level entity and relation extraction. In contrast to existing approaches that rely on manually annotated demonstrations or direct zero-shot inference, our method combines synthetic data generation with retrieval-based in-context learning, using a reasoning-optimized language model. This allows us to build a high-quality demonstration database without manual annotation and to dynamically retrieve relevant examples at inference time. Using this approach, we produce a synthetic dataset of over 5k Wikipedia abstracts with approximately 59k entities and 30k relation triples. Finally, we evaluate in-context learning performance on the DocIE shared task, extracting entities and relations from long documents in a zero-shot setting. We find that in-context joint entity and relation extraction at document level remains a challenging task, even for state-of-the-art large language models.
Problem

Research questions and friction points this paper is trying to address.

Addresses scarcity of annotated corpora in document-level extraction
Proposes synthetic data generation for zero-shot relation extraction
Evaluates in-context learning performance on long documents
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based synthetic data generation pipeline
Retrieval-based in-context learning method
Reasoning-optimized language model usage
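The first innovation, the synthetic data generation pipeline, can be sketched at a high level: prompt an LLM to write a Wikipedia-style abstract together with its entity and relation annotations, then parse and validate the structured output. The prompt template, JSON schema, and the stubbed model response below are illustrative assumptions, not the paper's actual prompts or data.

```python
import json

def build_generation_prompt(topic):
    # Hypothetical prompt; the paper's actual templates may differ.
    return (
        f"Write a short Wikipedia-style abstract about {topic}. "
        "Return JSON with keys 'text', 'entities' "
        "(list of {mention, type}) and 'relations' "
        "(list of {head, relation, tail})."
    )

def parse_sample(llm_output):
    """Parse and sanity-check one synthetic sample from the LLM."""
    sample = json.loads(llm_output)
    if not {"text", "entities", "relations"} <= sample.keys():
        raise ValueError("malformed synthetic sample")
    return sample

# Stubbed response standing in for a real LLM call; the actual
# pipeline would send build_generation_prompt(...) to a model.
mock_response = json.dumps({
    "text": "Marie Curie was a physicist born in Warsaw.",
    "entities": [{"mention": "Marie Curie", "type": "Person"},
                 {"mention": "Warsaw", "type": "Location"}],
    "relations": [{"head": "Marie Curie",
                   "relation": "place_of_birth",
                   "tail": "Warsaw"}],
})
sample = parse_sample(mock_response)
print(len(sample["entities"]), len(sample["relations"]))
```

Validated samples like this would then be embedded and added to the demonstration library used for retrieval at inference time.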