Do not be greedy, Think Twice: Sampling and Selection for Document-level Information Extraction

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of greedy decoding in document-level information extraction (DocIE), which constrains both the diversity and accuracy of large language model outputs. To overcome this, the authors propose ThinkTwice, a framework that generates multiple candidate outputs through sampling and selects the optimal one using two mechanisms: unsupervised consistency-based selection and supervised reward-model-based selection. Additionally, they introduce a rejection sampling strategy that leverages reasoning trajectories to construct silver-labeled training data. Experimental results demonstrate that ThinkTwice significantly outperforms greedy decoding and state-of-the-art baselines across multiple DocIE benchmarks, validating the effectiveness of the “sample-then-select” paradigm in enhancing both performance and robustness.

📝 Abstract
Document-level Information Extraction (DocIE) aims to produce an output template with the entities and relations of interest occurring in the given document. Standard practice is to prompt decoder-only LLMs with greedy decoding to avoid output variability. Rather than treating this variability as a limitation, we show that sampling can produce substantially better solutions than greedy decoding, especially with reasoning models. We thus propose ThinkTwice, a sampling and selection framework in which the LLM generates multiple candidate templates for a given document, and a selection module chooses the most suitable one. We introduce both an unsupervised method that exploits agreement across generated outputs, and a supervised selection method using reward models trained on labeled DocIE data. To address the scarcity of gold reasoning trajectories for DocIE, we propose a rejection-sampling-based method to generate silver training data that pairs output templates with reasoning traces. Our experiments validate both unsupervised and supervised ThinkTwice, which consistently outperform greedy baselines and the state of the art.
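The unsupervised variant can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it stands in for N sampled LLM outputs with hand-written candidate "templates", each represented as a set of (head, relation, tail) triples, and selects the candidate whose extractions agree most with the other samples (average Jaccard overlap as a stand-in consistency measure).

```python
def consistency_score(candidate, others):
    """Average Jaccard overlap between one candidate and the remaining samples."""
    if not others:
        return 0.0
    total = 0.0
    for other in others:
        union = candidate | other
        total += len(candidate & other) / len(union) if union else 1.0
    return total / len(others)

def select_most_consistent(candidates):
    """Pick the sampled template that agrees most with the rest."""
    best, best_score = None, -1.0
    for i, cand in enumerate(candidates):
        others = candidates[:i] + candidates[i + 1:]
        score = consistency_score(cand, others)
        if score > best_score:
            best, best_score = cand, score
    return best

# Toy example: three sampled templates; the triple supported by all
# samples dominates, so the second candidate is selected.
samples = [
    {("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "field", "physics")},
    {("Marie Curie", "born_in", "Warsaw")},
    {("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "field", "chemistry")},
]
print(select_most_consistent(samples))
```

The supervised variant would replace `consistency_score` with a trained reward model that scores each candidate directly.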
Problem

Research questions and friction points this paper is trying to address.

Document-level Information Extraction
greedy decoding
output variability
LLM decoding
information extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

sampling
selection
document-level information extraction
reasoning trajectories
reward modeling
Mikel Zubillaga
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
Oscar Sainz
University of the Basque Country (UPV/EHU)
Computer Science · Artificial Intelligence · Natural Language Processing · Information Extraction
Oier López de Lacalle
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
Eneko Agirre
HiTZ Center - Ixa, University of the Basque Country UPV/EHU