🤖 AI Summary
This work addresses the limitations of greedy decoding in document-level information extraction (DocIE), which constrains both the diversity and accuracy of large language model outputs. To overcome this, the authors propose ThinkTwice, a framework that generates multiple candidate outputs through sampling and selects the optimal one using two mechanisms: unsupervised consistency-based selection and supervised reward-model-based selection. Additionally, they introduce a rejection sampling strategy that leverages reasoning trajectories to construct silver-labeled training data. Experimental results demonstrate that ThinkTwice significantly outperforms greedy decoding and state-of-the-art baselines across multiple DocIE benchmarks, validating the effectiveness of the “sample-then-select” paradigm in enhancing both performance and robustness.
📝 Abstract
Document-level Information Extraction (DocIE) aims to produce an output template with the entities and relations of interest occurring in the given document. Standard practice is to prompt decoder-only LLMs with greedy decoding to avoid output variability. Rather than treating this variability as a limitation, we show that sampling can produce substantially better solutions than greedy decoding, especially when using reasoning models. We thus propose ThinkTwice, a sampling and selection framework in which the LLM generates multiple candidate templates for a given document, and a selection module chooses the most suitable one. We introduce both an unsupervised method that exploits agreement across generated outputs, and a supervised selection method using reward models trained on labeled DocIE data. To address the scarcity of gold reasoning trajectories for DocIE, we propose a rejection-sampling-based method to generate silver training data that pairs output templates with reasoning traces. Our experiments show that both unsupervised and supervised ThinkTwice consistently outperform greedy baselines and the state of the art.
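The unsupervised "agreement across generated outputs" idea can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): each sampled candidate is represented as a set of extracted triples, and we pick the candidate with the highest average Jaccard overlap against the other samples.

```python
def agreement_score(candidate, others):
    """Average Jaccard similarity between this candidate's triples
    and each other sampled candidate's triples."""
    if not others:
        return 0.0
    cand = set(candidate)
    return sum(
        len(cand & set(o)) / max(len(cand | set(o)), 1) for o in others
    ) / len(others)

def select_by_consistency(candidates):
    """Pick the sampled template that agrees most with the rest
    (an unsupervised consistency-based selection sketch)."""
    best = max(
        range(len(candidates)),
        key=lambda i: agreement_score(
            candidates[i], candidates[:i] + candidates[i + 1:]
        ),
    )
    return candidates[best]

# Toy example: three sampled candidate templates, each a list of
# (head entity, relation, tail entity) triples. Entities/relations
# here are invented for illustration.
samples = [
    [("Acme", "acquired", "Beta"), ("Acme", "based_in", "Paris")],
    [("Acme", "acquired", "Beta")],
    [("Acme", "acquired", "Beta"), ("Acme", "based_in", "Paris")],
]
print(select_by_consistency(samples))
# → [('Acme', 'acquired', 'Beta'), ('Acme', 'based_in', 'Paris')]
```

The supervised variant would instead score each candidate with a trained reward model; the selection loop stays the same, only the scoring function changes.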