SafePassage: High-Fidelity Information Extraction with Black Box LLMs

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Black-box large language models (LLMs) make information extraction (IE) easy to deploy but unreliable, because they can hallucinate outputs that are inconsistent with the source document. To address this, the authors propose SafePassage, a framework that improves the fidelity of black-box LLM extractions by generating verifiable, document-grounded "safe passages." The method comprises three stages: (1) an LLM extractor that produces structured entities together with their supporting contexts, (2) a string-based global aligner that enforces textual consistency between each context and the document, and (3) a scoring model that flags unsafe passages. Rather than attempting end-to-end hallucination suppression, SafePassage verifies each extraction against its grounded context, and shows that a compact transformer encoder fine-tuned on a small number of task-specific examples can outperform an LLM scoring model. Experiments report up to an 85% reduction in hallucinations, high agreement with human judgments of extraction quality (Cohen's κ = 0.91), and that the required annotations can be collected in as little as 1-2 hours.

📝 Abstract
Black box large language models (LLMs) make information extraction (IE) easy to configure, but hard to trust. Unlike traditional information extraction pipelines, the information "extracted" is not guaranteed to be grounded in the document. To prevent this, the paper introduces the notion of a "safe passage": context generated by the LLM that is both grounded in the document and consistent with the extracted information. This is operationalized via a three-step pipeline, SafePassage, which consists of: (1) an LLM extractor that generates structured entities and their contexts from a document, (2) a string-based global aligner, and (3) a scoring model. Results show that using these three parts in conjunction reduces hallucinations by up to 85% on information extraction tasks with minimal risk of flagging non-hallucinations. High agreement between the SafePassage pipeline and human judgments of extraction quality means that the pipeline can also be used to evaluate LLMs. Surprisingly, results also show that a transformer encoder fine-tuned on a small number of task-specific examples can outperform an LLM scoring model at flagging unsafe passages. These annotations can be collected in as little as 1-2 hours.
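The abstract does not detail how the string-based global aligner decides whether a passage is grounded. The sketch below shows one plausible grounding check using Python's standard `difflib` to align an LLM-generated passage against the source document; the function names and the 0.9 threshold are illustrative assumptions, not the authors' implementation.

```python
from difflib import SequenceMatcher

def grounding_score(passage: str, document: str) -> float:
    """Fraction of the passage's characters that align to spans of the document.

    SequenceMatcher finds matching blocks between the two strings; a passage
    copied verbatim from the document scores ~1.0, while invented
    (hallucinated) text scores much lower.
    """
    matcher = SequenceMatcher(None, passage.lower(), document.lower(), autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(passage), 1)

def is_safe_passage(passage: str, document: str, threshold: float = 0.9) -> bool:
    # The threshold is a hypothetical choice; in practice it would be tuned
    # against annotated examples.
    return grounding_score(passage, document) >= threshold
```

For example, given the document "The contract was signed on 4 March 2021 by both parties.", the verbatim passage "signed on 4 March 2021" scores 1.0, while a fabricated passage like "terminated on 9 July 1999" scores well below the threshold.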
Problem

Research questions and friction points this paper is trying to address.

Reducing hallucinations in black box LLM information extraction
Ensuring extracted information is grounded in source documents
Providing verifiable context for LLM-generated extractions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generating grounded safe passages for extraction
Aligning extracted entities with document strings
Scoring model flags hallucinations with fine-tuned encoder
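The three contributions above compose into a filter over LLM extractions. The sketch below wires hypothetical stand-ins for each stage (in the paper, the extractor is an LLM and the scorer a fine-tuned encoder; here both are plain callables) to show the intended data flow. All names and thresholds are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Extraction:
    entity: str   # structured value produced by the LLM extractor
    passage: str  # LLM-generated supporting context (candidate "safe passage")

def safepassage_filter(
    document: str,
    extractor: Callable[[str], list[Extraction]],
    align_score: Callable[[str, str], float],  # stage 2: string-based global aligner
    scorer: Callable[[Extraction], float],     # stage 3: fine-tuned encoder score
    align_threshold: float = 0.9,              # illustrative thresholds, not the paper's
    score_threshold: float = 0.5,
) -> tuple[list[Extraction], list[Extraction]]:
    """Split extractions into (kept, flagged-as-unsafe)."""
    kept, flagged = [], []
    for ex in extractor(document):
        grounded = align_score(ex.passage, document) >= align_threshold
        trusted = scorer(ex) >= score_threshold
        (kept if grounded and trusted else flagged).append(ex)
    return kept, flagged
```

An extraction survives only if its passage both aligns with the document and is accepted by the scoring model, which is what keeps the risk of flagging non-hallucinations low: the two checks must agree before anything is discarded.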
Joe Barrow
Pattern Data
Natural Language Processing
Raj Patel
Pattern Data
Misha Kharkovski
Pattern Data
Ben Davies
Pattern Data
Ryan Schmitt
Pattern Data