PrefixNLI: Detecting Factual Inconsistencies as Soon as They Arise

📅 2025-11-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To enable real-time detection of factual inconsistencies during autoregressive text generation, this paper introduces prefix-level entailment detection, a natural language inference (NLI) task that assesses the entailment relationship between an incrementally generated text prefix and its supporting evidence. The authors provide dedicated training and evaluation datasets for this task and use them to train MiniTruePrefixes, a lightweight specialized entailment model, which they couple with a controlled decoding framework to verify prefix–evidence entailment on the fly during generation. Experiments show that MiniTruePrefixes outperforms comparable baseline NLI models by 5–14 F1 points on prefix-level entailment. When it guides decoding in LLaMA-3.2-3B-Instruct, factual consistency in abstractive summarization improves substantially: the 3B model matches the faithfulness and runtime of the 8B model from the same family while using only half the memory. This establishes a fine-grained, low-overhead mechanism for monitoring factual consistency throughout the generation process.

📝 Abstract
Natural Language Inference (NLI) models have been used in various ways to improve the factuality of LLM outputs. This is typically done by applying an NLI model to judge whether the model output is entailed from the supposed evidence, triggering some corrective actions, such as beam reranking at inference time or RL rewards during training. While NLI models are trained to detect factual inconsistencies over complete sentences, decisions in the common autoregressive generation architecture are made for each evolving text prefix, during decoding. Addressing this setting, we generalize the entailment detection task to apply over arbitrary text prefixes, and suggest its utility for improving generation faithfulness. Providing suitable evaluation and training datasets for this task, we train MiniTruePrefixes, a novel specialized model that better detects factual inconsistencies over text prefixes, outperforming comparable baseline NLI models by 5-14 F1 points in prefix-level entailment. We further demonstrate that integrating MiniTruePrefixes into a controlled decoding framework substantially improves factual consistency in abstractive summarization. When guided by MiniTruePrefixes, LLaMA-3.2-3B-Instruct matches the faithfulness and runtime of the 8B model from the same model family, while using only half the memory.
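The decoding loop the abstract describes can be sketched as beam reranking driven by a prefix-level entailment score. This is a minimal illustrative sketch, not the paper's implementation: `score_entailment` is a toy lexical-overlap stand-in for a trained prefix-level NLI model such as MiniTruePrefixes, and the interpolation weight `alpha` is a hypothetical hyperparameter.

```python
def score_entailment(evidence: str, prefix: str) -> float:
    """Toy prefix-entailment score: fraction of prefix tokens appearing in
    the evidence. A real system would query a trained prefix-level NLI
    model (e.g. MiniTruePrefixes) here instead."""
    evidence_tokens = set(evidence.lower().split())
    prefix_tokens = prefix.lower().split()
    if not prefix_tokens:
        return 1.0
    supported = sum(tok in evidence_tokens for tok in prefix_tokens)
    return supported / len(prefix_tokens)


def rerank_beam(evidence: str, prefix: str, candidates: list[str],
                lm_scores: list[float], alpha: float = 0.5) -> list[str]:
    """Rerank candidate continuations by interpolating the language-model
    score with the entailment score of each extended prefix. `alpha` trades
    fluency against faithfulness (an illustrative choice, not from the paper)."""
    scored = []
    for token, lm in zip(candidates, lm_scores):
        extended = f"{prefix} {token}".strip()
        ent = score_entailment(evidence, extended)
        scored.append((alpha * lm + (1 - alpha) * ent, token))
    # Highest combined score first: unsupported continuations drop in rank.
    return [token for _, token in sorted(scored, reverse=True)]
```

For example, with evidence "the cat sat on the mat" and the partial output "the cat", a continuation like "flew" scores lower than "sat" because it is unsupported by the evidence, so entailment-guided reranking demotes it even when the language model rates both equally.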
Problem

Research questions and friction points this paper is trying to address.

Detecting factual inconsistencies in evolving text prefixes during generation
Generalizing entailment detection to improve faithfulness of autoregressive outputs
Developing specialized models for prefix-level factual consistency evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects factual inconsistencies in text prefixes
Trains specialized model for prefix-level entailment
Integrates model into controlled decoding framework