🤖 AI Summary
This work addresses factual hallucination in large language models, a problem that existing approaches mitigate inadequately: they either rely heavily on external post-hoc verification or map uncertainty directly to abstention, which makes models overly conservative. The authors propose VeriFY, a training-time framework built around a self-consistency verification mechanism. VeriFY guides the model to generate an initial answer, construct and answer a probing verification query, and judge the two for consistency, thereby learning to recognize its own uncertainty and decide whether to answer or abstain. Training uses structured verification trajectories with stage-wise loss masking, which preserves supervision over verification behavior while avoiding reinforcement of hallucinated content. Experimental results show that VeriFY reduces factual hallucination rates by 9.7%–53.3% across multiple model families and scales, at only a minor cost in recall (0.4%–5.7%), and generalizes across datasets even when trained on a single source.
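The verification trajectory described above can be sketched as a simple control flow. This is a minimal illustration, not the authors' implementation: `generate` is a hypothetical stand-in for a call to the fine-tuned model (stubbed here with canned responses so the flow is runnable), and the probe template and consistency check are toy assumptions.

```python
# Toy sketch of a VeriFY-style self-verification trajectory.
# `generate` is a hypothetical stand-in for an LLM call, stubbed with
# canned responses so the four stages can be executed end to end.

def generate(prompt: str) -> str:
    canned = {
        "Q: capital of France?": "Paris",
        "Verify: Which country has Paris as its capital?": "France",
    }
    return canned.get(prompt, "unknown")

def verify_and_answer(question: str) -> str:
    # Stage 1: produce an initial answer.
    answer = generate(question)
    # Stage 2: construct and answer a probing verification query.
    probe = f"Verify: Which country has {answer} as its capital?"
    probe_answer = generate(probe)
    # Stage 3: consistency judgment -- does the probe's answer point
    # back to the subject of the original question?
    consistent = probe_answer.lower() in question.lower()
    # Stage 4: answer or abstain based on the judgment.
    return answer if consistent else "I am not sure."

print(verify_and_answer("Q: capital of France?"))    # -> Paris
print(verify_and_answer("Q: capital of Atlantis?"))  # -> I am not sure.
```

A known fact passes the round-trip check and is answered; an unsupported one fails the check and triggers abstention, which is the behavior the training procedure is meant to instill.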
📝 Abstract
Factual hallucination remains a central challenge for large language models (LLMs). Existing mitigation approaches primarily rely on either external post-hoc verification or mapping uncertainty directly to abstention during fine-tuning, often resulting in overly conservative behavior. We propose VeriFY, a training-time framework that teaches LLMs to reason about factual uncertainty through consistency-based self-verification. VeriFY augments training with structured verification traces that guide the model to produce an initial answer, generate and answer a probing verification query, issue a consistency judgment, and then decide whether to answer or abstain. To address the risk of reinforcing hallucinated content when training on augmented traces, we introduce a stage-level loss masking approach that excludes hallucinated answer stages from the training objective while preserving supervision over verification behavior. Across multiple model families and scales, VeriFY reduces factual hallucination rates by 9.7%–53.3%, with only modest reductions in recall (0.4%–5.7%), and generalizes across datasets when trained on a single source. The source code, training data, and trained model checkpoints will be released upon acceptance.
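The stage-level loss masking idea can be illustrated with a short sketch. The assumptions here are mine, not the paper's schema: each training trace is a token sequence segmented into labeled stage spans, and masking is done by setting labels to the ignore index `-100`, the convention common language-model trainers use to drop tokens from the cross-entropy loss. Tokens of a hallucinated answer stage are masked out, while verification-stage tokens keep full supervision.

```python
# Sketch of stage-level loss masking over an augmented training trace.
# Stage names and the `hallucinated` flag are illustrative assumptions.

IGNORE_INDEX = -100  # conventional ignore index for LM cross-entropy loss

def mask_labels(token_ids, stages):
    """stages: list of (start, end, stage_name, hallucinated) spans."""
    labels = list(token_ids)
    for start, end, stage, hallucinated in stages:
        # Drop only hallucinated answer content from the objective;
        # verification and decision stages stay fully supervised.
        if stage == "answer" and hallucinated:
            for i in range(start, end):
                labels[i] = IGNORE_INDEX
    return labels

# Trace layout: answer tokens [0, 3), verification tokens [3, 6),
# final decision tokens [6, 8). The answer was hallucinated.
tokens = [11, 12, 13, 21, 22, 23, 31, 32]
stages = [(0, 3, "answer", True),
          (3, 6, "verification", False),
          (6, 8, "decision", False)]
print(mask_labels(tokens, stages))
# -> [-100, -100, -100, 21, 22, 23, 31, 32]
```

The point of the design is that the model still receives gradient signal for *how to verify and abstain* on hallucinated examples, without ever being trained to reproduce the hallucinated answer itself.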