DSVD: Dynamic Self-Verify Decoding for Faithful Generation in Large Language Models

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently generate hallucinated or factually incorrect content. Existing mitigation strategies either overlook the model’s intrinsic self-correction capability or rely on costly post-hoc verification. To address this, the paper proposes Dynamic Self-Verify Decoding (DSVD), a novel decoding framework that detects and rectifies hallucinations *during* generation. DSVD introduces a multi-branch parallel self-verification architecture, integrating a lightweight token-level quality evaluator with a dynamic rollback mechanism, thereby embedding self-verification directly into the decoding process. Crucially, DSVD is compatible with mainstream faithful decoding methods, requires no additional training, and operates without external verifiers. Evaluated across five benchmark datasets, DSVD significantly improves question-answering truthfulness and FActScore factual accuracy, achieving a favorable trade-off between reliability and inference efficiency.

📝 Abstract
The reliability of large language models remains a critical challenge, particularly due to their susceptibility to hallucinations and factual inaccuracies during text generation. Existing solutions either underutilize models' self-correction with preemptive strategies or use costly post-hoc verification. To further explore the potential of real-time self-verification and correction, we present Dynamic Self-Verify Decoding (DSVD), a novel decoding framework that enhances generation reliability through real-time hallucination detection and efficient error correction. DSVD integrates two key components: (1) a parallel self-verification architecture for continuous quality assessment, and (2) a dynamic rollback mechanism for targeted error recovery. Extensive experiments across five benchmarks demonstrate DSVD's effectiveness, achieving significant improvement in truthfulness (Question-Answering) and factual accuracy (FActScore). Results show that DSVD can be further incorporated with existing faithful decoding methods to achieve stronger performance. Our work establishes that real-time self-verification during generation offers a viable path toward more trustworthy language models without sacrificing practical deployability.
Problem

Research questions and friction points this paper is trying to address.

Large language model outputs remain unreliable due to hallucinations and factual inaccuracies introduced during generation.
Existing mitigation either underuses the model's intrinsic self-correction or relies on costly post-hoc verification.
How to perform self-verification and error correction in real time, during decoding itself.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time hallucination detection and correction
Parallel self-verification for continuous quality assessment
Dynamic rollback mechanism for targeted error recovery
YiQiu Guo
Fudan University, Shanghai AI Laboratory, Shanghai Jiao Tong University
Yuchen Yang
Shanghai AI Laboratory, Shanghai Jiao Tong University, University of Science and Technology of China
Zhe Chen
Shanghai AI Laboratory, Shanghai Jiao Tong University
Pingjie Wang
Shanghai Jiao Tong University
Model Compression, Inference Acceleration
Yusheng Liao
Shanghai Jiao Tong University, Shanghai Artificial Intelligence Laboratory
Large Language Models, Clinical NLP, Agent, Reasoning
Ya Zhang
Shanghai Jiao Tong University
Machine Learning, Computer Vision, Medical Imaging
Yanfeng Wang
Shanghai Jiao Tong University
Yu Wang
Shanghai AI Laboratory, Shanghai Jiao Tong University