🤖 AI Summary
In knowledge-grounded dialogue, balancing factual consistency with response diversity remains challenging: over-reliance on external knowledge often yields repetitive outputs, while stochastic decoding undermines factual accuracy. This paper proposes DoGe, a confidence-aware decoding framework that dynamically orchestrates internal parametric knowledge and external retrieved knowledge based on the model's real-time factual confidence in the current generation segment, without fine-tuning or additional training. DoGe integrates internal knowledge modeling, external knowledge retrieval, and confidence-sensitive gating into a single controllable decoding process. Evaluated on three benchmark datasets, DoGe significantly outperforms existing decoding baselines, improving factual accuracy (FActScore) by 8.7% and lexical diversity (Dist-2) by 12.3%, and thereby unifies factual reliability with generative diversity.
📝 Abstract
Grounding responses in external knowledge can enhance factuality in dialogue generation. However, excessive emphasis on such knowledge can yield responses that lack engaging and diverse expression. Current approaches increase diversity by introducing randomness into sampling; nevertheless, such sampling methods can undermine factuality. In this study, to advance creativity without relying on questionable randomness and to subtly reconcile factuality and diversity within the source-grounded paradigm, a novel method named DoGe is proposed. DoGe dynamically alternates between internal parametric knowledge and external source knowledge based on the model's factual confidence. Extensive experiments on three widely used datasets show that DoGe not only enhances response diversity but also maintains factuality, significantly surpassing various other decoding-strategy baselines.
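To make the gating idea concrete, here is a minimal sketch of confidence-gated decoding in the spirit described above. This is not the authors' implementation: the confidence measure (peak softmax probability), the threshold value, and the toy logit inputs are all assumptions made for illustration.

```python
# Hypothetical sketch of confidence-gated source selection, in the spirit
# of DoGe. The confidence proxy (max softmax probability) and the fixed
# threshold are illustrative assumptions, not the paper's exact algorithm.
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]


def confidence(probs):
    """Factual-confidence proxy: peak probability of the distribution."""
    return max(probs)


def gated_step(internal_logits, grounded_logits, threshold=0.6):
    """Choose the knowledge source for the current generation segment.

    If the model is confident in its parametric prediction, keep the
    internal distribution (freer, more diverse wording); otherwise fall
    back to the externally grounded distribution (more factual).
    """
    p_internal = softmax(internal_logits)
    if confidence(p_internal) >= threshold:
        return "internal", p_internal
    return "external", softmax(grounded_logits)


# Toy example: a peaked internal distribution passes the gate,
# while a flat (uncertain) one defers to the retrieved knowledge.
src1, _ = gated_step([5.0, 0.1, 0.1], [2.0, 2.0, 0.1])
src2, _ = gated_step([1.0, 1.0, 1.0], [4.0, 0.1, 0.1])
print(src1, src2)  # internal external
```

In a real decoder, `internal_logits` would come from the model conditioned only on the dialogue context, and `grounded_logits` from the model conditioned additionally on retrieved knowledge; the gate would be re-evaluated per segment rather than per call.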