Odysseus Navigates the Sirens' Song: Dynamic Focus Decoding for Factual and Diverse Open-Ended Text Generation

📅 2025-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face a fundamental trade-off between factual accuracy and lexical diversity in open-ended text generation. To address this, we propose Dynamic Focus Decoding (DFD), a plug-and-play decoding method requiring no additional training or external knowledge. DFD's core innovation lies in adaptively modulating the decoding focus within a single generation step—based on inter-layer probability distribution discrepancies—to jointly optimize factuality and diversity. It supports modular integration of domain-specific knowledge and is fully compatible with standard sampling strategies such as top-k and nucleus (top-p) sampling. Evaluated across seven diverse benchmark datasets, DFD consistently improves both factual accuracy and generation diversity, with negligible computational overhead and seamless zero-shot deployment.

📝 Abstract
Large Language Models (LLMs) are increasingly required to generate text that is both factually accurate and diverse across various open-ended applications. However, current stochastic decoding methods struggle to balance such objectives. We introduce Dynamic Focus Decoding (DFD), a novel plug-and-play stochastic approach that resolves this trade-off without requiring additional data, knowledge, or models. DFD adaptively adjusts the decoding focus based on distributional differences across layers, leveraging the modular and hierarchical nature of factual knowledge within LLMs. This dynamic adjustment improves factuality in knowledge-intensive decoding steps and promotes diversity in less knowledge-reliant steps. DFD can be easily integrated with existing decoding methods, enhancing both factuality and diversity with minimal computational overhead. Extensive experiments across seven datasets demonstrate that DFD significantly improves performance, providing a scalable and efficient solution for open-ended text generation.
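The abstract's core mechanism—using disagreement between an early layer's and the final layer's next-token distributions to decide whether a step is knowledge-intensive (favor factual focus) or not (favor diversity)—can be sketched as follows. This is an illustrative sketch only, not the paper's actual algorithm: the function names, the use of Jensen-Shannon divergence, and the divergence-to-temperature mapping are assumptions for demonstration.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def dynamic_focus_temperature(early_probs, final_probs,
                              t_min=0.5, t_max=1.2, scale=5.0):
    """Map layer disagreement to a sampling temperature (hypothetical
    mapping): large early/final divergence -> knowledge-intensive step
    -> low temperature (sharper, more factual); small divergence ->
    high temperature (more diverse)."""
    d = js_divergence(early_probs, final_probs)
    w = 1 - math.exp(-scale * d)  # squash divergence into [0, 1)
    return t_max - w * (t_max - t_min)

def apply_temperature(probs, t):
    """Re-sharpen or flatten a distribution at temperature t."""
    logits = [math.log(p) / t for p in probs]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the adjustment only rescales the next-token distribution before sampling, it composes naturally with top-k or nucleus sampling, consistent with the abstract's claim of easy integration with existing decoding methods.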
Problem

Research questions and friction points this paper is trying to address.

How to balance factual accuracy and lexical diversity in open-ended text generation.
How to adapt the decoding focus using inter-layer distribution differences.
How to enhance text generation without requiring additional data, knowledge, or models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Focus Decoding (DFD), a plug-and-play stochastic method that jointly improves factuality and diversity.
Adaptive adjustment of the decoding focus based on inter-layer distribution discrepancies.
No additional data, external knowledge, or auxiliary models required, with minimal computational overhead.
Wen Luo
Peking University
Feifan Song
Peking University
Wei Li
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Guangyue Peng
Peking University
Shaohang Wei
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Houfeng Wang
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University