🤖 AI Summary
Large language models (LLMs) face a fundamental trade-off between factual accuracy and lexical diversity in open-ended text generation. To address this, we propose Dynamic Focus Decoding (DFD), a plug-and-play decoding method that requires no additional training or external knowledge. DFD's core innovation is to adaptively modulate the decoding focus at each generation step, based on discrepancies between the probability distributions of different model layers, so as to jointly optimize factuality and diversity. It supports modular integration of domain-specific knowledge and is fully compatible with standard sampling strategies such as top-k and nucleus (top-p) sampling. Evaluated across seven diverse benchmark datasets, DFD consistently improves both factual accuracy and generation diversity, with negligible computational overhead and seamless zero-shot deployment.
📝 Abstract
Large Language Models (LLMs) are increasingly required to generate text that is both factually accurate and diverse across various open-ended applications. However, current stochastic decoding methods struggle to balance these objectives. We introduce Dynamic Focus Decoding (DFD), a novel plug-and-play stochastic approach that resolves this trade-off without requiring additional data, knowledge, or models. DFD adaptively adjusts the decoding focus based on distributional differences across layers, leveraging the modular and hierarchical nature of factual knowledge within LLMs. This dynamic adjustment improves factuality in knowledge-intensive decoding steps and promotes diversity in less knowledge-reliant steps. DFD can be easily integrated with existing decoding methods, enhancing both factuality and diversity with minimal computational overhead. Extensive experiments across seven datasets demonstrate that DFD significantly improves performance, providing a scalable and efficient solution for open-ended text generation.
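The abstract does not spell out the exact update rule, but the core idea (measuring the discrepancy between an early layer's and the final layer's next-token distribution, then sharpening the sampling distribution when that discrepancy is large) can be sketched roughly as follows. This is a minimal, hypothetical illustration using NumPy over precomputed per-layer logits; the function names, the choice of Jensen-Shannon divergence as the discrepancy measure, and temperature modulation as the "focus" mechanism are all assumptions, not the paper's actual algorithm.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two probability vectors (assumed metric)."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_focus_sample(layer_logits, base_temp=1.0, min_temp=0.3, rng=None):
    """Hypothetical sketch of a DFD-style step.

    layer_logits: array of shape (num_layers, vocab_size) with the
    next-token logits produced by early exit at each layer.
    When the mid-layer and final-layer distributions disagree (a proxy
    for a knowledge-intensive step), lower the temperature to focus the
    sampling; when they agree, keep the base temperature for diversity.
    """
    rng = rng or np.random.default_rng()
    early = softmax(layer_logits[len(layer_logits) // 2])
    final = softmax(layer_logits[-1])
    d = js_divergence(early, final)          # bounded by ln(2)
    focus = min(d / np.log(2), 1.0)          # 0 = agree, 1 = strong disagreement
    temp = base_temp - focus * (base_temp - min_temp)
    probs = softmax(layer_logits[-1] / temp)
    probs = probs / probs.sum()              # guard against float drift
    return rng.choice(len(probs), p=probs), temp
```

In this sketch, a step where all layers already agree keeps the base temperature (diverse sampling), while a step where the final layer diverges sharply from the mid layer is sampled at a lower temperature (factual focus). The real method could equally modulate top-p mass or contrast layer logits directly; the abstract leaves this open.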