🤖 AI Summary
To address context faithfulness hallucinations in large language models, which arise from insufficient context utilization and high output uncertainty, this paper proposes a lightweight, single-pass decoding framework with dynamic attention guidance. The method jointly models attention distributions and uncertainty signals; probing analysis empirically confirms that attention strength correlates with context utilization, which enables dynamic control over decoding. The framework integrates attention analysis, uncertainty estimation, context-aware decoding, and lightweight post-processing. Evaluated on multi-source question-answering benchmarks, the approach reduces hallucination rates by 27.3% on average, significantly improving output faithfulness and robustness while maintaining low computational overhead.
📝 Abstract
Large language models (LLMs) often suffer from context faithfulness hallucinations, where outputs deviate from retrieved information due to insufficient context utilization and high output uncertainty. Our uncertainty evaluation experiments reveal a strong correlation between high uncertainty and hallucinations. We hypothesize that attention mechanisms encode signals indicative of context utilization, and we validate this hypothesis through probing analysis. Based on these insights, we propose Dynamic Attention-Guided Context Decoding (DAGCD), a lightweight framework that integrates attention distributions and uncertainty signals in a single-pass decoding process. Experiments across QA datasets demonstrate DAGCD's effectiveness, achieving significant improvements in faithfulness and robustness while maintaining computational efficiency.
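The core idea, combining an uncertainty signal with attention over the context to steer decoding in a single pass, can be sketched as follows. This is a minimal illustration under assumptions of our own, not the paper's actual DAGCD algorithm: we use next-token entropy as the uncertainty signal and apply an attention-weighted logit boost to context tokens only when entropy exceeds a threshold. All function names and parameters (`guided_logits`, `alpha`, `tau`) are hypothetical.

```python
# Hypothetical sketch of attention-guided context decoding (not the paper's
# exact DAGCD method): when next-token uncertainty (entropy) is high, boost
# the logits of tokens appearing in the context, scaled by the attention
# mass the model placed on those context positions.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def guided_logits(logits, context_token_ids, attention, alpha=2.0, tau=1.0):
    """Boost context-token logits when the base distribution is uncertain.

    logits: base next-token logits over the vocabulary.
    context_token_ids: vocabulary id at each context position.
    attention: attention weight on each context position (same length).
    alpha: boost strength (assumed); tau: entropy threshold (assumed).
    """
    h = entropy(softmax(logits))
    if h <= tau:
        # Model is already confident: leave the distribution untouched.
        return list(logits)
    out = list(logits)
    for tok_id, attn in zip(context_token_ids, attention):
        # Attention-scaled boost, growing with excess uncertainty.
        out[tok_id] += alpha * attn * (h - tau)
    return out
```

For example, with uniform logits `[1.0, 1.0, 1.0, 1.0]` (entropy ln 4 ≈ 1.39 > tau), a context token with id 2 and attention 0.9 receives a boost, while a sharply peaked distribution passes through unchanged. The single-pass property comes from reusing attention weights and logits the model already produced, so no second forward pass is needed.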