🤖 AI Summary
Large language models (LLMs) suffer from hallucination in question answering due to conflicts between parametric knowledge and input context. Method: This paper proposes a novel paradigm that explicitly models entity–context alignment relationships. It is the first to systematically reveal the critical impact of entity–context alignment quality during training on inference faithfulness, and introduces an intervenable knowledge conflict mitigation mechanism—incorporating entity-aware attention constraints and context-alignment regularization loss into a seq-to-seq framework. Contribution/Results: The method significantly reduces hallucination rates across multiple document-level QA benchmarks (average reduction of 32.7%), while simultaneously improving answer faithfulness and context utilization. It establishes an interpretable and controllable technical pathway for trustworthy generative QA, enabling explicit intervention in knowledge grounding and alignment.
📝 Abstract
In the context of knowledge-driven seq-to-seq generation tasks, such as document-based question answering and document summarization systems, two fundamental knowledge sources play crucial roles: the inherent knowledge embedded within model parameters and the external knowledge obtained through context. Recent studies revealed a significant challenge: when there exists a misalignment between the model's inherent knowledge and the ground truth answers in training data, the system may exhibit problematic behaviors during inference, such as ignoring input context, or generating unfaithful content. Our investigation proposes a strategy to minimize hallucination by building explicit connection between source inputs and generated outputs. We specifically target a common hallucination pattern in question answering, examining how the correspondence between entities and their contexts during model training influences the system's performance at inference time.