🤖 AI Summary
Large language models (LLMs) in knowledge graph-based dialogue generation (KG-DG) tend to over-rely on their parametric internal knowledge while neglecting retrieved external knowledge. To address this, the paper introduces an entity anonymization mechanism that compels the model to attend to the structured information in the input knowledge graph, improving knowledge consistency in responses. To quantify the degree of knowledge reliance, the authors propose the LLM-Knowledge Adherence Test (LLM-KAT), a novel, interpretable evaluation metric. Experiments on OpenDialKG demonstrate that the approach significantly enhances LLMs' utilization of external knowledge, yielding substantial improvements in both knowledge fidelity and response relevance. The core contributions are twofold: (1) the first application of entity anonymization to KG-DG to decouple interference from internal parametric knowledge, and (2) a principled, interpretable framework for assessing knowledge adherence in generative dialogue systems.
📝 Abstract
Knowledge graph-based dialogue generation (KG-DG) is a challenging task requiring models to effectively incorporate external knowledge into conversational responses. While large language models (LLMs) have achieved impressive results across various NLP tasks, their ability to utilize external knowledge in KG-DG remains under-explored. We observe that LLMs often rely on internal knowledge and become detached from the provided knowledge graph, even when that graph is flawlessly retrieved. First, we introduce LLM-KAT, an evaluation procedure for measuring knowledge attachment in generated responses. Second, we propose a simple yet effective entity anonymization technique to encourage LLMs to better leverage external knowledge. Experiments on the OpenDialKG dataset demonstrate that our approach improves LLMs' attachment to external knowledge.
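To make the entity anonymization idea concrete, here is a minimal sketch of how one might anonymize KG triples before prompting an LLM and restore entity names afterward. The paper's exact scheme is not reproduced here; the `[ENTITY_i]` placeholder format and the `(head, relation, tail)` triple representation are assumptions for illustration.

```python
def anonymize_triples(triples):
    """Replace entity names in (head, relation, tail) triples with
    anonymous placeholders so the LLM cannot match them against its
    parametric knowledge. Returns the anonymized triples and a
    placeholder -> original-name map for later de-anonymization."""
    ent2tok = {}
    anon = []
    for head, rel, tail in triples:
        for ent in (head, tail):
            if ent not in ent2tok:
                # Assign a fresh placeholder the first time an entity appears.
                ent2tok[ent] = f"[ENTITY_{len(ent2tok)}]"
        anon.append((ent2tok[head], rel, ent2tok[tail]))
    return anon, {tok: ent for ent, tok in ent2tok.items()}


def deanonymize(text, mapping):
    """Restore original entity names in a generated response."""
    for token, ent in mapping.items():
        text = text.replace(token, ent)
    return text
```

A response generated over the anonymized graph can only ground its entities in the provided triples, after which `deanonymize` maps the placeholders back to the real names for the final reply.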