Improving LLM's Attachment to External Knowledge In Dialogue Generation Tasks Through Entity Anonymization

📅 2025-11-14
🤖 AI Summary
To address the over-reliance of large language models (LLMs) on parametric internal knowledge—while neglecting retrieved external knowledge—in knowledge graph–based dialogue generation (KG-DG), this paper introduces an entity anonymization mechanism that encourages the model to attend to the structured information within the input knowledge graph, thereby improving knowledge consistency in responses. To quantify the degree of knowledge reliance, the authors propose the LLM-Knowledge Adherence Test (LLM-KAT), a novel, interpretable evaluation procedure. Experiments on OpenDialKG demonstrate that the approach enhances LLMs' utilization of external knowledge, improving both knowledge fidelity and response relevance. The core contributions are twofold: (1) the first application of entity anonymization to KG-DG to decouple interference from internal parametric knowledge, and (2) a principled, interpretable framework for assessing knowledge adherence in generative dialogue systems.
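The anonymization mechanism can be sketched as follows: entity names in the retrieved KG triples are replaced with opaque placeholders before prompting the model, so the LLM cannot fall back on parametric facts about those entities, and the placeholders in the generated response are then mapped back to the original names. The placeholder format (`[E1]`) and both helper functions below are illustrative assumptions, not the paper's exact implementation.

```python
def anonymize_triples(triples):
    """Map each distinct entity to a placeholder; relation labels are kept intact."""
    mapping = {}      # original entity -> placeholder
    anonymized = []
    for head, rel, tail in triples:
        for ent in (head, tail):
            if ent not in mapping:
                mapping[ent] = f"[E{len(mapping) + 1}]"
        anonymized.append((mapping[head], rel, mapping[tail]))
    return anonymized, mapping


def deanonymize(text, mapping):
    """Restore original entity names in the model's generated response."""
    for ent, placeholder in mapping.items():
        text = text.replace(placeholder, ent)
    return text


# Toy example with two KG triples:
triples = [("Inception", "directed_by", "Christopher Nolan"),
           ("Christopher Nolan", "born_in", "London")]
anon, mapping = anonymize_triples(triples)
# anon == [("[E1]", "directed_by", "[E2]"), ("[E2]", "born_in", "[E3]")]

# Suppose the LLM, prompted with the anonymized triples, generates:
response = "[E1] was directed by [E2], who was born in [E3]."
print(deanonymize(response, mapping))
# → Inception was directed by Christopher Nolan, who was born in London.
```

Because the placeholders carry no semantic content, any correct factual statement in the response must come from the provided triples rather than from the model's internal knowledge, which is what makes adherence measurable.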

📝 Abstract
Knowledge graph-based dialogue generation (KG-DG) is a challenging task requiring models to effectively incorporate external knowledge into conversational responses. While large language models (LLMs) have achieved impressive results across various NLP tasks, their ability to utilize external knowledge in KG-DG remains under-explored. We observe that LLMs often rely on internal knowledge, leading to detachment from the provided knowledge graph, even when they are given a flawlessly retrieved one. First, we introduce LLM-KAT, an evaluation procedure for measuring knowledge attachment in generated responses. Second, we propose a simple yet effective entity anonymization technique to encourage LLMs to better leverage external knowledge. Experiments on the OpenDialKG dataset demonstrate that our approach improves LLMs' attachment to external knowledge.
Problem

Research questions and friction points this paper is trying to address.

Improving LLM utilization of external knowledge in dialogue generation
Addressing detachment from knowledge graphs despite perfect retrieval
Enhancing knowledge attachment through entity anonymization techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entity anonymization technique that enhances external knowledge usage
Evaluation procedure (LLM-KAT) that measures knowledge attachment in responses
Method improves LLM attachment to external knowledge graphs
Hadi Sheikhi
Dept. of Computing Science, University of Alberta, Alberta Machine Intelligence Institute
Chenyang Huang
Ph.D. Student, University of Alberta
Osmar R. Zaïane
Dept. of Computing Science, University of Alberta, Alberta Machine Intelligence Institute