Enhancing Semantic Consistency of Large Language Models through Model Editing: An Interpretability-Oriented Approach

📅 2025-01-19
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 2 · Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit inconsistent outputs across semantically equivalent yet lexically distinct prompts, and conventional consistency optimization approaches rely on costly large-scale fine-tuning, lacking both computational efficiency and interpretability. To address this, we propose the first interpretable model editing method explicitly designed for semantic consistency: it identifies critical attention heads via attention attribution, then injects lightweight, directional biases aligned with semantic equivalence—requiring neither additional training data nor full-parameter fine-tuning. Our approach ensures mechanistic transparency and high efficiency, achieving an average 18.7% improvement in semantic consistency across diverse NLU and NLG benchmarks. Notably, it simultaneously enhances downstream task performance and generalizes to unseen tasks, with editing costs amounting to less than 0.5% of those incurred by standard fine-tuning.
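The head-selection step can be pictured with a small ablation experiment. Below is a minimal sketch, assuming a GPT-2 backbone from Hugging Face transformers; the ablation-based scoring is a simple stand-in for the paper's attention-attribution procedure, and the prompts, hook point, and top-k cutoff are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")
HEAD_DIM = model.config.n_embd // model.config.n_head

def next_token_logprobs(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        return torch.log_softmax(model(ids).logits[0, -1], dim=-1)

def paraphrase_divergence(p1, p2):
    # Symmetrized KL between next-token distributions of two paraphrases:
    # a proxy for (in)consistency on semantically equivalent prompts.
    a, b = next_token_logprobs(p1), next_token_logprobs(p2)
    kl = lambda p, q: F.kl_div(q, p, log_target=True, reduction="sum")
    return 0.5 * (kl(a, b) + kl(b, a)).item()

def ablate_head(layer, head):
    # c_proj consumes the concatenated per-head outputs, so zeroing one
    # slice of its input removes exactly one head's contribution.
    sl = slice(head * HEAD_DIM, (head + 1) * HEAD_DIM)
    def pre_hook(module, args):
        hidden = args[0].clone()
        hidden[..., sl] = 0.0
        return (hidden,)
    return model.transformer.h[layer].attn.c_proj.register_forward_pre_hook(pre_hook)

p1 = "What is the capital of France?"
p2 = "Name the capital city of France."
base = paraphrase_divergence(p1, p2)

scores = {}
for layer in range(model.config.n_layer):
    for head in range(model.config.n_head):
        handle = ablate_head(layer, head)
        scores[(layer, head)] = paraphrase_divergence(p1, p2) - base
        handle.remove()

# Heads whose removal shifts the divergence most are editing candidates.
critical_heads = sorted(scores, key=lambda k: abs(scores[k]), reverse=True)[:8]
print(critical_heads)
```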

📝 Abstract
A Large Language Model (LLM) tends to generate inconsistent, and sometimes contradictory, outputs when presented with a prompt that is semantically equivalent to, but expressed differently from, the original prompt. One key approach to achieving semantic consistency is to fine-tune the model on prompt-output pairs with semantically equivalent meanings. Despite its effectiveness, such data-driven fine-tuning incurs substantial computational costs in data preparation and model optimization, and it treats the LLM as a "black box", restricting our ability to gain deeper insight into its internal mechanisms. In this paper, we instead enhance the semantic consistency of LLMs through a more interpretable method, namely model editing. We first identify the model components (i.e., attention heads) that have a key impact on the semantic consistency of an LLM, and then inject biases into the outputs of these components along the semantic-consistency activation direction. Notably, these modifications are cost-effective and do not rely on large-scale manipulation of the original model parameters. Comprehensive experiments on constructed NLU and open-source NLG datasets show that our method significantly improves both the semantic consistency and the task performance of LLMs. It also exhibits promising generalization, performing well on tasks beyond the primary ones.
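Concretely, the bias-injection step might look like the following minimal sketch (an assumption-laden illustration, not the authors' implementation): the consistency direction is estimated here as the normalized mean difference between a head's activations on consistency-preserving versus consistency-violating examples, `alpha` is a hypothetical edit strength, and the hook point again assumes a GPT-2-style `c_proj`.

```python
import torch

def consistency_direction(acts_consistent, acts_inconsistent):
    # acts_*: (N, head_dim) activations collected for the chosen head.
    # Direction = normalized mean activation difference (an assumed estimator).
    d = acts_consistent.mean(dim=0) - acts_inconsistent.mean(dim=0)
    return d / d.norm()

def edit_head(model, layer, head, direction, alpha=4.0):
    head_dim = model.config.n_embd // model.config.n_head
    sl = slice(head * head_dim, (head + 1) * head_dim)
    def pre_hook(module, args):
        hidden = args[0].clone()
        # Shift this head's output along the consistency direction.
        hidden[..., sl] += alpha * direction.to(hidden.dtype)
        return (hidden,)
    # The hook stays registered, acting as a persistent, lightweight edit.
    return model.transformer.h[layer].attn.c_proj.register_forward_pre_hook(pre_hook)
```

Because `c_proj` is linear, the same offset could instead be folded into its bias term offline (adding `alpha * direction @ W[sl]`), so each edit amounts to a single vector addition per selected head with no retraining, consistent with the abstract's claim of avoiding mass manipulation of the original parameters.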
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Inconsistency Issues
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Resource-efficient Editing
Consistency Enhancement
Attention-Head Modification
👥 Authors
Jingyuan Yang
IT Innovation and Research Center, Huawei Technologies; College of Intelligence and Computing, Tianjin University
Dapeng Chen
Huawei
Yajing Sun
IT Innovation and Research Center, Huawei Technologies
Rong-Zhi Li
IT Innovation and Research Center, Huawei Technologies
Zhiyong Feng
College of Intelligence and Computing, Tianjin University
Wei Peng
IT Innovation and Research Center, Huawei Technologies