Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often hold parametric knowledge that conflicts with the input context, leading them to rely on outdated or incorrect information in open-domain question answering. To address this, the authors propose SpARE, a training-free, inference-time representation editing method that localizes knowledge-conflict signals in mid-layer activations and intervenes on them. SpARE uses pretrained sparse autoencoders (SAEs) to identify sparse features associated with knowledge selection, then edits the corresponding activations to steer the model toward either contextual or parametric knowledge in a fine-grained, interpretable way. This approach overcomes key limitations of existing representation engineering and contrastive decoding methods: on open-domain QA benchmarks it improves accuracy by 10% over the former and 15% over the latter, significantly mitigating context-memory knowledge conflicts.

📝 Abstract
Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context -- this phenomenon, known as *context-memory knowledge conflicts*, can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. Analysing the internal activations of LLMs, we find that they can internally register the signals of knowledge conflict at mid-layers. Such signals allow us to detect whether a knowledge conflict occurs and use *inference-time* intervention strategies to resolve it. In this work, we propose SpARE, a *training-free* representation engineering method that uses pre-trained sparse auto-encoders (SAEs) to control the knowledge selection behaviour of LLMs. SpARE identifies the functional features that control the knowledge selection behaviours and applies them to edit the internal activations of LLMs at inference time. Our experimental results show that SpARE can effectively control the usage of either knowledge source to resolve knowledge conflict in open-domain question-answering tasks, surpassing existing representation engineering methods (+10%) as well as contrastive decoding methods (+15%).
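The core mechanism described above (encode a mid-layer activation with a pretrained SAE, intervene on the sparse features tied to knowledge selection, then decode back) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights here are random stand-ins for a pretrained ReLU SAE, and the sizes, function names, and the choice of steered feature indices are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64  # toy sizes; real SAEs are far larger

# Hypothetical pretrained SAE weights (random stand-ins here).
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_dec = np.zeros(d_model)

def sae_encode(h):
    """Sparse feature activations for a hidden state h (ReLU SAE)."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def sae_decode(f):
    """Reconstruct a hidden state from SAE feature activations."""
    return f @ W_dec + b_dec

def steer(h, feature_ids, target=5.0):
    """Edit h by clamping chosen SAE features to a target activation.

    The SAE reconstruction error is carried over unchanged, so only
    the selected features are modified in the returned activation.
    """
    f = sae_encode(h)
    error = h - sae_decode(f)   # part of h the SAE does not explain
    f = f.copy()
    f[feature_ids] = target     # intervene on 'knowledge selection' features
    return sae_decode(f) + error

# Steer a single mid-layer activation toward (hypothetical) context-use features.
h = rng.normal(size=d_model)
h_steered = steer(h, feature_ids=[3, 7])
```

At inference time such an edit would be applied via a forward hook on the chosen mid-layer, replacing `h` with `steer(h, ...)` for the token positions of interest; which features to clamp, and to what value, is exactly what the paper's feature-identification step determines.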
Problem

Research questions and friction points this paper is trying to address.

Resolve context-memory knowledge conflicts in LLMs
Control knowledge selection via SAE-based engineering
Enhance open-domain question-answering performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

SAE-based representation engineering
Training-free method
Inference-time intervention