On Effects of Steering Latent Representation for Large Language Model Unlearning

📅 2024-08-12
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the “selective forgetting” challenge in large language models (LLMs), proposing Adaptive Representation Misdirection (Adaptive RMU): a method that steers intermediate-layer latent representations to attenuate the model's memory of targeted information while strengthening robustness against adversarial jailbreak attacks. The study provides a theoretical analysis linking representation steering to forgetting efficacy, explaining why the original RMU degrades when applied to middle and later layers, why early-layer intervention forgets more effectively, and how to choose the steering coefficient layer by layer. The method incurs no additional training or inference overhead. Empirical evaluations demonstrate substantial improvements in forgetting performance across transformer layers and confirm strong cross-model generalizability.

📝 Abstract
Representation Misdirection for Unlearning (RMU), which steers model representation in the intermediate layer to a target random representation, is an effective method for large language model (LLM) unlearning. Despite its high performance, the underlying cause and explanation remain underexplored. In this paper, we theoretically demonstrate that steering forget representations in the intermediate layer reduces token confidence, causing LLMs to generate wrong or nonsense responses. We investigate how the coefficient influences the alignment of forget-sample representations with the random direction and hint at the optimal coefficient values for effective unlearning across different network layers. We show that RMU unlearned models are robust against adversarial jailbreak attacks. Furthermore, our empirical analysis shows that RMU is less effective when applied to the middle and later layers in LLMs. To resolve this drawback, we propose Adaptive RMU--a simple yet effective alternative method that makes unlearning effective with most layers. Extensive experiments demonstrate that Adaptive RMU significantly improves the unlearning performance compared to prior art while incurring no additional computational cost.
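The abstract's description of RMU (steer the intermediate-layer representation of forget samples toward a scaled random direction while keeping retain-sample representations close to the frozen model) can be sketched as a loss function. This is a minimal, dependency-free sketch; the function and argument names are assumptions for illustration, not the authors' implementation.

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def rmu_loss(h_forget, h_retain, h_retain_frozen, u, c=100.0, alpha=1.0):
    """Sketch of the RMU objective on one forget/retain sample pair.

    h_forget:        updated model's layer-l activation on a forget sample
    h_retain:        updated model's layer-l activation on a retain sample
    h_retain_frozen: frozen model's layer-l activation on the same retain sample
    u:               fixed random vector (the target direction)
    c:               steering coefficient scaling the random target
    alpha:           weight on the retain term
    """
    target = [c * ui for ui in u]                 # scaled random target c * u
    forget_loss = mse(h_forget, target)           # push forget activations toward c * u
    retain_loss = mse(h_retain, h_retain_frozen)  # keep retain activations unchanged
    return forget_loss + alpha * retain_loss
```

The coefficient `c` controls how far forget representations are pushed along the random direction; the paper's analysis concerns how this choice interacts with the layer being steered.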
Problem

Research questions and friction points this paper is trying to address.

Explains RMU's mechanism for LLM unlearning
Identifies optimal coefficients for effective unlearning
Proposes Adaptive RMU for enhanced unlearning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Steers model representation
Reduces token confidence
Adaptive RMU enhances unlearning
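The "Adaptive" part replaces the fixed steering coefficient with a per-sample value. As a hedged sketch, assuming the adaptive coefficient scales with the norm of the forget-sample activation (with a hyperparameter here called `beta`; both the formula and the names are assumptions for illustration):

```python
import math

def adaptive_coefficient(h_forget, beta=5.0):
    """Sketch of an adaptive steering coefficient: scale the random
    target by the norm of the forget-sample activation, so the push is
    proportional to the representation's magnitude at that layer.

    h_forget: layer-l activation on a forget sample
    beta:     scaling hyperparameter
    """
    norm = math.sqrt(sum(x * x for x in h_forget))  # ||h(x)||_2
    return beta * norm  # target becomes (beta * ||h(x)||) * u
```

Because activation norms differ across layers, tying the coefficient to the norm lets one recipe work at most layers instead of hand-tuning a fixed `c` per layer, which matches the abstract's claim that Adaptive RMU is effective "with most layers".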
🔎 Similar Papers
2024-06-22 · International Conference on Computational Linguistics · Citations: 4
👥 Authors
Dang Huu-Tien (JAIST)
Trung-Tin Pham (JAIST)
Hoang Thanh-Tung (Vietnam National University, Hanoi)
Naoya Inoue (JAIST, RIKEN)