Effective Skill Unlearning through Intervention and Abstention

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses controllable skill unlearning—specifically for mathematical reasoning and Python programming—in large language models (LLMs), without fine-tuning. The authors propose a lightweight, training-free, dual-path skill unlearning method. They first observe empirically that the pre-activation distributions of feed-forward-layer (FFL) neurons correlate strongly with specific skills, and that queries triggering the same skill are separable in the FFL key space. Building on these observations, they design a complementary framework comprising Neuron Adjust (neuron-level intervention) and Key Space Detection (hypercube-based detection in the key space, coupled with abstention on detected queries). Across multiple benchmarks, the method reduces target-skill performance by over 80% while keeping degradation on non-target skills and general knowledge (MMLU) under 10%, substantially outperforming existing training-free unlearning approaches.
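The neuron-level intervention can be pictured as follows. The summary does not spell out the exact adjustment rule, so this is a minimal sketch under an assumed setup: each FFL neuron's pre-activation is fit with one Gaussian on forget-skill data and one on retain data, and at inference a pre-activation that is more likely under the forget-skill fit is snapped to the retain mean. The function name `neuron_adjust` and the Gaussian-likelihood test are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def neuron_adjust(pre_acts, forget_mu, forget_sigma, retain_mu, retain_sigma):
    """Hypothetical neuron-level intervention (sketch, not the paper's exact rule).

    For each FFL neuron, compare the likelihood of its pre-activation under
    Gaussian fits to the forget-skill vs. retain distributions; values that
    look forget-skill-like are replaced by the retain-distribution mean.
    All arguments are per-neuron arrays (or scalars, via broadcasting).
    """
    def log_pdf(x, mu, sigma):
        # Log-density of N(mu, sigma^2), dropping the constant term.
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

    forget_ll = log_pdf(pre_acts, forget_mu, forget_sigma)
    retain_ll = log_pdf(pre_acts, retain_mu, retain_sigma)
    # Where the forget-skill fit explains the value better, overwrite it.
    return np.where(forget_ll > retain_ll, retain_mu, pre_acts)
```

For example, with a forget-skill fit centered at 5 and a retain fit centered at 0, a pre-activation of 5.0 is pulled to 0.0 while a pre-activation of 0.0 passes through unchanged.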

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable skills across various domains. Understanding the mechanisms behind their abilities and implementing controls over them is becoming increasingly important for developing better models. In this paper, we focus on skill unlearning in LLMs, specifically unlearning a particular skill while retaining their overall capabilities. We introduce two lightweight, training-free machine skill unlearning techniques for LLMs. First, we observe that the pre-activation distribution of neurons in each Feed-Forward Layer (FFL) differs when the model demonstrates different skills. Additionally, we find that queries triggering the same skill cluster within the FFL key space and can be separated from other queries using a hypercube. Based on these observations, we propose two lightweight, training-free skill unlearning methods via *intervention* and *abstention* respectively: `Neuron Adjust` and `Key Space Detection`. We evaluate our methods on unlearning math-solving, Python-coding, and comprehension skills across seven different languages. The results demonstrate their strong unlearning capabilities for the designated skills. Specifically, `Key Space Detection` achieves over 80% relative performance drop on the forgetting skill and less than 10% relative performance drop on other skills and the model's general knowledge (MMLU) for most unlearning tasks. Our code is available at https://github.com/Trustworthy-ML-Lab/effective_skill_unlearning
Problem

Research questions and friction points this paper is trying to address.

Unlearning specific skills in LLMs while retaining general capabilities
Developing training-free methods for skill unlearning in LLMs
Evaluating unlearning effectiveness on math, coding, and comprehension skills
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight training-free skill unlearning methods
Neuron Adjust and Key Space Detection techniques
Hypercube-based query separation in FFL key space
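The hypercube-based separation in the FFL key space can be sketched in a few lines. The summary only says that same-skill queries cluster and can be enclosed by a hypercube, so the following assumes the simplest reading: fit per-dimension min/max bounds (optionally widened by a margin) on keys from forget-skill queries, then at inference abstain whenever an incoming key falls inside the box. The helper names `fit_hypercube` and `in_hypercube` are assumptions for illustration, not the paper's API.

```python
import numpy as np

def fit_hypercube(skill_keys, margin=0.0):
    """Fit an axis-aligned bounding box (hypercube region) around the FFL
    key vectors of forget-skill queries. `skill_keys` is (n_queries, dim)."""
    lo = skill_keys.min(axis=0) - margin
    hi = skill_keys.max(axis=0) + margin
    return lo, hi

def in_hypercube(key, lo, hi):
    """True if a query's key vector lies inside the fitted box, in which
    case the abstention path would suppress the model's answer."""
    return bool(np.all((key >= lo) & (key <= hi)))
```

Usage: build the box once from keys collected on forget-skill prompts; at inference, a key inside the box triggers abstention, while keys outside it are processed normally, which is how non-target skills stay largely unaffected.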