CRISP: Persistent Concept Unlearning via Sparse Autoencoders

📅 2025-08-19
🤖 AI Summary
Existing inference-time interventions for erasing specific harmful knowledge in large language models (LLMs) do not modify model parameters, so they can be bypassed or reversed by actors with parameter access. Method: We propose CRISP, a parameter-efficient method for persistent concept unlearning based on sparse autoencoders (SAEs). CRISP identifies the SAE features most salient for the target concept across multiple layers and suppresses their activations, making the change persistent in the model's parameters. Contribution/Results: CRISP removes harmful knowledge while preserving general and in-domain capabilities, and achieves a semantically coherent separation between target and benign features. On the WMDP benchmark, a safety-critical unlearning evaluation suite, CRISP outperforms prior approaches, demonstrating both effectiveness and practical viability for real-world deployment.

📝 Abstract
As large language models (LLMs) are increasingly deployed in real-world applications, the need to selectively remove unwanted knowledge while preserving model utility has become paramount. Recent work has explored sparse autoencoders (SAEs) to perform precise interventions on monosemantic features. However, most SAE-based methods operate at inference time, which does not create persistent changes in the model's parameters. Such interventions can be bypassed or reversed by malicious actors with parameter access. We introduce CRISP, a parameter-efficient method for persistent concept unlearning using SAEs. CRISP automatically identifies salient SAE features across multiple layers and suppresses their activations. We experiment with two LLMs and show that our method outperforms prior approaches on safety-critical unlearning tasks from the WMDP benchmark, successfully removing harmful knowledge while preserving general and in-domain capabilities. Feature-level analysis reveals that CRISP achieves semantically coherent separation between target and benign concepts, allowing precise suppression of the target features.
Problem

Research questions and friction points this paper is trying to address.

Persistent removal of unwanted knowledge in LLMs
Preventing malicious reversal of concept unlearning interventions
Achieving precise feature suppression while preserving model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persistent concept unlearning via sparse autoencoders
Automatically identifies salient SAE features across layers
Suppresses target feature activations while preserving utility
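The pipeline sketched in the bullets above — encode activations with an SAE, score features by how much more they fire on the target (forget) corpus than on a benign (retain) corpus, then suppress the top-scoring features — can be illustrated with a toy example. This is a minimal sketch, not the paper's implementation: the SAE weights here are random stand-ins, the salience score (mean activation difference) and the `suppress` helper are hypothetical simplifications, and the step that makes the edit persistent in the model's parameters (which CRISP performs) is omitted; only the activation-level feature identification and suppression is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained SAE over a d-dim residual stream with m features.
d, m = 32, 128
W_enc = rng.normal(0, 0.1, (d, m))
W_dec = rng.normal(0, 0.1, (m, d))

def sae_encode(x):
    # ReLU feature activations (monosemantic features in a real SAE)
    return np.maximum(x @ W_enc, 0.0)

def sae_decode(f):
    return f @ W_dec

# Hypothetical salience score: mean feature activation on the forget
# corpus minus mean feature activation on a retain corpus.
forget_acts = rng.normal(1.0, 1.0, (64, d))   # stand-in activations
retain_acts = rng.normal(0.0, 1.0, (64, d))

salience = sae_encode(forget_acts).mean(0) - sae_encode(retain_acts).mean(0)
target = np.argsort(salience)[-8:]            # top-8 target-concept features

def suppress(x, feature_ids):
    """Zero the selected SAE features and apply the resulting change
    to the residual stream, leaving all other features untouched."""
    f = sae_encode(x)
    f_edit = f.copy()
    f_edit[:, feature_ids] = 0.0
    return x + sae_decode(f_edit) - sae_decode(f)

x = rng.normal(1.0, 1.0, (4, d))
x_clean = suppress(x, target)
```

Because only the selected features are zeroed, the edit to each activation equals exactly minus the decoder contribution of those features, which is what keeps benign features (and hence general capability) intact in this sketch.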