🤖 AI Summary
Large language models (LLMs) retain sensitive knowledge that creates risks of privacy leakage, regulatory non-compliance, and misuse in security-sensitive applications; existing machine unlearning methods struggle to balance safety and utility when forget and retain data are highly entangled. Method: We propose a feature-selective representation misdirection framework grounded in activation importance mapping, which suppresses harmful representations while preserving benign capabilities via activation editing and direction-constrained perturbation. Contribution/Results: Combining representation importance modeling with evaluation on the WMDP benchmark, our method achieves state-of-the-art unlearning performance, even with 20–30% data overlap in highly entangled settings, while incurring significantly lower utility loss than baselines. To our knowledge, this is the first approach to enable controllable, efficient, and robust knowledge forgetting under such stringent data-entanglement constraints.
📝 Abstract
As large language models (LLMs) are increasingly adopted in safety-critical and regulated sectors, the retention of sensitive or prohibited knowledge introduces escalating risks, ranging from privacy leakage and regulatory non-compliance to potential misuse. Recent studies suggest that machine unlearning can help deployed models comply with evolving legal, safety, and governance requirements. However, current unlearning techniques assume a clean separation between the forget and retain datasets, an assumption that rarely holds in operational settings with highly entangled distributions. In such scenarios, perturbation-based methods often degrade general model utility or fail to ensure safety. To address this, we propose Selective Representation Misdirection for Unlearning (SRMU), a principled activation-editing framework that enforces feature-aware, directionally controlled perturbations. Unlike indiscriminate weight perturbations, SRMU combines a structured misdirection vector with an activation importance map, allowing it to selectively suppress harmful representations while preserving utility on benign ones. Experiments on the widely used WMDP benchmark cover both low- and high-entanglement configurations. Empirical results show that SRMU delivers state-of-the-art unlearning performance with minimal utility loss, and remains effective under 20–30% overlap where existing baselines collapse. SRMU provides a robust foundation for safety-driven model governance, privacy compliance, and controlled knowledge removal in emerging LLM-based applications. We release the replication package at https://figshare.com/s/d5931192a8824de26aff.
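The core mechanism described above, an activation importance map gating a misdirection perturbation so that only forget-relevant features are pushed toward a target direction, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual formulation: the function names, the magnitude-ratio importance score, and the mean-threshold gating are all hypothetical choices made for clarity.

```python
import numpy as np


def importance_map(forget_acts, retain_acts, eps=1e-6):
    """Hypothetical per-feature importance score.

    Features with high mean magnitude on the forget set but low mean
    magnitude on the retain set receive large scores, marking them as
    candidates for selective suppression.
    """
    forget_mag = np.abs(forget_acts).mean(axis=0)  # (n_features,)
    retain_mag = np.abs(retain_acts).mean(axis=0)  # (n_features,)
    return forget_mag / (retain_mag + eps)


def selective_misdirection_loss(acts, misdirect_dir, imp, alpha=1.0):
    """Illustrative feature-selective misdirection objective.

    Forget-set activations are pulled toward a fixed target direction,
    but only along features the importance map flags (here via a simple
    mean threshold, an assumed gating rule). Unflagged features incur no
    loss, which is the sense in which benign capability is preserved.
    """
    mask = (imp > imp.mean()).astype(float)      # binary feature gate
    target = alpha * misdirect_dir * mask        # masked target vector
    return float((((acts - target) ** 2) * mask).mean())
```

A quick usage example: with random stand-in activations, the importance map scores each of the hidden features, and the loss penalizes only the gated coordinates, so gradient updates would leave the remaining (benign) features untouched.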