Rotation Control Unlearning: Quantifying and Controlling Continuous Unlearning for LLM with The Cognitive Rotation Space

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the cumulative utility degradation of large language models (LLMs) under sequential machine unlearning requests—caused by reliance on retained data—we propose a novel data-free cognitive rotation space unlearning paradigm. Our method models parameter updates as orthogonal rotations in a learned cognitive space: (i) an antisymmetric loss constrains rotation direction; (ii) rotation significance weights govern fine-grained forgetting granularity; and (iii) orthogonal rotation-axis regularization minimizes interference across multiple unlearning rounds. This enables angular, quantifiable, and controllable unlearning directly in parameter space. Experiments across multiple benchmarks demonstrate state-of-the-art data-free unlearning performance, significantly mitigating catastrophic utility loss while preserving both security guarantees and practical model utility.
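The core construction above — a parameter update expressed as an orthogonal rotation generated in a learned space — can be sketched with the matrix exponential of a skew-symmetric matrix: if A = B − Bᵀ, then exp(θ·A) is orthogonal for any angle θ, which is what makes the unlearning degree angular and quantifiable. This is a minimal illustration, not the paper's implementation; the function name and the toy 4×4 dimensions are ours.

```python
import numpy as np
from scipy.linalg import expm

def rotation_from_axis(B: np.ndarray, theta: float) -> np.ndarray:
    """Build an orthogonal rotation from an unconstrained matrix B.

    A = B - B^T is skew-symmetric, so R = expm(theta * A) is orthogonal;
    theta plays the role of a quantifiable "unlearning angle".
    """
    A = B - B.T                 # skew-symmetric generator (rotation axis)
    return expm(theta * A)      # orthogonal matrix by construction

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
R = rotation_from_axis(B, theta=0.3)

# Orthogonality check: R R^T = I, so applying R preserves norms.
assert np.allclose(R @ R.T, np.eye(4), atol=1e-8)
```

Because the update is a norm-preserving rotation rather than an unconstrained shift, increasing θ moves the parameters along a controlled path, which is what lets the method dial the forgetting degree without a retained dataset.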

📝 Abstract
As Large Language Models (LLMs) become increasingly prevalent, their security vulnerabilities have drawn growing attention. Machine unlearning seeks to mitigate these risks by removing the influence of undesirable data. However, existing methods not only rely on a retained dataset to preserve model utility, but also suffer from cumulative catastrophic utility loss under continuous unlearning requests. To resolve this dilemma, we propose a novel method, Rotation Control Unlearning (RCU), which leverages a rotational salience weight to quantify and control the degree of unlearning throughout the continuous unlearning process. A skew-symmetric loss is designed to construct the cognitive rotation space, in which changes of rotation angle simulate the continuous unlearning process. Furthermore, we design an orthogonal rotation-axes regularization that enforces mutually perpendicular rotation directions across continuous unlearning requests, effectively minimizing interference and addressing cumulative catastrophic utility loss. Experiments on multiple datasets confirm that our method achieves SOTA performance without a retained dataset.
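The skew-symmetric loss mentioned in the abstract can be illustrated as a penalty on the symmetric part of a candidate generator matrix: a matrix produces a pure rotation (through the matrix exponential) only when M = −Mᵀ, so driving ‖M + Mᵀ‖ to zero confines updates to the rotation space. This is a hedged sketch under that reading; the function name and exact norm are our assumptions, not the paper's definition.

```python
import numpy as np

def skew_symmetric_loss(M: np.ndarray) -> float:
    """Penalize the symmetric part of M so it acts as a rotation generator.

    M generates a pure rotation via the matrix exponential only if
    M = -M^T; this squared Frobenius penalty drives M toward that set.
    """
    return float(np.linalg.norm(M + M.T, ord="fro") ** 2)

A = np.array([[0.0, 2.0], [-2.0, 0.0]])   # already skew-symmetric
assert skew_symmetric_loss(A) == 0.0      # no penalty on valid generators

# A symmetric matrix (no rotational part) is penalized heavily.
print(skew_symmetric_loss(np.eye(2)))
```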
Problem

Research questions and friction points this paper is trying to address.

Addressing cumulative catastrophic utility loss in continuous LLM unlearning
Quantifying unlearning degree without relying on retained datasets
Controlling interference through orthogonal rotation axes regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages rotational salience weight to control unlearning degree
Uses skew symmetric loss to construct cognitive rotation space
Employs orthogonal rotation axes to minimize interference
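The third innovation — keeping rotation axes of successive unlearning requests mutually perpendicular — can be sketched as a pairwise penalty on Frobenius inner products between the skew-symmetric generators: when every pair is orthogonal as vectors, each new request rotates in a direction that does not disturb earlier ones. The function below is an illustrative stand-in, not the paper's regularizer.

```python
import numpy as np

def axis_orthogonality_penalty(axes: list[np.ndarray]) -> float:
    """Encourage mutually perpendicular rotation generators.

    Treats each skew-symmetric generator as a flat vector and penalizes
    the squared Frobenius inner product of every pair, so successive
    unlearning requests rotate along non-interfering directions.
    """
    penalty = 0.0
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            ip = np.sum(axes[i] * axes[j])   # Frobenius inner product
            penalty += ip ** 2
    return float(penalty)

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A2 = np.zeros((2, 2))
assert axis_orthogonality_penalty([A1, A2]) == 0.0   # orthogonal pair
assert axis_orthogonality_penalty([A1, A1]) == 4.0   # aligned pair penalized
```

Minimizing this term alongside the unlearning objective is what, per the summary, limits cross-request interference and the resulting cumulative utility loss.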
Xiang Zhang
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi, China
Kun Wei
School of Computer Science, Northwestern Polytechnical University
deep learning · computer science · speech
Xu Yang
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi, China
Chenghao Xu
EPFL
Robotics · Dynamic SLAM · Active Vision
Su Yan
School of Electronic Engineering, Xidian University, Xi’an, Shaanxi, China
Cheng Deng
University of Edinburgh
On-device LLM · NLP · GeoAI