FG-OrIU: Towards Better Forgetting via Feature-Gradient Orthogonality for Incremental Unlearning

📅 2026-01-20
📈 Citations: 3
Influential: 1
📄 PDF
🤖 AI Summary
Existing incremental unlearning methods struggle to achieve deep forgetting, often leaving recoverable information that compromises the balance between forgetting and retaining knowledge. This work proposes a deep unlearning framework that, for the first time, jointly imposes orthogonality constraints at both the feature and gradient levels. Using singular value decomposition, the method constructs dynamic subspaces: it applies orthogonal projection to the data targeted for forgetting to make removal irreversible, while adaptively shrinking the retention subspace to stabilize preserved knowledge. Extensive experiments demonstrate that the approach effectively prevents information leakage and achieves strong unlearning performance, while largely preserving model accuracy on retained tasks.

📝 Abstract
Incremental unlearning (IU) is critical for pre-trained models to comply with sequential data deletion requests, yet existing methods primarily suppress parameters or confuse knowledge without explicit constraints at both the feature and gradient levels, resulting in *superficial forgetting*, where residual information remains recoverable. This incomplete forgetting risks security breaches and disrupts retention balance, especially in IU scenarios. We propose FG-OrIU (Feature-Gradient Orthogonality for Incremental Unlearning), the first framework to unify orthogonal constraints at both the feature and gradient levels to achieve deep forgetting, where the forgetting effect is irreversible. FG-OrIU decomposes feature spaces via Singular Value Decomposition (SVD), separating forgetting-class and remaining-class features into distinct subspaces. It then enforces dual constraints: feature-level orthogonal projection on both the forgetting and remaining classes, and gradient-level orthogonal projection that prevents forgotten knowledge from being reintroduced and remaining classes from being disrupted during updates. Additionally, dynamic subspace adaptation merges newly formed forgetting subspaces and contracts the remaining subspaces, ensuring a stable balance between removal and retention across sequential unlearning tasks. Extensive experiments demonstrate the effectiveness of our method.
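The abstract's core mechanism can be illustrated with a minimal NumPy sketch: estimate the retained-class feature subspace via SVD, then project gradient updates onto its orthogonal complement so updates cannot disturb retained knowledge. All function names, matrix shapes, and the rank `k` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def subspace_basis(features, k):
    # features: (n_samples, d) matrix of class features.
    # The top-k right singular vectors span the dominant feature subspace.
    _, _, vt = np.linalg.svd(features, full_matrices=False)
    return vt[:k].T  # (d, k) orthonormal basis

def project_out(vec, basis):
    # Remove the component of `vec` lying in span(basis):
    # v_perp = v - B (B^T v), i.e. projection onto the orthogonal complement.
    return vec - basis @ (basis.T @ vec)

rng = np.random.default_rng(0)
d = 16
remain_feats = rng.standard_normal((100, d))  # stand-in for retained-class features
B_remain = subspace_basis(remain_feats, k=4)  # retention subspace (rank 4, assumed)

grad = rng.standard_normal(d)                 # stand-in for an unlearning gradient
grad_orth = project_out(grad, B_remain)       # constrained update direction

# The projected gradient is (numerically) orthogonal to the retention subspace,
# so applying it cannot move the model within that subspace.
print(np.abs(B_remain.T @ grad_orth).max())
```

The same projection, applied to forgetting-class features rather than gradients, is the sketch's analogue of the paper's feature-level constraint; the dynamic merging and contraction of subspaces across tasks is not modeled here.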
Problem

Research questions and friction points this paper is trying to address.

Incremental Unlearning
Superficial Forgetting
Feature-Gradient Orthogonality
Data Deletion
Retention Balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature-Gradient Orthogonality
Incremental Unlearning
Deep Forgetting
Singular Value Decomposition
Dynamic Subspace Adaptation