Machine Unlearning in Low-Dimensional Feature Subspace

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently and securely removing the influence of specific data from pretrained models by proposing LOFT, a machine unlearning method that operates within a low-dimensional feature subspace. LOFT formulates unlearning as the optimization of a principal projection matrix which, after a single forward pass to extract features, attenuates the influence of the data to be forgotten while preserving model performance on the retained data. Leveraging the separability of the two data sets in this subspace, LOFT introduces a lightweight, plug-in projection mechanism that eliminates the need for repeated access to the original training data or full-model retraining, substantially reducing computational overhead and privacy risk. Extensive experiments demonstrate that LOFT consistently achieves superior unlearning efficacy and higher efficiency across diverse models, datasets, and tasks.

📝 Abstract
Machine Unlearning (MU) aims to remove the influence of specific data from a pretrained model while preserving performance on the remaining data. In this work, a novel perspective for MU is presented based on low-dimensional feature subspaces, which reveals the potential to separate the remaining and forgetting data therein. This separability motivates LOFT, a method that performs unlearning in a LOw-dimensional FeaTure subspace of the pretrained model through principal projections, which are optimized to maximally capture the information of the remaining data while diminishing that of the forgetting data. In training, LOFT simply optimizes a small projection matrix flexibly plugged into the pretrained model, and requires only one-shot feature fetching from the pretrained backbone instead of repeated access to the raw data. LOFT thereby mitigates two critical issues in mainstream MU methods: the privacy leakage risk from massive data reloading and the inefficiency of updating the entire pretrained model. Extensive experiments validate the significantly lower computational overhead and superior unlearning performance of LOFT across diverse models, datasets, tasks, and applications. Code is anonymously available at https://anonymous.4open.science/r/4352/.
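The core idea, a projection optimized to retain one feature set's information while suppressing another's, can be sketched as follows. This is a minimal illustration of a principal-projection construction, not the paper's actual method: the closed-form eigen-decomposition of a weighted difference of scatter matrices, the feature matrices, and the trade-off weight `lam` are all hypothetical stand-ins.

```python
import numpy as np

def principal_projection(feat_retain, feat_forget, k, lam=1.0):
    """Illustrative sketch: find a rank-k projection whose directions
    score high on retain-feature variance and low on forget-feature
    variance. Assumes features were fetched once from a frozen backbone."""
    # Second-moment (scatter) matrices of the two feature sets.
    S_r = feat_retain.T @ feat_retain / len(feat_retain)
    S_f = feat_forget.T @ feat_forget / len(feat_forget)
    # Directions maximizing retain energy minus lam * forget energy.
    M = S_r - lam * S_f
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    U = eigvecs[:, -k:]                    # top-k directions (orthonormal)
    return U @ U.T                         # d x d projection matrix

# One-shot feature fetch (random stand-ins for backbone features here).
rng = np.random.default_rng(0)
Xr = rng.normal(size=(100, 16))            # retained-data features
Xf = rng.normal(size=(20, 16)) + 3.0       # forgetting-data features (shifted)
P = principal_projection(Xr, Xf, k=8)      # small plug-in projection
```

The projection `P` is symmetric and idempotent, so it can be applied to backbone features as a cheap linear layer without touching the pretrained weights.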
Problem

Research questions and friction points this paper is trying to address.

Machine Unlearning
Low-dimensional Feature Subspace
Data Removal
Privacy Preservation
Model Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Machine Unlearning
Low-dimensional Feature Subspace
Principal Projection
One-shot Feature Extraction
Privacy-preserving