SEMU: Singular Value Decomposition for Efficient Machine Unlearning

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the urgent need for efficient and secure machine unlearning in generative foundation models. To overcome key bottlenecks—namely, reliance on full-parameter updates, high computational cost, and dependence on original training data—we propose a low-overhead, data-free unlearning method. For the first time, we integrate Singular Value Decomposition (SVD) into the machine unlearning framework, combining low-dimensional projection with sparse parameter updates to selectively erase specific harmful knowledge. Our approach modifies only ~8% of model parameters—reducing parameter updates by 92% on average—while eliminating the need for original training data entirely. It achieves forgetting accuracy comparable to state-of-the-art methods, significantly improves training stability, and enhances computational efficiency. The method provides a scalable, verifiable technical pathway compliant with emerging AI safety and regulatory requirements.

📝 Abstract
While the capabilities of generative foundational models have advanced rapidly in recent years, methods to prevent harmful and unsafe behaviors remain underdeveloped. Among the pressing challenges in AI safety, machine unlearning (MU) has become increasingly critical to meet upcoming safety regulations. Most existing MU approaches focus on altering the most significant parameters of the model. However, these methods often require fine-tuning substantial portions of the model, resulting in high computational costs and training instabilities, which are typically mitigated by access to the original training dataset. In this work, we address these limitations by leveraging Singular Value Decomposition (SVD) to create a compact, low-dimensional projection that enables the selective forgetting of specific data points. We propose Singular Value Decomposition for Efficient Machine Unlearning (SEMU), a novel approach designed to optimize MU in two key aspects. First, SEMU minimizes the number of model parameters that need to be modified, effectively removing unwanted knowledge while making only minimal changes to the model's weights. Second, SEMU eliminates the dependency on the original training dataset, preserving the model's previously acquired knowledge without additional data requirements. Extensive experiments demonstrate that SEMU achieves competitive performance while significantly improving efficiency in terms of both data usage and the number of modified parameters.
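The central idea, restricting an unlearning update to a compact subspace spanned by a few singular directions of a weight matrix, can be sketched as follows. This is a minimal illustration under assumed names (`svd_projected_update` and its parameters are hypothetical), not the authors' implementation; SEMU additionally combines such low-dimensional projections with sparse parameter updates.

```python
import numpy as np

def svd_projected_update(W, grad, rank):
    """Apply an unlearning-style update to W, but only within the
    low-dimensional subspace spanned by its top `rank` singular vectors.

    W    : (m, n) weight matrix
    grad : (m, n) raw update direction (e.g., from a forgetting loss)
    rank : number of singular directions allowed to change
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r, Vt_r = U[:, :rank], Vt[:rank, :]
    # Keep only the component of the update that lives in the subspace
    # of the leading left/right singular vectors; all other parameter
    # directions are left untouched.
    delta = U_r @ (U_r.T @ grad @ Vt_r.T) @ Vt_r
    return W - delta

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
grad = rng.standard_normal((64, 32))
W_new = svd_projected_update(W, grad, rank=4)
# The applied change has rank at most 4, so the effective number of
# modified directions is a small fraction of the full parameter space.
print(np.linalg.matrix_rank(W_new - W))
```

Because the projected change has rank at most `rank`, the update touches only a small slice of the weight space, which mirrors the paper's goal of minimizing the number of modified parameters.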
Problem

Research questions and friction points this paper is trying to address.

Improves machine unlearning efficiency
Reduces dependency on training data
Minimizes model parameter modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

SVD for compact projection
Minimizes modified parameters
Eliminates original dataset dependency
Marcin Sendera
PhD Student, Jagiellonian University, Research Intern at Mila - Quebec AI Institute,
deep learning, meta-learning, few-shot learning, generative models, normalizing flows
Lukasz Struski
Faculty of Mathematics and Computer Science, Jagiellonian University
Kamil Książek
Faculty of Mathematics and Computer Science, Jagiellonian University
Kryspin Musioł
Faculty of Mathematics and Computer Science, Jagiellonian University; Institute of Theoretical and Applied Informatics, Polish Academy of Sciences
Jacek Tabor
Professor of Computer Science, Jagiellonian University
mathematics, computer science
Dawid Rymarczyk
Faculty of Mathematics and Computer Science, Jagiellonian University; Ardigen SA