Robust Machine Unlearning for Quantized Neural Networks via Adaptive Gradient Reweighting with Similar Labels

📅 2025-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses two key challenges in implementing machine unlearning (MU) for quantized neural networks on edge devices: (1) semantic noise amplification caused by label mismatch under quantization, and (2) gradient imbalance between forget and retain data. We propose Q-MUL, the first quantization-aware MU framework. Its core contributions are: (1) a semantic similarity–guided label redistribution mechanism to suppress noise propagation induced by weight/activation discretization; and (2) an adaptive gradient reweighting strategy that dynamically balances update contributions from forget and retain subsets. Grounded in quantization-aware training and discrete optimization theory, Q-MUL achieves state-of-the-art performance across multiple benchmarks: residual accuracy after unlearning ≤ 0.5%, retained model utility ≥ 98% of the original accuracy, and full support for efficient 4-bit deployment.

Technology Category

Application Category

📝 Abstract
Model quantization enables efficient deployment of deep neural networks on edge devices through low-bit parameter representation, yet raises critical challenges for implementing machine unlearning (MU) under data privacy regulations. Existing MU methods designed for full-precision models fail to address two fundamental limitations in quantized networks: 1) Noise amplification from label mismatch during data processing, and 2) Gradient imbalance between forgotten and retained data during training. These issues are exacerbated by quantized models' constrained parameter space and discrete optimization. We propose Q-MUL, the first dedicated unlearning framework for quantized models. Our method introduces two key innovations: 1) Similar Labels assignment replaces random labels with semantically consistent alternatives to minimize noise injection, and 2) Adaptive Gradient Reweighting dynamically aligns parameter update contributions from forgotten and retained data. Through systematic analysis of quantized model vulnerabilities, we establish theoretical foundations for these mechanisms. Extensive evaluations on benchmark datasets demonstrate Q-MUL's superiority over existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses noise amplification in quantized neural networks.
Solves gradient imbalance in machine unlearning for quantized models.
Enhances data privacy compliance for edge-deployed neural networks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Similar Labels assignment reduces noise injection
Adaptive Gradient Reweighting balances parameter updates
Q-MUL framework designed for quantized models
🔎 Similar Papers
No similar papers found.