Fully Decentralized Certified Unlearning

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses certified machine unlearning in decentralized networks without a central coordinator, a capability required for compliance with privacy regulations (e.g., GDPR’s “right to be forgotten”) and for defense against data poisoning attacks, in which the influence of specific training samples on the global model must be provably erased. The authors propose RR-DU, a novel method leveraging random-walk-based message propagation, integrated with projected gradient ascent, geometrically decaying step sizes, subsampled Gaussian noise, Rényi differential privacy, and trust-region constraints. To the authors’ knowledge, RR-DU is the first approach to achieve network-level (ε,δ)-certified unlearning on fixed-topology decentralized networks; it also provides a theoretical bound on deletion capacity and characterizes the distinct impact of decentralization on the privacy–utility trade-off. Experiments on MNIST and CIFAR-10 demonstrate that, under identical (ε,δ) privacy budgets, RR-DU outperforms decentralized DP baselines in test accuracy while reducing prediction accuracy on the forget set to ≈10%, approaching the random-guess baseline.

📝 Abstract
Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and server-orchestrated federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator -- remains underexplored. We study certified unlearning in decentralized networks with fixed topologies and propose RR-DU, a random-walk procedure that performs one projected gradient ascent step on the forget set at the unlearning client and a geometrically distributed number of projected descent steps on the retained data elsewhere, combined with subsampled Gaussian noise and projection onto a trust region around the original model. We provide (i) convergence guarantees in the convex case and stationarity guarantees in the nonconvex case, (ii) $(\varepsilon,\delta)$ network-unlearning certificates on client views via subsampled Gaussian Rényi DP (RDP) with segment-level subsampling, and (iii) deletion-capacity bounds that scale with the forget-to-local data ratio and quantify the effect of decentralization (network mixing and randomized subsampling) on the privacy--utility trade-off. Empirically, on image benchmarks (MNIST, CIFAR-10), RR-DU matches a given $(\varepsilon,\delta)$ while achieving higher test accuracy than decentralized DP baselines and reducing forget accuracy to random guessing ($\approx 10\%$).
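The abstract's update can be illustrated with a minimal sketch: one projected ascent step on the forget-set gradient, a geometrically distributed number of decaying descent steps on retained data, Gaussian noise, and a trust-region projection. All names, the geometric parameter `p=0.5`, and the single end-of-walk projection are illustrative assumptions, not the paper's exact procedure (which uses segment-level subsampling along a random walk).

```python
import numpy as np

def rr_du_step(theta, theta_orig, forget_grad, retain_grads,
               sigma, radius, eta0, decay, rng):
    """Hypothetical RR-DU-style unlearning update (sketch, not the paper's algorithm)."""
    # One gradient *ascent* step on the forget set at the unlearning client.
    theta = theta + eta0 * forget_grad
    # Geometrically distributed number of *descent* steps on retained data,
    # visited along the random walk, with geometrically decaying step sizes.
    k = rng.geometric(p=0.5)
    for t in range(min(k, len(retain_grads))):
        eta_t = eta0 * (decay ** t)
        theta = theta - eta_t * retain_grads[t]
    # Gaussian noise injection (the paper uses *subsampled* Gaussian noise).
    theta = theta + sigma * rng.standard_normal(theta.shape)
    # Projection onto an L2 trust region of the given radius around the
    # original model, keeping the unlearned model close to the trained one.
    diff = theta - theta_orig
    norm = np.linalg.norm(diff)
    if norm > radius:
        theta = theta_orig + radius * diff / norm
    return theta
```

The trust-region projection at the end is what bounds how far a single unlearning request can move the model, which is the geometric ingredient behind the deletion-capacity analysis.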
Problem

Research questions and friction points this paper is trying to address.

Certified unlearning in decentralized networks with fixed topologies.
Removing influence of specified data from trained models for privacy.
Balancing privacy-utility trade-off via decentralized random-walk procedures.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random-walk gradient ascent-descent with noise injection
Subsampled Gaussian Rényi DP for network-wide certificates
Trust region projection to maintain model stability
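The network-wide certificate rests on Rényi DP accounting. As a minimal illustration only (the standard Gaussian-mechanism RDP bound with unit sensitivity plus the usual RDP-to-(ε,δ) conversion, not the paper's segment-level subsampled analysis), the accounting step can be sketched as:

```python
import math

def gaussian_rdp_epsilon(alpha, sigma):
    # RDP of the Gaussian mechanism with L2 sensitivity 1:
    # eps_RDP(alpha) = alpha / (2 * sigma^2)
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(sigma, delta, alphas=range(2, 256)):
    # Standard conversion: eps = min over alpha of
    # eps_RDP(alpha) + log(1/delta) / (alpha - 1)
    return min(gaussian_rdp_epsilon(a, sigma) + math.log(1.0 / delta) / (a - 1)
               for a in alphas)
```

Subsampling (as in RR-DU's segment-level scheme) amplifies privacy, so the actual `eps_RDP` curve would lie below this plain-Gaussian bound.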