🤖 AI Summary
This work identifies a novel privacy threat in federated unlearning: when gradient differences serve as verifiable Proofs of Federated Unlearning (PoFU), an honest-but-curious auditor can reconstruct the unlearned samples by inverting those gradient differences. To address this, we propose IGF, the first learning-based framework for inverting gradient differences, which integrates SVD-based dimensionality reduction, a pixel-level inverse-generation model, and a composite structural-semantic loss to achieve high-fidelity image reconstruction (PSNR improved by 8.2 dB). We further design an orthogonal obfuscation defense that preserves PoFU verification accuracy at ≥99.1% while raising the reconstruction failure rate above 99.7%. This is the first work to reveal that gradient differences implicitly encode the semantic content of unlearned data, and the first federated unlearning scheme to provide verifiability and strong privacy protection simultaneously.
📝 Abstract
Federated Unlearning (FU) has emerged as a critical compliance mechanism for data privacy regulations, requiring unlearned clients to provide a verifiable Proof of Federated Unlearning (PoFU) to auditors upon data removal requests. However, we uncover a significant privacy vulnerability: when gradient differences are used as PoFU, honest-but-curious auditors may exploit mathematical correlations between gradient differences and forgotten samples to reconstruct those samples. Mounting such a reconstruction, however, faces three key challenges: (i) restricted auditor access to client-side data, (ii) the limited number of samples derivable from an individual PoFU, and (iii) high-dimensional redundancy in gradient differences. To overcome these challenges, we propose Inverting Gradient difference to Forgotten data (IGF), a novel learning-based reconstruction attack framework that employs Singular Value Decomposition (SVD) for dimensionality reduction and feature extraction. IGF incorporates a tailored pixel-level inversion model optimized via a composite loss that captures both structural and semantic cues, enabling efficient, high-fidelity reconstruction of large-scale samples and surpassing existing methods. To counter this attack, we design an orthogonal obfuscation defense that preserves PoFU verification utility while preventing reconstruction of sensitive forgotten data. Experiments across multiple datasets validate the effectiveness of the attack and the robustness of the defense. The code is available at https://anonymous.4open.science/r/IGF.
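To make the SVD step concrete, the sketch below shows one plausible way a per-layer gradient-difference matrix could be compressed into a compact, energy-ordered feature vector before being fed to an inversion model. This is an illustrative assumption, not the paper's exact pipeline: the function name `svd_features`, the choice of keeping the top-k left singular vectors scaled by their singular values, and the value `k = 8` are all hypothetical.

```python
import numpy as np

def svd_features(grad_diff: np.ndarray, k: int = 8) -> np.ndarray:
    """Compress a 2-D gradient-difference matrix into a low-dimensional
    feature vector via truncated SVD (hypothetical preprocessing step)."""
    # Economy-size SVD: grad_diff = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(grad_diff, full_matrices=False)
    k = min(k, s.shape[0])
    # Keep the top-k left singular vectors, scaled by their singular
    # values, so the strongest directions dominate the feature vector.
    return (U[:, :k] * s[:k]).ravel()

rng = np.random.default_rng(0)
g = rng.standard_normal((64, 128))   # stand-in for one layer's gradient difference
feat = svd_features(g, k=8)
print(feat.shape)                    # (512,): 64 rows x 8 retained components
```

The point of the truncation is the third challenge named in the abstract: the raw gradient difference is highly redundant, so a low-rank summary can retain most of its usable signal at a fraction of the dimensionality.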
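The defense can likewise be sketched under a simplifying assumption. Suppose the auditor's PoFU check depends only on the projection of the reported gradient difference onto the true difference direction; then noise constructed to be orthogonal to that direction (one Gram-Schmidt step) leaves the check unchanged while heavily perturbing the raw coordinates an attacker would invert. The function name `orthogonal_obfuscate`, the projection-based verification model, and the noise scale are all assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def orthogonal_obfuscate(g: np.ndarray, scale: float = 5.0, rng=None) -> np.ndarray:
    """Add noise orthogonal to g (one Gram-Schmidt step), so any statistic
    based on the projection onto g's direction is preserved while the
    raw coordinates are heavily perturbed. Hypothetical sketch."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.standard_normal(g.shape)
    # Subtract the component of n along g, leaving n orthogonal to g.
    n -= (n @ g) / (g @ g) * g
    return g + scale * n

rng = np.random.default_rng(1)
g = rng.standard_normal(1000)          # stand-in flattened gradient difference
g_obf = orthogonal_obfuscate(g, scale=5.0, rng=rng)

# Projection onto g's direction survives; the coordinates do not.
print(np.allclose(g_obf @ g, g @ g))   # True (up to floating-point error)
print(np.allclose(g_obf, g))           # False
```

This captures the trade-off the abstract claims: verification utility is preserved (the projection is numerically unchanged) while the obfuscated vector no longer resembles the original gradient difference that a reconstruction attack would need.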