🤖 AI Summary
Addressing the challenge of simultaneously achieving entity-level selective forgetting and client-level complete forgetting in federated graph learning (FGL), this paper proposes the first unified dual-branch unlearning framework. The framework employs a prototype-gradient-guided meta-unlearning mechanism to trace and erase cross-client knowledge, and leverages adversarial graph generation to reconstruct local subgraphs and eliminate residual relational knowledge. Designed for plug-and-play integration, it is compatible with mainstream FGL systems. Extensive experiments on multiple benchmark datasets show that the proposed method significantly outperforms existing approaches on both forgetting tasks: accuracy improves by up to 12.7% after entity unlearning, and model bias decreases by 41.3% after client unlearning, all while inference performance remains stable. These results validate the framework's effectiveness and generalizability as a privacy-enhancing module for FGL.
📝 Abstract
The demand for data privacy has driven the development of frameworks such as Federated Graph Learning (FGL), which enable decentralized model training. A significant operational challenge in such systems, however, is honoring the right to be forgotten. This principle requires robust mechanisms for two distinct types of data removal: the selective erasure of specific entities and their associated knowledge from local subgraphs, and the wholesale removal of a user's entire dataset and influence. Existing methods often fail to fully address both unlearning requirements, frequently leaving data removal incomplete or residual knowledge persisting within the system. This work introduces a unified framework designed as a comprehensive solution to these challenges. The framework applies a bifurcated strategy tailored to the specific unlearning request. For fine-grained meta unlearning, it uses prototype gradients to direct the initial local forgetting process, which is then refined by generating adversarial graphs to eliminate any remaining data traces among affected clients. For complete client unlearning, the framework relies exclusively on adversarial graph generation to purge the departed client's contributions from the remaining network. Extensive experiments on multiple benchmark datasets validate the proposed approach: the framework achieves substantial improvements in model prediction accuracy in both client and meta-unlearning scenarios compared with existing methods. Additional studies further confirm its utility as a plug-in module, materially enhancing the predictive capabilities and unlearning effectiveness of other established methods.
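The bifurcated dispatch described in the abstract can be sketched roughly as follows. All function names, signatures, and the simplified prototype/aggregation arithmetic here are illustrative assumptions rather than the paper's actual implementation; in particular, the adversarial graph generation step is only noted in comments, not implemented.

```python
# Hypothetical sketch of a dual-branch unlearning dispatch; names and
# arithmetic are illustrative assumptions, not the paper's code.
import numpy as np

def prototype_gradient(current_protos, retained_protos):
    """Direction moving class prototypes toward prototypes recomputed
    without the forgotten entities (illustrative stand-in)."""
    return retained_protos - current_protos

def meta_unlearn(current_protos, retained_protos, lr=0.5, steps=4):
    """Branch 1 (entity-level): prototype gradients guide the initial
    local forgetting. In the full method this would be refined by
    adversarial graph generation across affected clients (omitted)."""
    protos = current_protos.copy()
    for _ in range(steps):
        protos = protos + lr * prototype_gradient(protos, retained_protos)
    return protos

def client_unlearn(global_update, departed_update, n_clients):
    """Branch 2 (client-level): subtract the departed client's averaged
    contribution from the server aggregate (a simplified stand-in for
    purging its influence; adversarial refinement omitted)."""
    return (global_update * n_clients - departed_update) / (n_clients - 1)
```

The point of the sketch is only the dispatch structure: entity-level requests adjust local representations guided by prototype gradients, while client-level requests remove an entire client's contribution from the aggregate before any adversarial refinement.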