AI Summary
This work addresses the limitation of existing federated recommendation systems, which typically assume uniform user privacy preferences and thus fail to support personalized data sharing and withdrawal. To overcome this, we propose FedShare, a novel framework that, for the first time, enables both dynamic personalized data sharing and efficient unlearning in federated recommendation settings. Users can selectively share interaction data to enhance recommendation quality and later revoke their data along with its influence on the model. Notably, FedShare eliminates the need to store extensive historical gradients; instead, it leverages only a few embedding snapshots to precisely erase data traces. By integrating server-side high-order user-item graph modeling, contrastive learning for aligning local and global representations, and a snapshot-based contrastive unlearning mechanism, FedShare achieves state-of-the-art recommendation performance while significantly reducing storage overhead during unlearning, as demonstrated on three public benchmarks.
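The summary mentions contrastive learning that aligns each user's local representation with its server-side global counterpart. A common way to realize such alignment is an InfoNCE-style objective, where matching local/global pairs are pulled together and mismatched pairs act as negatives. The sketch below is a minimal illustration of that general idea in NumPy; the function name, temperature, and loss form are assumptions for exposition, not FedShare's actual objective.

```python
import numpy as np

def info_nce_alignment(local_emb, global_emb, temperature=0.2):
    """Illustrative InfoNCE loss aligning local and global user embeddings.

    local_emb, global_emb: (n_users, d) arrays; row i of each is the same
    user's representation from the client model and the server-side graph.
    NOTE: an illustrative stand-in, not the paper's exact loss.
    """
    # L2-normalize so the dot product becomes cosine similarity
    l = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    logits = l @ g.T / temperature  # (n, n): row i vs. every global embedding
    # Positive pairs sit on the diagonal; all other columns are negatives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
noise = rng.normal(size=(8, 16)) * 0.05
aligned = info_nce_alignment(emb, emb + noise)            # matching pairs
mismatched = info_nce_alignment(emb, np.roll(emb + noise, 1, axis=0))
# the loss is low when local/global pairs agree and high when they are shuffled
```

The contrast between `aligned` and `mismatched` is what makes the loss useful for alignment: minimizing it drives each local embedding toward its own global counterpart and away from other users' representations.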
Abstract
Federated recommender systems (FedRS) have emerged as a paradigm for protecting user privacy by keeping interaction data on local devices while coordinating model training through a central server. However, most existing FedRS adopt a one-size-fits-all assumption about user privacy, where all users are required to keep their data strictly local. This setting overlooks users who are willing to share their data with the server in exchange for better recommendation performance. Although several recent studies have explored personalized user data sharing in FedRS, they assume static user privacy preferences and cannot handle user requests to remove previously shared data and its corresponding influence on the trained model. To address this limitation, we propose FedShare, a federated learn-unlearn framework for recommender systems with personalized user data sharing. FedShare not only allows users to control how much interaction data is shared with the server, but also supports data unsharing requests by removing the influence of the unshared data from the trained model. Specifically, FedShare leverages shared data to construct a server-side high-order user-item graph and uses contrastive learning to jointly align local and global representations. In the unlearning phase, we design a contrastive unlearning mechanism that selectively removes representations induced by the unshared data using a small number of historical embedding snapshots, avoiding the need to store large amounts of historical gradient information as required by existing federated recommendation unlearning methods. Extensive experiments on three public datasets demonstrate that FedShare achieves strong recommendation performance in both the learning and unlearning phases, while significantly reducing storage overhead in the unlearning phase compared with state-of-the-art baselines.
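The unlearning phase described above uses a few stored embedding snapshots rather than a full gradient history. One way to picture a snapshot-based, contrastive-style update is: embeddings tied to the unshared data are pushed away from their stored snapshot (erasing the trace the shared data left), while the remaining embeddings are pulled back toward theirs (preserving utility). The update rule, names, and learning rate below are assumptions for illustration only, not FedShare's actual mechanism.

```python
import numpy as np

def unlearning_step(current, snapshot, forget_idx, lr=0.5):
    """One illustrative snapshot-based unlearning update.

    current : (n, d) current server-side embeddings.
    snapshot: (n, d) embeddings saved when the data was shared.
    forget_idx: indices whose shared interactions are being revoked.
    NOTE: a sketch of the general idea, not the paper's exact procedure.
    """
    forget = np.zeros(len(current), dtype=bool)
    forget[list(forget_idx)] = True
    diff = current - snapshot                # displacement since the snapshot
    updated = current.copy()
    updated[forget] += lr * diff[forget]     # repel: amplify the displacement
    updated[~forget] -= lr * diff[~forget]   # attract: shrink the displacement
    return updated

rng = np.random.default_rng(1)
snap = rng.normal(size=(6, 4))
curr = snap + rng.normal(size=(6, 4)) * 0.3  # model has drifted since snapshot
new = unlearning_step(curr, snap, forget_idx={0, 1})
d_before = np.linalg.norm(curr - snap, axis=1)
d_after = np.linalg.norm(new - snap, axis=1)
# forgotten rows end up farther from their snapshot, retained rows closer
```

Because only the snapshots are stored, the per-user storage cost is a handful of d-dimensional vectors, which is the storage saving the abstract contrasts with gradient-history-based unlearning methods.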