🤖 AI Summary
Federated recommendation systems (FedRec) suffer from embedding degradation—characterized by weakened personalization and dimensional collapse—caused by sparse user interactions and heterogeneous preferences. To address this, we propose PLGC, a model-agnostic personalized training strategy and, to our knowledge, the first to systematically mitigate dimensional collapse in FedRec. PLGC freezes the global item embedding table on each device, introduces a dynamic balancing mechanism, and employs a contrastive learning objective that suppresses embedding redundancy. It further leverages the neural tangent kernel to dynamically fuse local and global information, thereby improving the geometric structure of the embedding space. Extensive experiments on five real-world datasets demonstrate that PLGC consistently outperforms state-of-the-art baselines, enhancing both the quality of personalized representations and generalization performance.
📝 Abstract
Centralized recommender systems risk privacy leakage because they must collect user behavior and other private data. Federated recommender systems (FedRec), which aggregate a global model on the server while keeping data on user devices, have therefore become a promising alternative. However, this distributed training paradigm suffers from embedding degradation, caused by suboptimal personalization and dimensional collapse, which in turn stem from sparse interactions and heterogeneous preferences. To this end, we propose Personalized Local-Global Collaboration (PLGC), a novel model-agnostic strategy that strengthens the utility of personalized embeddings in FedRec; it is, to our knowledge, the first work in federated recommendation to alleviate the dimensional collapse issue. Specifically, we incorporate a frozen global item embedding table into local devices. Based on a Neural Tangent Kernel strategy that dynamically balances local and global information, PLGC optimizes personalized representations during forward inference, ultimately converging to user-specific preferences. Additionally, PLGC employs a contrastive objective that reduces embedding redundancy by dissolving dependencies between dimensions, thereby improving the backward representation learning process. As a model-agnostic personalized training strategy, PLGC can be applied on top of existing baselines to alleviate embedding degradation. Extensive experiments on five real-world datasets demonstrate the effectiveness and adaptability of PLGC, which outperforms various baseline algorithms.
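The two mechanisms the abstract describes can be sketched minimally. Below is an illustrative Python sketch, not the paper's implementation: `fuse` shows a convex local-global embedding combination (the paper derives the balancing weight from a Neural Tangent Kernel analysis; here `alpha` is just an illustrative scalar), and `decorrelation_loss` shows a redundancy-reduction penalty, in the spirit of Barlow-Twins-style objectives, that pushes the cross-dimension correlation matrix of the embeddings toward the identity, penalizing dependencies between dimensions that drive dimensional collapse. All names and the `lam` weight are assumptions for illustration.

```python
import numpy as np

def fuse(local_emb: np.ndarray, global_emb: np.ndarray, alpha: float) -> np.ndarray:
    """Convex combination of a device's local embedding and the frozen
    global embedding. In PLGC the balance is set dynamically (via NTK);
    here alpha is a fixed illustrative weight in [0, 1]."""
    return alpha * local_emb + (1.0 - alpha) * global_emb

def decorrelation_loss(z: np.ndarray, lam: float = 5e-3) -> float:
    """Redundancy-reduction penalty on a batch of embeddings z (batch, dim).

    Standardize each dimension, form the dim x dim correlation matrix,
    then penalize (a) diagonal entries deviating from 1 and
    (b) any non-zero off-diagonal correlation (inter-dimension dependency)."""
    z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-8)
    c = (z.T @ z) / z.shape[0]                      # correlation matrix
    on_diag = float(((np.diag(c) - 1.0) ** 2).sum())
    off_diag = float((c ** 2).sum() - (np.diag(c) ** 2).sum())
    return on_diag + lam * off_diag
```

With perfectly decorrelated dimensions the loss is near zero, while duplicated (fully redundant) dimensions incur a positive off-diagonal penalty, which is exactly the degenerate geometry the contrastive objective is meant to discourage.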