TransFR: Transferable Federated Recommendation with Pre-trained Language Models

📅 2024-02-02
🏛️ arXiv.org
📈 Citations: 4
Influential: 2
🤖 AI Summary
Traditional federated recommendation systems (FRS) suffer from three critical bottlenecks: poor cross-domain transferability, ineffectiveness under cold-start conditions, and potential privacy leakage during federated training. To address these challenges, this paper proposes TransFR, a transferable framework that integrates general-purpose textual representations with federated learning. The method leverages pre-trained language models (e.g., BERT) to generate domain-agnostic item embeddings from public textual corpora, eliminating reliance on discrete item IDs, and pairs an efficient federated fine-tuning procedure with locally trained, personalized prediction heads. Because each client's head is fitted on private behavior data that never leaves the device, the design mitigates the privacy risks of exchanging user-specific parameters. Extensive experiments on several benchmark datasets show that TransFR surpasses state-of-the-art federated recommenders in accuracy, cross-domain transferability, and cold-start performance while preserving privacy.
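
As a rough illustration of the first step, the sketch below encodes item text with a frozen pre-trained language model in place of an ID embedding table. The specific checkpoint (bert-base-uncased), the mean-pooling strategy, and the example titles are assumptions for illustration, not details taken from the paper.

```python
# Sketch: domain-agnostic item embeddings from item text via a frozen PLM.
# Model choice (bert-base-uncased) and mean pooling are assumptions; the
# paper only specifies "pre-trained language models" on public corpora.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # frozen: item representations come from pre-training, not IDs

@torch.no_grad()
def embed_items(texts: list[str]) -> torch.Tensor:
    """Encode item titles/descriptions into ID-free semantic vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # masked mean pooling

# Items from different domains share one embedding space, so the
# representation transfers without re-learning a per-domain ID table.
item_vecs = embed_items(["wireless noise-cancelling headphones",
                         "a sci-fi novel about first contact"])
print(item_vecs.shape)  # torch.Size([2, 768])
```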

📝 Abstract
Federated recommendations (FRs), which enable multiple local clients to collectively learn a global model without disclosing private user data, have emerged as a prevalent architecture for privacy-preserving recommendation. In conventional FRs, the dominant paradigm is to use discrete identities to represent users/clients and items, which are then mapped to domain-specific embeddings that participate in model training. Despite their considerable performance, we reveal three inherent limitations that cannot be ignored in federated settings: non-transferability across domains, unavailability in cold-start settings, and potential privacy violations during federated training. To this end, we propose TransFR, a transferable federated recommendation model with universal textual representations, which delicately combines the general capabilities empowered by pre-trained language models with the personalized abilities gained by fine-tuning on local private data. Specifically, it first learns domain-agnostic representations of items by exploiting pre-trained models on public textual corpora. To tailor this to federated recommendation, we further introduce an efficient federated fine-tuning mechanism and a local training mechanism, which yield a personalized local head for each client trained on its private behavior data. By incorporating pre-training and fine-tuning within FRs, TransFR greatly improves adaptation efficiency when transferring to a new domain and generalization capacity for cold-start issues. Through extensive experiments on several datasets, we demonstrate that TransFR surpasses several state-of-the-art FRs in terms of accuracy, transferability, and privacy.
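
To make the shared-body / local-head split concrete, here is a minimal sketch of one federated round, assuming FedAvg aggregation, a small MLP head, and a single linear layer standing in for the tunable body; none of these specifics are claimed to match the paper's exact configuration.

```python
# Sketch of the federated split: clients share and average a common body,
# while each client's prediction head stays local. FedAvg and the small
# MLP head are illustrative assumptions, not the paper's configuration.
import copy
import torch
import torch.nn as nn

class LocalHead(nn.Module):
    """Per-client head scoring (user state, item embedding) pairs."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, user_vec, item_vec):
        return self.mlp(torch.cat([user_vec, item_vec], dim=-1)).squeeze(-1)

def fedavg(state_dicts):
    """Server-side averaging of the shared-body parameters."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Stand-in bodies (e.g., adapter layers on the frozen PLM), one copy per
# client; after local fine-tuning, only body weights are uploaded and
# averaged, while each LocalHead never leaves its device.
client_bodies = [nn.Linear(768, 768) for _ in range(3)]
client_heads = [LocalHead() for _ in range(3)]  # private, never uploaded

server_body = nn.Linear(768, 768)
server_body.load_state_dict(fedavg([b.state_dict() for b in client_bodies]))
```
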
Problem

Research questions and friction points this paper is trying to address.

Enhances transferability across domains in federated recommendations
Addresses cold-start ineffectiveness in federated recommendation systems
Mitigates privacy risks during federated training processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes pre-trained models for domain-agnostic representations
Introduces federated adapter-tuning for personalization
Implements test-time adaptation for private data fitting (see the sketch after this list)
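
The test-time adaptation idea could look roughly like the following: with the transferred body frozen, a client fits only its private head on on-device interactions. The BCE objective, the Adam optimizer, and the head/body interfaces (matching the sketch after the abstract) are illustrative assumptions, not the paper's stated setup.

```python
# Sketch of test-time adaptation: the transferred body is frozen and only
# the client's private head is fitted on local interactions. Loss and
# optimizer choices here are assumptions made for illustration.
import torch
import torch.nn as nn

def adapt_head(head: nn.Module, body: nn.Module,
               user_vec: torch.Tensor,    # (dim,) user state
               item_vecs: torch.Tensor,   # (N, dim) PLM item embeddings
               labels: torch.Tensor,      # (N,) observed feedback as 0/1 floats
               steps: int = 50, lr: float = 1e-3) -> nn.Module:
    body.requires_grad_(False)             # transferred knowledge stays fixed
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        scores = head(user_vec.expand(len(item_vecs), -1), body(item_vecs))
        loss_fn(scores, labels).backward() # gradients touch only local data
        opt.step()
    return head
```
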
Honglei Zhang
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education; School of Computer and Information Technology, Beijing Jiaotong University, China

He Liu
School of Computer and Information Technology, Beijing Jiaotong University, China

Haoxuan Li
Center for Data Science, Peking University, China

Yidong Li
Beijing Jiaotong University
Research interests: privacy preserving, data mining, social network analysis, multimedia computing