A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning

📅 2025-11-16
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
In federated learning, personalized fine-tuning (PFT) often distorts globally learned features under data heterogeneity, degrading both generalization and personalization. To address this, we adapt LP-FT (Kumar et al., 2022), a two-stage fine-tuning strategy, to the federated setting: first, linear probing trains only the classifier head on a frozen backbone, preserving global feature integrity; second, full-model fine-tuning adapts all parameters to the local data distribution. We identify and characterize federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically establish conditions (e.g., partial feature overlap, covariate-concept shift) under which LP-FT outperforms standard fine-tuning. Experiments across seven datasets and six PFT variants show that LP-FT consistently balances personalization and generalization, stabilizing global representations and improving robustness under domain shift.
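As a concrete illustration, here is a minimal sketch of the two-stage procedure on a single client, assuming a PyTorch-style model split into a feature backbone and a classifier head. The function name `lp_ft`, the epoch counts, and the learning rates are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

def lp_ft(model: nn.Module, head: nn.Module, loader,
          lp_epochs=5, ft_epochs=5, lp_lr=1e-2, ft_lr=1e-4, device="cpu"):
    """Two-stage LP-FT on one client's local data (illustrative sketch).

    Stage 1 (linear probing): the backbone is frozen and only the
    classifier head is trained, leaving globally learned features intact.
    Stage 2 (full fine-tuning): all parameters are unfrozen and updated
    with a small learning rate, adapting to the local distribution
    starting from a well-aligned head.
    """
    criterion = nn.CrossEntropyLoss()
    model, head = model.to(device), head.to(device)

    # --- Stage 1: linear probing on a frozen backbone ---
    for p in model.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(head.parameters(), lr=lp_lr)
    for _ in range(lp_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = criterion(head(model(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # --- Stage 2: constrained full fine-tuning at a small learning rate ---
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()), lr=ft_lr)
    for _ in range(ft_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = criterion(head(model(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model, head
```

The key design choice is the stage order: training the head first gives full fine-tuning a well-aligned starting point, so stage-2 gradients perturb the shared features far less than fine-tuning from a random head would.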

📝 Abstract
Federated Learning (FL) enables decentralized, privacy-preserving model training but struggles to balance global generalization and local personalization due to non-identical data distributions across clients. Personalized Fine-Tuning (PFT), a popular post-hoc solution, fine-tunes the final global model locally but often overfits to skewed client distributions or fails under domain shifts. We propose adapting Linear Probing followed by full Fine-Tuning (LP-FT), a principled centralized strategy for alleviating feature distortion (Kumar et al., 2022), to the FL setting. Through systematic evaluation across seven datasets and six PFT variants, we demonstrate LP-FT's superiority in balancing personalization and generalization. Our analysis uncovers federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically characterizes how LP-FT mitigates this via phased parameter updates. We further establish conditions (e.g., partial feature overlap, covariate-concept shift) under which LP-FT outperforms standard fine-tuning, offering actionable guidelines for deploying robust personalization in FL.
Problem

Research questions and friction points this paper is trying to address.

Balancing global generalization with local personalization in federated learning
Addressing overfitting to skewed client data distributions during fine-tuning
Mitigating federated feature distortion caused by local fine-tuning processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts LP-FT, a centralized two-stage fine-tuning strategy, to the federated setting
Phased parameter updates mitigate federated feature distortion (a per-client sketch follows this list)
Balances personalization and generalization across diverse datasets
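A hypothetical sketch of how the staged update would be applied post hoc in a federation: each client copies the final global model and personalizes it locally with the `lp_ft` routine sketched earlier. The helper name `personalize_clients` and the per-client loader dictionary are illustrative assumptions, not part of the paper's code.

```python
import copy

def personalize_clients(global_model, global_head, client_loaders, **lp_ft_kwargs):
    """Per-client post-hoc personalization of the final global model."""
    personalized = {}
    for cid, loader in client_loaders.items():
        # Each client starts from its own copy of the broadcast global model,
        # so local LP-FT on one client never touches another client's copy.
        model = copy.deepcopy(global_model)
        head = copy.deepcopy(global_head)
        personalized[cid] = lp_ft(model, head, loader, **lp_ft_kwargs)
    return personalized
```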
👥 Authors
Minghui Chen
University of British Columbia & Vector Institute
Hrad Ghoukasian
McMaster University & Vector Institute
Ruinan Jin
University of British Columbia & Vector Institute
Zehua Wang
Prof. of Blockchain at UBC
blockchain systems, cybersecurity, mechanism design, communication systems
Sai Praneeth Karimireddy
USC
Machine Learning, Optimization, Privacy, Federated Learning, Data Economy
Xiaoxiao Li
University of British Columbia & Vector Institute