APFL: Analytic Personalized Federated Learning via Dual-Stream Least Squares

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak generalization and limited personalization in personalized federated learning (PFL) under non-IID data, this paper proposes a dual-stream least-squares analytic framework. It freezes a shared backbone network to extract robust features and decouples learning into a shared primary stream, which ensures global generalization, and a client-specific refinement stream, which enables individual customization and admits an analytically derived closed-form least-squares solution. Theoretically, this design guarantees heterogeneity invariance: each client's personalized model remains identical regardless of how data are distributed across the other clients, isolating it from interference by their data. Extensive experiments on multiple benchmark datasets show that the method consistently outperforms state-of-the-art approaches, with accuracy gains of 1.10%–15.45%. The approach is computationally efficient, robust to data heterogeneity, and theoretically interpretable.

📝 Abstract
Personalized Federated Learning (PFL) faces the significant challenge of delivering personalized models to individual clients through collaborative training. Existing PFL methods are often vulnerable to non-IID data, which severely hinders collective generalization and thereby compromises subsequent personalization. In this paper, to address this non-IID issue in PFL, we propose an Analytic Personalized Federated Learning (APFL) approach via dual-stream least squares. In our APFL, we use a foundation model as a frozen backbone for feature extraction. On top of the feature extractor, we develop dual-stream analytic models to achieve both collective generalization and individual personalization. Specifically, our APFL incorporates a shared primary stream for global generalization across all clients, and a dedicated refinement stream for local personalization of each individual client. The analytical solutions of our APFL yield its ideal property of heterogeneity invariance: theoretically, each personalized model remains identical regardless of how heterogeneously the data are distributed across all other clients. Empirical results across various datasets also validate the superiority of our APFL over state-of-the-art baselines, with accuracy advantages ranging from 1.10% to 15.45%.
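The dual-stream construction described above can be sketched as two closed-form least-squares heads on top of frozen backbone features. The following is a minimal illustrative sketch, not the paper's exact formulation: the shared primary head is fit from aggregated sufficient statistics (so no raw data leaves a client), and each client's refinement head is fit to its own local residuals. The ridge regularizer `lam`, the residual-fitting form, and all function names are assumptions for illustration.

```python
import numpy as np

def ridge_solution(X, Y, lam=1e-3):
    """Closed-form regularized least squares: W = (X^T X + lam*I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
# Simulated frozen-backbone features X (50 samples, 8-dim) and targets Y
# (4 classes) for 3 clients; in the paper these would come from a frozen
# foundation-model feature extractor.
clients = [(rng.normal(size=(50, 8)), rng.normal(size=(50, 4))) for _ in range(3)]

# Shared primary stream: the server aggregates each client's sufficient
# statistics X^T X and X^T Y, then solves once in closed form.
lam = 1e-3
XtX = sum(X.T @ X for X, _ in clients)
XtY = sum(X.T @ Y for X, Y in clients)
W_primary = np.linalg.solve(XtX + lam * np.eye(8), XtY)

# Client-specific refinement stream: each client analytically fits the
# residual its data leaves after the shared primary head.
refinements = []
for X, Y in clients:
    residual = Y - X @ W_primary
    refinements.append(ridge_solution(X, residual, lam))

# A client's personalized prediction combines both streams.
X0, Y0 = clients[0]
pred = X0 @ (W_primary + refinements[0])
```

Because the primary stream is built from summed sufficient statistics, it has a unique closed-form solution, and each refinement head depends only on that shared solution plus the client's own data, which is the flavor of the heterogeneity-invariance property claimed in the abstract.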
Problem

Research questions and friction points this paper is trying to address.

Addresses non-IID data challenges in Personalized Federated Learning
Proposes dual-stream least squares for global and local model optimization
Ensures heterogeneity invariance in personalized models across clients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-stream least squares for PFL
Frozen backbone foundation model
Heterogeneity invariance via analytical solutions