One-shot Federated Learning Methods: A Practical Guide

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically analyzes performance bottlenecks in one-shot federated learning (OFL) arising from data and model heterogeneity. It proposes the first unified taxonomy for OFL methods, encompassing model aggregation optimization, client adaptation, heterogeneity alignment, meta-learning, and knowledge distillation. For the first time, it characterizes the inherent trade-offs between performance and efficiency across heterogeneous mitigation strategies and identifies critical deployment barriers. The study constructs a structured panoramic map of OFL methodologies, clarifying promising directions for technical integration. Empirical validation demonstrates that single-round collaborative modeling reduces communication overhead by over 90% compared to conventional federated learning—providing theoretically grounded, practically viable solutions for privacy-sensitive, low-bandwidth applications, such as lightweight collaborative training of large language models.

Technology Category

Application Category

📝 Abstract
One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing privacy and communication overhead issues associated with multiple rounds of data exchange in traditional Federated Learning (FL). OFL demonstrates the practical potential for integration with future approaches that require collaborative training models, such as large language models (LLMs). However, current OFL methods face two major challenges: data heterogeneity and model heterogeneity, which result in subpar performance compared to conventional FL methods. Worse still, despite numerous studies addressing these limitations, a comprehensive summary is still lacking. To address these gaps, this paper presents a systematic analysis of the challenges faced by OFL and thoroughly reviews the current methods. We also offer an innovative categorization method and analyze the trade-offs of various techniques. Additionally, we discuss the most promising future directions and the technologies that should be integrated into the OFL field. This work aims to provide guidance and insights for future research.
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy in federated learning
Reduces communication overhead in distributed systems
Improves performance in heterogeneous data models
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-shot Federated Learning paradigm
Addresses privacy, communication overhead
Systematic analysis, categorization method
🔎 Similar Papers
No similar papers found.
X
Xiang Liu
National University of Singapore
Zhenheng Tang
Zhenheng Tang
The Hong Kong University of Science and Technology
Machine LearningML SystemsLarge Language ModelPersonal AI
X
Xia Li
Department of Information Technology and Electrical Engineering, ETH Zürich
Y
Yijun Song
Zhejiang University of Finance & Economics Dongfang College
Sijie Ji
Sijie Ji
Schmidt Science Fellow, Caltech & UCLA
Cyber Physical SystemsSensingAIoTMobile HealthWearable Computing
Zemin Liu
Zemin Liu
Zhejiang University
Graph LearningGraph Imbalanced Learning
B
Bo Han
Department of Computer Science, Hong Kong Baptist University
Linshan Jiang
Linshan Jiang
Research Fellow, Institute of Data Science (IDS), NUS
Privacy_preserving_Machine_learningCollaborative Machine LearningEdge-Cloud CollaborationWeb3
J
Jialin Li
National University of Singapore