Scaling Law Analysis in Federated Learning: How to Select the Optimal Model Size?

📅 2025-11-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the critical problem of model size selection in federated learning (FL), where both data and computational resources are distributed across clients. We establish, for the first time, a PAC-Bayes–based generalization error upper bound for FL that explicitly characterizes the trade-off between model scale and system heterogeneity. Our analysis reveals a negative power-law relationship between the optimal model size and the number of clients $N$ (i.e., $\sim N^{-\alpha}$), demonstrating that federation inherently constrains model capacity. Building on this, we propose a *computation-aware* model selection principle: under a fixed total computational budget, the model size should be dynamically adjusted according to clients’ average compute capability. Extensive experiments across multiple models and datasets validate that increasing client count necessitates reducing model size to improve generalization. This study provides the first theoretically grounded, scalable framework for efficient large-model training at the network edge.

📝 Abstract
The recent success of large language models (LLMs) has sparked a growing interest in training large-scale models. As the model size continues to scale, concerns are growing about the depletion of high-quality, well-curated training data. This has led practitioners to explore training approaches like Federated Learning (FL), which can leverage the abundant data on edge devices while maintaining privacy. However, the decentralization of training datasets in FL introduces challenges to scaling large models, a topic that remains under-explored. This paper fills this gap and provides qualitative insights on generalizing the previous model scaling experience to federated learning scenarios. Specifically, we derive a PAC-Bayes (Probably Approximately Correct Bayesian) upper bound for the generalization error of models trained with stochastic algorithms in federated settings and quantify the impact of distributed training data on the optimal model size by finding the analytic solution of model size that minimizes this bound. Our theoretical results demonstrate that the optimal model size has a negative power law relationship with the number of clients if the total training compute is unchanged. Besides, we also find that switching to FL with the same training compute will inevitably reduce the upper bound of generalization performance that the model can achieve through training, and that estimating the optimal model size in federated scenarios should depend on the average training compute across clients. Furthermore, we also empirically validate the correctness of our results with extensive training runs on different models, network settings, and datasets.
Problem

Research questions and friction points this paper is trying to address.

Determining optimal model size in federated learning scenarios
Analyzing how distributed training data impacts model scaling
Quantifying generalization performance bounds in federated settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

PAC-Bayes bound for FL generalization error
Analytic solution for optimal model size
Negative power law with client count
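The paper's headline result is that, at fixed total training compute, the optimal model size follows a negative power law in the number of clients, $m^* \sim N^{-\alpha}$. A minimal sketch of that relationship, assuming hypothetical placeholder values for the scale constant `k` and exponent `alpha` (the paper derives these from its PAC-Bayes bound; they are not given here):

```python
def optimal_model_size(num_clients: int, k: float = 1e9, alpha: float = 0.5) -> float:
    """Toy estimate of the optimal model size under a fixed compute budget.

    Implements m* = k * N^(-alpha): as the number of federated clients N
    grows, the optimal model size shrinks. k and alpha are illustrative
    placeholders, not values from the paper.
    """
    return k * num_clients ** (-alpha)

# Quadrupling the client count with alpha = 0.5 halves the optimal size.
sizes = [optimal_model_size(n) for n in (1, 4, 16, 64)]
print(sizes)  # strictly decreasing: 1e9, 5e8, 2.5e8, 1.25e8
```

This captures only the qualitative claim validated in the experiments: more clients at the same compute budget call for a smaller model to improve generalization.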
Xuanyu Chen
The University of Sydney
Nan Yang
The University of Sydney
Shuai Wang
Northwestern Polytechnical University
Dong Yuan
The University of Sydney
cloud and edge computing, AI, deep learning, internet of things, workflow