Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation

📅 2025-06-13
🤖 AI Summary
To address rigid LoRA parameter allocation and inefficient local training when fine-tuning large models on resource-heterogeneous clients in federated learning, this paper proposes Fed-HeLLo, a framework that dynamically allocates trainable LoRA layers. It introduces a heterogeneous LoRA selection strategy that jointly incorporates Fisher information (gradient-based importance) and intrinsic geometric structural importance (e.g., triangular or bottleneck patterns), further enhanced by randomized geometric assignment for robustness. The method lets each client activate locally trainable parameters according to both its computational capacity and layer-wise importance, enabling resource-aware, efficient fine-tuning under Non-IID data. Extensive experiments across five datasets and three data-distribution settings, from IID to highly skewed Non-IID, demonstrate an average accuracy improvement of 3.2–7.8% under identical communication and computation budgets, significantly outperforming existing federated LoRA approaches.
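The Fisher-information-based allocation described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: it approximates per-layer importance by squared gradient norms (a common diagonal-FIM proxy) and lets a client train only its top-`budget` LoRA layers; the function names and the scoring proxy are my assumptions.

```python
import numpy as np

def allocate_layers(scores, budget):
    """Select the `budget` most important LoRA layers for one client.

    `scores` is a per-layer importance estimate, e.g. the mean squared
    gradient norm of each LoRA layer (a diagonal Fisher-information proxy).
    Returns a boolean mask over layers: True = locally trainable.
    """
    order = np.argsort(scores)[::-1]      # layer indices, descending importance
    chosen = np.sort(order[:budget])      # keep the top-`budget`, in layer order
    mask = np.zeros(len(scores), dtype=bool)
    mask[chosen] = True
    return mask

# Example: 12 LoRA layers, a client with capacity to train 4 of them.
scores = np.array([0.9, 0.1, 0.5, 0.7, 0.2, 0.8,
                   0.3, 0.4, 0.6, 0.05, 0.15, 0.25])
mask = allocate_layers(scores, budget=4)
print(mask.nonzero()[0])  # -> [0 3 5 8]
```

Clients with larger capacities would simply receive a larger `budget`, so all clients contribute updates to the globally important layers they can afford to train.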

📝 Abstract
Federated Learning has recently been utilized to collaboratively fine-tune foundation models (FMs) across multiple clients. Notably, federated low-rank adaptation (LoRA)-based fine-tuning methods have gained attention, as they allow clients to fine-tune FMs locally with a small portion of trainable parameters. However, most existing methods do not account for the heterogeneous resources of clients or lack an effective local training strategy to maximize global fine-tuning performance under limited resources. In this work, we propose Fed-HeLLo, a novel federated LoRA-based fine-tuning framework that enables clients to collaboratively fine-tune an FM with different local trainable LoRA layers. To ensure its effectiveness, we develop several heterogeneous LoRA allocation (HLA) strategies that adaptively allocate local trainable LoRA layers based on clients' resource capabilities and layer importance. Specifically, based on dynamic layer importance, we design a Fisher Information Matrix score-based HLA that leverages dynamic gradient norm information. To better stabilize training, we consider the intrinsic importance of LoRA layers and design a Geometrically-Defined HLA (GD-HLA) strategy, which shapes the collective distribution of trainable LoRA layers into specific geometric patterns, such as Triangle, Inverted Triangle, Bottleneck, and Uniform. Moreover, we extend GD-HLA into a randomized version, named Randomized Geometrically-Defined HLA, for enhanced model accuracy through randomness. By co-designing the proposed HLA strategies, we incorporate both dynamic and intrinsic layer importance into our HLA design. We evaluate our approach on five datasets under diverse federated LoRA fine-tuning settings, covering three levels of data distribution from IID to extreme Non-IID. Results show that Fed-HeLLo with HLA strategies is both effective and efficient.
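The geometric patterns named in the abstract (Triangle, Inverted Triangle, Bottleneck, Uniform) can be illustrated as layer-selection masks. This is a minimal sketch under my own reading of the pattern names; the exact pattern definitions and the `gd_hla_mask` function are assumptions, not the authors' code.

```python
import numpy as np

def gd_hla_mask(num_layers, budget, pattern):
    """Boolean mask choosing `budget` trainable LoRA layers out of
    `num_layers`, shaped by a named geometric pattern (illustrative)."""
    if pattern == "triangle":              # concentrate on shallow layers
        idx = np.arange(budget)
    elif pattern == "inverted_triangle":   # concentrate on deep layers
        idx = np.arange(num_layers - budget, num_layers)
    elif pattern == "bottleneck":          # both ends, skip the middle
        half = budget // 2
        idx = np.concatenate([np.arange(half),
                              np.arange(num_layers - (budget - half), num_layers)])
    elif pattern == "uniform":             # spread evenly across depth
        idx = np.linspace(0, num_layers - 1, budget).round().astype(int)
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    mask = np.zeros(num_layers, dtype=bool)
    mask[idx] = True
    return mask

# 12 LoRA layers, client budget of 4 trainable layers.
print(gd_hla_mask(12, 4, "bottleneck").nonzero()[0])  # -> [ 0  1 10 11]
```

A randomized variant, in the spirit of the paper's Randomized GD-HLA, could perturb which layers are chosen within each region (e.g. sampling near the pattern's preferred depths) rather than picking them deterministically.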
Problem

Research questions and friction points this paper is trying to address.

Addresses heterogeneous client resources in federated learning
Optimizes LoRA layer allocation for global fine-tuning performance
Combines dynamic and intrinsic layer importance in HLA strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous LoRA allocation based on resource capabilities
Fisher Information Matrix score-based dynamic allocation
Geometrically-Defined HLA with randomized patterns
Zikai Zhang
Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV 89557, USA
Ping Liu
Assistant Professor, Krannert School of Management, Purdue University
Contract theory, Game theory, Macro finance, Real Options, Dynamic and Empirical corporate finance
Jiahao Xu
Nanyang Technological University
LLM Efficient Reasoning, NMT, Audio Translation, Sentence Embeddings
Rui Hu
Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV 89557, USA