HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation in existing LoRA-based federated fine-tuning approaches, which overlook the latent hierarchical structure among clients, thereby hindering effective shared representation learning and generalization to unseen clients. To overcome this, we propose HiLoRA, a novel framework that deploys low-rank adapters at three hierarchical levels—root, cluster, and leaf—to jointly model global, subgroup-specific, and client-specific knowledge. HiLoRA infers implicit client groupings through adaptive clustering based on LoRA subspace similarity and enforces cross-level orthogonality constraints alongside a cascaded optimization strategy to align and share knowledge across the hierarchy. Evaluated on Vision Transformer backbones using CIFAR-100 and DomainNet benchmarks, HiLoRA demonstrates substantial improvements in both personalized performance and out-of-distribution generalization.
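The three-tier adapter design and the cross-level orthogonality constraint described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the shapes, the additive composition `W0 + Σ B_t A_t`, and the pairwise Frobenius-norm penalty on row subspaces are assumptions chosen to match the summary's description, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden dimension and per-tier LoRA rank (illustrative sizes)

# Frozen pretrained weight plus one low-rank adapter (B, A) per tier.
W0 = rng.standard_normal((d, d))
adapters = {tier: (rng.standard_normal((d, r)),   # B: d x r
                   rng.standard_normal((r, d)))   # A: r x d
            for tier in ("root", "cluster", "leaf")}

def effective_weight(W0, adapters):
    """Compose tiers additively: W = W0 + sum over tiers of B_t @ A_t."""
    return W0 + sum(B @ A for B, A in adapters.values())

def cross_tier_orthogonality(adapters):
    """Penalty that discourages overlap between tiers' update subspaces:
    sum over tier pairs (s, t) of ||A_s @ A_t^T||_F^2 (zero iff the
    row spaces of the A matrices are mutually orthogonal)."""
    tiers = list(adapters)
    loss = 0.0
    for i in range(len(tiers)):
        for j in range(i + 1, len(tiers)):
            A_i = adapters[tiers[i]][1]
            A_j = adapters[tiers[j]][1]
            loss += np.linalg.norm(A_i @ A_j.T, "fro") ** 2
    return loss

W = effective_weight(W0, adapters)
penalty = cross_tier_orthogonality(adapters)
```

In this sketch the root adapter would be shared by all clients, the cluster adapter by clients in the same inferred group, and the leaf adapter kept local; the penalty would be added to each tier's training loss so the tiers capture complementary directions.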

📝 Abstract
Vision Transformers (ViTs) have been widely adopted in vision tasks due to their strong transferability. In Federated Learning (FL), where full fine-tuning is communication-heavy, Low-Rank Adaptation (LoRA) provides an efficient, communication-friendly way to adapt ViTs. However, existing LoRA-based federated tuning methods overlook latent client structures in real-world settings, limiting shared representation learning and hindering effective adaptation to unseen clients. To address this, we propose HiLoRA, a hierarchical LoRA framework that places adapters at three levels (root, cluster, and leaf), designed to capture global, subgroup-specific, and client-specific knowledge, respectively. Through cross-tier orthogonality and cascaded optimization, HiLoRA separates the tiers' update subspaces and aligns each tier with its residual personalized objective. In particular, we develop a LoRA-Subspace Adaptive Clustering mechanism that infers latent client groups via subspace-similarity analysis, thereby facilitating knowledge sharing across structurally aligned clients. Theoretically, we establish a tier-wise generalization analysis that supports HiLoRA's design. Experiments on ViT backbones with CIFAR-100 and DomainNet demonstrate consistent improvements in both personalization and generalization.
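The subspace-similarity clustering the abstract describes can be sketched as follows. The concrete choices here are assumptions, not the paper's algorithm: similarity is taken as the mean cosine of the principal angles between the column spaces of two clients' LoRA `B` matrices, and grouping uses a simple greedy threshold rule; `subspace_similarity` and `greedy_cluster` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 16, 4  # illustrative hidden dimension and LoRA rank

def make_client_B(base, noise):
    # Simulate clients: members of one latent group share a base
    # subspace, perturbed by client-specific noise.
    return base + noise * rng.standard_normal((d, r))

base_a = rng.standard_normal((d, r))
base_b = rng.standard_normal((d, r))
clients = [make_client_B(base_a, 0.1), make_client_B(base_a, 0.1),
           make_client_B(base_b, 0.1), make_client_B(base_b, 0.1)]

def subspace_similarity(B1, B2):
    """Mean cosine of the principal angles between col(B1) and col(B2).
    Orthonormalize each basis via QR; the singular values of Q1^T @ Q2
    are exactly those cosines."""
    Q1, _ = np.linalg.qr(B1)
    Q2, _ = np.linalg.qr(B2)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return float(np.mean(np.clip(s, 0.0, 1.0)))

def greedy_cluster(Bs, threshold=0.8):
    """Assign each client to the first cluster whose representative
    subspace is similar enough, else open a new cluster."""
    labels, reps = [], []
    for B in Bs:
        for k, rep in enumerate(reps):
            if subspace_similarity(B, rep) >= threshold:
                labels.append(k)
                break
        else:
            labels.append(len(reps))
            reps.append(B)
    return labels

labels = greedy_cluster(clients)
```

With the simulated clients above, the two groups sharing a base subspace end up in the same cluster while the groups stay separate; a federated server could run such a step on uploaded adapters to decide which clients share a cluster-level adapter.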
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Low-Rank Adaptation
Vision Transformers
Personalization
Client Heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Low-Rank Adaptation
Personalized Federated Learning
Vision Transformers
Subspace Adaptive Clustering
Cross-tier Orthogonality
Zihao Peng
Beijing Normal University
Nan Zou
Beijing Normal University
Jiandian Zeng
Beijing Normal University
Knowledge Engineering · Affective Computing · Edge Intelligence
Guo Li
Beijing Normal University
Ke Chen
Beijing Normal University
Boyuan Li
Zhengzhou University
Tian Wang
Beijing Normal University
Edge Computing · Internet of Things · Sensor Cloud