Roughness-Informed Federated Learning

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses client drift and poor convergence in federated learning under non-IID data distributions by proposing RI-FedAvg, a novel algorithm that incorporates the Roughness Index (RI), a measure originally developed for characterizing high-dimensional loss landscapes, into the federated optimization framework. RI-FedAvg uses the RI to quantify the volatility of each client's local loss function and constructs an adaptive regularization term that dynamically constrains local updates, thereby mitigating client drift. Theoretical analysis establishes the convergence of RI-FedAvg in non-convex settings, while empirical evaluations on MNIST, CIFAR-10, and CIFAR-100 demonstrate superior performance over state-of-the-art baselines such as FedAvg, FedProx, FedDyn, and SCAFFOLD, achieving both higher accuracy and faster convergence.

๐Ÿ“ Abstract
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy, but in non-independent and identically distributed (non-IID) settings it suffers from client drift, which impairs convergence. We propose RI-FedAvg, a novel FL algorithm that mitigates client drift by incorporating a Roughness Index (RI)-based regularization term into the local objective, adaptively penalizing updates according to the fluctuations of each client's local loss landscape. The RI quantifies the roughness of high-dimensional loss functions, enabling robust optimization in heterogeneous settings. We provide a rigorous convergence analysis for non-convex objectives, establishing that RI-FedAvg converges to a stationary point under standard assumptions. Extensive experiments on MNIST, CIFAR-10, and CIFAR-100 demonstrate that RI-FedAvg outperforms state-of-the-art baselines, including FedAvg, FedProx, FedDyn, SCAFFOLD, and DP-FedAvg, achieving higher accuracy and faster convergence in non-IID scenarios. Our results highlight RI-FedAvg's potential to enhance the robustness and efficiency of federated learning in practical, heterogeneous environments.
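The abstract describes the core mechanism as an RI-scaled regularization term added to each client's local objective. The summary does not reproduce the paper's formulas, so the Python sketch below is purely illustrative: the roughness proxy (loss variability under random perturbations of the weights), the proximal-style penalty, and the coefficient `mu` are all assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def roughness_index(loss_fn, w, radius=0.1, n_samples=8, rng=None):
    """Illustrative proxy for a Roughness Index: standard deviation of
    the loss change under random perturbations of w. The paper's exact
    RI definition may differ from this sketch."""
    rng = np.random.default_rng(0) if rng is None else rng
    base = loss_fn(w)
    deltas = [loss_fn(w + radius * rng.standard_normal(w.shape)) - base
              for _ in range(n_samples)]
    return float(np.std(deltas))

def local_objective(loss_fn, w, w_global, mu=0.1):
    """Hypothetical RI-weighted local objective: a FedProx-style
    proximal term whose strength scales with the roughness of the
    local loss landscape around the global model."""
    ri = roughness_index(loss_fn, w_global)
    prox = 0.5 * mu * ri * np.sum((w - w_global) ** 2)
    return loss_fn(w) + prox
```

On a rough (high-RI) local landscape the proximal term grows, pulling local updates toward the global model and limiting drift; on a smooth landscape the penalty is small and local training proceeds largely unconstrained.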
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
non-IID
client drift
convergence
heterogeneous settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Roughness Index
Federated Learning
Client Drift
Non-IID
Adaptive Regularization