Fisher-Informed Parameterwise Aggregation for Federated Learning with Heterogeneous Data

📅 2026-01-20
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of global models in federated learning under non-IID data, where uniform parameter aggregation often induces client drift. To mitigate this issue, the authors propose a parameter-level weighted aggregation method based on the Fisher information matrix, introducing Fisher information at the parameter level into federated aggregation for the first time. This approach transcends the limitations of conventional client-level scalar weighting, enabling more precise model fusion. By leveraging low-rank approximations for efficient computation of the Fisher matrix, the method maintains communication and computational efficiency while remaining compatible with advanced client-side optimization algorithms. Experimental results demonstrate consistent and significant improvements over standard averaging across diverse tasks—including nonlinear regression, partial differential equation learning, and image classification—yielding higher model accuracy under data heterogeneity.
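A minimal sketch of the parameter-level weighting the summary describes, assuming a diagonal (empirical) Fisher approximation; the function names and the aggregation rule shown here are illustrative, not taken from the paper:

```python
import numpy as np

def diagonal_fisher(per_sample_grads):
    """Empirical diagonal Fisher: mean of squared per-sample gradients."""
    g = np.asarray(per_sample_grads)        # shape (n_samples, n_params)
    return (g ** 2).mean(axis=0)            # shape (n_params,)

def fisher_weighted_aggregate(client_params, client_fishers, eps=1e-8):
    """Fuse client models with per-parameter Fisher weights.

    Unlike FedAvg's single scalar weight per client, every parameter
    coordinate is weighted by how informative that client's data is
    about it (its diagonal Fisher value), normalized across clients.
    """
    params = np.stack(client_params)        # shape (K, n_params)
    fishers = np.stack(client_fishers)      # shape (K, n_params), non-negative
    weights = fishers / (fishers.sum(axis=0, keepdims=True) + eps)
    return (weights * params).sum(axis=0)   # global model, shape (n_params,)
```

In this sketch a parameter is dominated by whichever client has the most Fisher information about it, so clients whose data barely constrains a given weight contribute little to that coordinate.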

📝 Abstract
Federated learning aggregates model updates from distributed clients, but standard first-order methods such as FedAvg apply the same scalar weight to all parameters from each client. Under non-IID data, these uniformly weighted updates can be strongly misaligned across clients, causing client drift and degrading the global model. Here we propose Fisher-Informed Parameterwise Aggregation (FIPA), a second-order aggregation method that replaces client-level scalar weights with parameter-specific Fisher Information Matrix (FIM) weights, enabling true parameter-level scaling that captures how each client's data uniquely influences different parameters. With low-rank approximation, FIPA remains communication- and computation-efficient. Across nonlinear function regression, PDE learning, and image classification, FIPA consistently improves over averaging-based aggregation, and can be effectively combined with state-of-the-art client-side optimization algorithms to further improve image classification accuracy. These results highlight the benefits of FIPA for federated learning under heterogeneous data distributions.
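The abstract's low-rank approximation can be illustrated as follows. This is our own sketch, not the paper's algorithm: it factors the empirical Fisher F = (1/n) GᵀG (G the matrix of per-sample gradients) through an SVD, so a client can ship a d×r factor instead of a d×d matrix:

```python
import numpy as np

def low_rank_fisher(per_sample_grads, rank):
    """Rank-r factor B (shape d x r) with B @ B.T ~ F = (1/n) G^T G.

    The SVD of G / sqrt(n) = U S V^T gives F = V S^2 V^T directly,
    so keeping the top-r singular directions yields the best rank-r
    approximation while cutting storage/communication from O(d^2)
    to O(d * r).
    """
    G = np.asarray(per_sample_grads)                      # (n, d)
    n = G.shape[0]
    _, s, Vt = np.linalg.svd(G / np.sqrt(n), full_matrices=False)
    r = min(rank, len(s))
    return Vt[:r].T * s[:r]                               # (d, r)
```

When the per-sample gradients span only a few directions, a small rank already reproduces the Fisher matrix exactly, which is the efficiency argument behind using a low-rank form in aggregation.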
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Non-IID Data
Client Drift
Parameter Aggregation
Heterogeneous Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher Information Matrix
Parameterwise Aggregation
Federated Learning
Heterogeneous Data
Second-Order Optimization