Minimizing Layerwise Activation Norm Improves Generalization in Federated Learning

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In federated learning (FL), global models often converge to sharp minima, degrading generalization performance. To address this, we propose a flatness-aware optimization framework grounded in explicit flatness regularization. Our key contributions are: (1) 'MAN', a layer-wise activation-norm regularizer—a theoretically grounded technique that provably reduces the top eigenvalue of the layer-wise Hessian of client loss functions; and (2) a flatness-constrained optimization objective using the largest Hessian eigenvalue as a surrogate for sharpness, guiding the model toward flatter minima. The method integrates seamlessly into mainstream FL algorithms, enabling distributed flatness-aware optimization during both local training and server-side aggregation. Extensive experiments across multiple benchmark FL tasks demonstrate that MAN consistently improves generalization—achieving state-of-the-art performance without increasing communication or computational overhead.
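A minimal sketch of the regularizer described above: the client's task loss is augmented with the sum of each layer's squared activation norm. The network shape, the MSE task loss, and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def forward_with_activations(x, weights):
    """Run a small ReLU MLP, collecting every layer's activations."""
    activations = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU layer
        activations.append(h)
    return h, activations

def regularized_loss(x, y, weights, lam=0.01):
    """Task loss plus a layer-wise activation-norm penalty.

    The penalty sums the squared Frobenius norm of each layer's
    activations; `lam` is a hypothetical regularization weight.
    """
    out, activations = forward_with_activations(x, weights)
    task_loss = np.mean((out - y) ** 2)  # plain MSE as a stand-in task loss
    act_penalty = sum(np.sum(a ** 2) for a in activations)
    return task_loss + lam * act_penalty

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 2))
weights = [rng.normal(size=(4, 6)) * 0.1,
           rng.normal(size=(6, 2)) * 0.1]
loss = regularized_loss(x, y, weights)
```

Because the penalty is non-negative, the regularized loss is always at least the plain task loss; during training it pressures each layer toward smaller activations, which the paper argues shrinks the top eigenvalue of the layer-wise Hessian.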

📝 Abstract
Federated Learning (FL) is an emerging machine learning framework that enables multiple clients (coordinated by a server) to collaboratively train a global model by aggregating the locally trained models without sharing any client's training data. It has been observed in recent works that learning in a federated manner may lead the aggregated global model to converge to a 'sharp minimum' thereby adversely affecting the generalizability of this FL-trained model. Therefore, in this work, we aim to improve the generalization performance of models trained in a federated setup by introducing a 'flatness' constrained FL optimization problem. This flatness constraint is imposed on the top eigenvalue of the Hessian computed from the training loss. As each client trains a model on its local data, we further re-formulate this complex problem utilizing the client loss functions and propose a new computationally efficient regularization technique, dubbed 'MAN,' which Minimizes Activation's Norm of each layer on client-side models. We also theoretically show that minimizing the activation norm reduces the top eigenvalue of the layer-wise Hessian of the client's loss, which in turn decreases the overall Hessian's top eigenvalue, ensuring convergence to a flat minimum. We apply our proposed flatness-constrained optimization to the existing FL techniques and obtain significant improvements, thereby establishing new state-of-the-art.
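The flatness-constrained FL problem described in the abstract can be written, under assumed (standard FedAvg) notation, as:

```latex
\min_{w} \; F(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w)
\quad \text{s.t.} \quad \lambda_{\max}\!\left(\nabla^2 F(w)\right) \le \epsilon ,
```

where $F_k$ is client $k$'s local loss, $n_k$ its sample count out of $n$ total, $\lambda_{\max}(\cdot)$ the top Hessian eigenvalue, and $\epsilon$ an assumed flatness budget. MAN replaces this intractable constraint with the per-client activation-norm penalty, which the paper shows upper-bounds the layer-wise Hessian's top eigenvalue.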
Problem

Research questions and friction points this paper is trying to address.

Aggregated global models in federated learning tend to converge to sharp minima
Sharp minima adversely affect the generalization of the FL-trained model
Imposing a flatness constraint (via the top Hessian eigenvalue) is costly to compute directly in a distributed setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

'MAN' regularizer minimizes each layer's activation norm on client-side models
Provably reduces the top eigenvalue of the layer-wise Hessian, driving convergence to flat minima
Improves generalization of existing FL methods without added communication or computational overhead
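Since MAN operates entirely on the client side, the server can aggregate as usual. A toy sketch of the standard sample-size-weighted FedAvg aggregation step (an assumed baseline, not code from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: sample-size-weighted average of client models.

    `client_weights` is a list of per-client models; each model here is
    just a list of NumPy arrays (one per layer).
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    avg = []
    for layer in range(num_layers):
        acc = sum((n / total) * w[layer]
                  for w, n in zip(client_weights, client_sizes))
        avg.append(acc)
    return avg

# Two toy clients with one-layer "models"; client 2 holds 3x the data.
c1 = [np.array([[1.0, 2.0]])]
c2 = [np.array([[3.0, 4.0]])]
global_model = fedavg([c1, c2], client_sizes=[1, 3])
# → [[2.5, 3.5]], i.e. 0.25*c1 + 0.75*c2
```

Because the activation-norm penalty only changes each client's local objective, this aggregation step is untouched, which is why the method adds no communication overhead.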