🤖 AI Summary
In federated learning (FL), aggregated global models often converge to sharp minima, degrading generalization performance. To address this, the authors propose a flatness-aware optimization framework grounded in explicit flatness regularization. The key contributions are: (1) 'MAN' (Minimize Activation Norm) regularization, a theoretically grounded and computationally efficient technique shown to reduce the top eigenvalue of the layer-wise Hessian of each client's loss; and (2) a flatness-constrained optimization objective that uses the largest eigenvalue of the training-loss Hessian as a sharpness surrogate, guiding the model toward flatter minima. The method integrates seamlessly into mainstream FL algorithms, enabling distributed flatness-aware optimization during both local training and server-side aggregation. Extensive experiments across multiple benchmark FL tasks demonstrate that MAN consistently improves generalization, achieving state-of-the-art performance without increasing communication or computational overhead.
📝 Abstract
Federated Learning (FL) is an emerging machine learning framework in which multiple clients, coordinated by a server, collaboratively train a global model by aggregating locally trained models without sharing any client's training data. Recent works have observed that learning in a federated manner may lead the aggregated global model to converge to a 'sharp' minimum, adversely affecting the generalizability of the FL-trained model. In this work, we therefore aim to improve the generalization performance of models trained in a federated setup by introducing a 'flatness'-constrained FL optimization problem, where the flatness constraint is imposed on the top eigenvalue of the Hessian of the training loss. Since each client trains a model on its own local data, we further re-formulate this complex problem in terms of the client loss functions and propose a new, computationally efficient regularization technique, dubbed 'MAN', which Minimizes the Activation Norm of each layer of the client-side models. We also show theoretically that minimizing the activation norm reduces the top eigenvalue of the layer-wise Hessian of the client's loss, which in turn decreases the top eigenvalue of the overall Hessian, ensuring convergence to a flat minimum. Applying our proposed flatness-constrained optimization to existing FL techniques yields significant improvements, thereby establishing a new state-of-the-art.
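To make the regularizer concrete, here is a minimal NumPy sketch of the core idea: the client's training objective becomes the task loss plus a penalty on the norm of each layer's activations. This is an illustrative toy (a two-layer ReLU network with a squared-error loss); the function names `man_regularized_loss`, `forward_with_activation_norms`, and the weighting coefficient `lam` are assumptions for the example, not identifiers from the paper.

```python
import numpy as np

def forward_with_activation_norms(x, weights):
    """Forward pass through a small ReLU MLP, collecting each layer's activations."""
    activations = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU layer
        activations.append(h)
    return h, activations

def man_regularized_loss(x, y, weights, lam=0.01):
    """Client objective: task loss (MSE here) plus a layer-wise activation-norm penalty."""
    out, activations = forward_with_activation_norms(x, weights)
    task_loss = np.mean((out - y) ** 2)
    # Penalty: sum of squared Frobenius norms of the per-layer activations.
    # Per the abstract, shrinking these norms lowers the top eigenvalue of
    # each layer-wise Hessian, pushing the client loss toward a flatter minimum.
    penalty = sum(np.sum(a ** 2) for a in activations)
    return task_loss + lam * penalty
```

In an actual FL pipeline this loss would simply replace the plain task loss in each client's local update step, leaving the communication protocol and server-side aggregation unchanged, which is consistent with the claim of no added communication overhead.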