OmniFed: A Modular Framework for Configurable Federated Learning from Edge to HPC

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the inflexibility of deploying federated learning (FL) in heterogeneous environments such as edge and high-performance computing (HPC), and the inherent trade-off between privacy preservation and system performance, this paper proposes OmniFed, a modular, configurable FL framework. Its decoupled architecture separates configuration management, orchestration, communication, and training logic. It natively supports diverse network topologies, the coexistence of mixed communication protocols within a single deployment, pluggable privacy-enhancing mechanisms (differential privacy, homomorphic encryption, and secure aggregation), and model compression strategies. A unified extension interface enables dual-mode customization: configuration-driven setup and code-level override. Experiments across multiple models, algorithms, and deployment scenarios demonstrate the framework's efficiency and cross-environment compatibility. It significantly reduces deployment complexity while improving flexibility, scalability, and the joint handling of privacy guarantees and computational performance.
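The dual-mode customization described above can be illustrated with a small sketch. All names and interfaces below are hypothetical, invented for illustration, and are not OmniFed's actual API: a declarative config selects a built-in component (configuration-driven setup), while subclassing replaces just one piece of logic (code-level override).

```python
class Aggregator:
    """Built-in aggregation strategy: plain federated averaging."""
    def aggregate(self, updates):
        n = len(updates)
        return [sum(vals) / n for vals in zip(*updates)]

class TrimmedMeanAggregator(Aggregator):
    """Code-level override: drop the min and max client value per coordinate
    before averaging, leaving the rest of the stack untouched."""
    def aggregate(self, updates):
        out = []
        for vals in zip(*updates):
            trimmed = sorted(vals)[1:-1] or list(vals)
            out.append(sum(trimmed) / len(trimmed))
        return out

# Configuration-driven setup: a registry maps config strings to components.
REGISTRY = {"fedavg": Aggregator, "trimmed_mean": TrimmedMeanAggregator}

def build_aggregator(config):
    return REGISTRY[config["aggregator"]]()

# Three client updates, each a 2-parameter vector; the outlier client
# ([100.0, 200.0]) is trimmed away by the overridden strategy.
agg = build_aggregator({"aggregator": "trimmed_mean"})
print(agg.aggregate([[1.0, 2.0], [3.0, 4.0], [100.0, 200.0]]))  # [3.0, 4.0]
```

The point of the pattern is that swapping `"fedavg"` for `"trimmed_mean"` in the config changes behavior without touching orchestration or communication code, which is the kind of separation of concerns the framework advertises.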

📝 Abstract
Federated Learning (FL) is critical for edge and High Performance Computing (HPC) where data is not centralized and privacy is crucial. We present OmniFed, a modular framework designed around decoupling and clear separation of concerns for configuration, orchestration, communication, and training logic. Its architecture supports configuration-driven prototyping and code-level override-what-you-need customization. We also support different topologies, mixed communication protocols within a single deployment, and popular training algorithms. It also offers optional privacy mechanisms including Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Aggregation (SA), as well as compression strategies. These capabilities are exposed through well-defined extension points, allowing users to customize topology and orchestration, learning logic, and privacy/compression plugins, all while preserving the integrity of the core system. We evaluate multiple models and algorithms to measure various performance metrics. By unifying topology configuration, mixed-protocol communication, and pluggable modules in one stack, OmniFed streamlines FL deployment across heterogeneous environments. The GitHub repository is available at https://github.com/at-aaims/OmniFed.
Problem

Research questions and friction points this paper is trying to address.

Developing a modular framework for federated learning across edge and HPC environments
Supporting configurable topologies, mixed communication protocols, and privacy mechanisms
Enabling customization of training logic while preserving core system integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework with decoupled architecture
Supports mixed communication protocols and topologies
Pluggable privacy mechanisms and compression strategies
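As one concrete example of a pluggable privacy mechanism, client-side differential privacy is commonly implemented by clipping each model update to a fixed L2 norm and adding Gaussian noise before transmission. The sketch below is a generic Gaussian-mechanism illustration, not OmniFed's implementation; the function name and parameters are invented for this example.

```python
import math
import random

def dp_sanitize(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip `update` to L2 norm `clip_norm`, then add Gaussian noise.

    Generic Gaussian-mechanism sketch (hypothetical helper, not tied to
    any framework). Clipping bounds each client's influence; the noise
    masks individual contributions.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    return [v + rng.gauss(0.0, noise_std) for v in clipped]

# An update with L2 norm 5.0 is scaled down to norm 1.0 before noising.
noisy = dp_sanitize([3.0, 4.0])
```

In a pluggable design, such a sanitizer would sit between local training and the communication layer, so that swapping DP for HE or SA changes only the plugin, not the training loop.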