Federated Learning of Binary Neural Networks: Enabling Low-Cost Inference

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying conventional deep neural networks on resource-constrained edge devices in federated learning settings, where post-training binarization often incurs significant accuracy degradation. To overcome this limitation, the authors propose FedBNN, a framework that trains binary neural networks end-to-end during local client updates. FedBNN introduces, for the first time in federated learning, a rotation-aware weight update mechanism and encodes each weight as a single-bit ±1 value, substantially reducing memory footprint and computational overhead during inference. Experimental results demonstrate that FedBNN achieves considerable reductions in FLOPs and storage requirements across multiple benchmark datasets while maintaining accuracy comparable to full-precision models, thereby balancing privacy preservation with deployment efficiency on edge devices.

📝 Abstract
Federated Learning (FL) preserves privacy by distributing training across devices. However, running DNN inference on low-powered edge devices is computationally intensive. Edge deployment demands models that simultaneously minimize memory footprint and computational cost, requirements that conventional DNNs fail to meet by exceeding resource limits. Traditional post-training binarization reduces model size but suffers severe accuracy loss due to quantization error. To address these challenges, we propose FedBNN, a rotation-aware binary neural network framework that learns binary representations directly during local training. By encoding each weight as a single bit $\{+1, -1\}$ instead of a $32$-bit float, FedBNN shrinks the model footprint, significantly reducing inference-time FLOPs and memory requirements compared to federated methods that use real-valued models. Evaluations across multiple benchmark datasets demonstrate that FedBNN significantly reduces resource consumption while performing similarly to existing federated methods with real-valued models.
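The core idea of encoding each weight as a single bit with a real-valued scale can be sketched as follows. This is an illustrative example of standard sign-based weight binarization (with a per-tensor scaling factor, as in common BNN formulations), not the paper's actual rotation-aware FedBNN procedure; the function name is hypothetical.

```python
import numpy as np

def binarize_weights(w):
    """Sign-based binarization with a per-tensor scaling factor.

    Encodes each weight as +1/-1 and keeps one real-valued scale
    (the mean absolute weight), a common trick in binary neural
    networks to reduce quantization error. Illustrative sketch only,
    not the FedBNN update rule.
    """
    alpha = np.abs(w).mean()          # real-valued scale factor
    b = np.where(w >= 0, 1.0, -1.0)   # single-bit {+1, -1} codes
    return alpha, b

# A tensor of n 32-bit floats needs 32*n bits; the binarized version
# needs n bits plus one float, roughly a 32x storage reduction.
w = np.array([0.4, -0.2, 0.1, -0.7])
alpha, b = binarize_weights(w)
print(alpha, b)  # 0.35 [ 1. -1.  1. -1.]
```

At inference time, multiplications against ±1 codes reduce to sign flips (or XNOR/popcount operations in bit-packed form), which is where the FLOP savings claimed in the abstract come from.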
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Binary Neural Networks
Edge Inference
Model Compression
Resource Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Binary Neural Networks
Model Compression
Edge Inference
Rotation-aware Quantization