🤖 AI Summary
To address the feasibility, performance, and energy efficiency of decentralized federated learning (DFL) on resource-constrained edge devices, this work establishes the first real-hardware DFL testbed using Raspberry Pi and Jetson Nano, eliminating reliance on centralized servers while enabling privacy-preserving collaborative training. The approach integrates dynamic power monitoring (via the INA219 sensor), a lightweight P2P communication protocol, and the NEBULA framework to support distributed training across multiple datasets (CIFAR-10 and FEMNIST) on embedded Linux platforms. Experimental results reveal that communication topology density critically affects both convergence behavior and energy efficiency: dense topologies improve model accuracy by 3.2% and reduce per-round energy consumption by 18%. The system demonstrates stable operation with up to 16 nodes and achieves end-to-end latency under 850 ms, validating its practicality for edge-deployable DFL.
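The per-round energy figures above come from sampling instantaneous power during training with a sensor such as the INA219. A minimal sketch of how periodic power samples can be integrated into per-round energy (the sampling interval and readings here are illustrative, not the paper's actual measurements):

```python
from typing import List


def energy_joules(samples_mw: List[float], interval_s: float) -> float:
    """Integrate periodic power samples (in milliwatts, as reported by an
    INA219-style sensor) into energy in joules, using the trapezoidal rule."""
    if len(samples_mw) < 2:
        return 0.0
    # Sum of trapezoid areas between consecutive samples, in mW*s.
    mw_seconds = sum(
        (a + b) / 2.0 for a, b in zip(samples_mw, samples_mw[1:])
    ) * interval_s
    return mw_seconds / 1000.0  # mW*s -> W*s (joules)


# Hypothetical trace: a node drawing a constant 1 W for 2 s consumes 2 J.
print(energy_joules([1000.0, 1000.0, 1000.0], interval_s=1.0))
```

In a real deployment the sample list would be filled by a background thread polling the sensor over I2C while a training round runs, with one accumulator reset at each round boundary.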
📝 Abstract
Federated Learning (FL) enables collaborative model training without sharing raw data, preserving participant privacy. Decentralized FL (DFL) removes the central server, mitigating the single point of failure inherent in the traditional FL paradigm, but introduces deployment challenges on resource-constrained devices. To evaluate real-world applicability, this work designs and deploys a physical testbed of edge devices, including Raspberry Pi and Jetson Nano. The testbed is built on the DFL training platform NEBULA and extends it with a power monitoring module that measures energy consumption during training. Experiments across multiple datasets show that model performance depends on the communication topology, with denser topologies leading to better outcomes in DFL settings.
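The notion of a "denser" topology can be made concrete as graph density, the fraction of possible node pairs that share a direct link. A small sketch comparing two common DFL overlays (node counts and topologies here are illustrative, not the paper's exact configurations):

```python
import itertools


def ring_edges(n: int) -> set:
    """Ring overlay: each node links only to its successor (n edges)."""
    return {(i, (i + 1) % n) for i in range(n)}


def full_edges(n: int) -> set:
    """Fully connected overlay: every pair of nodes linked (n*(n-1)/2 edges)."""
    return set(itertools.combinations(range(n), 2))


def density(edges: set, n: int) -> float:
    """Graph density: edges present divided by the n*(n-1)/2 possible edges."""
    return 2 * len(edges) / (n * (n - 1))


# With 8 nodes, a ring is sparse (density 2/7) and a full mesh is maximal (1.0).
print(density(ring_edges(8), 8), density(full_edges(8), 8))
```

A denser overlay gives each node more neighbors to aggregate with per round, which is consistent with the abstract's observation that denser topologies yield better model performance, at the cost of more per-round communication.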