Decentralized Federated Learning With Energy Harvesting Devices

📅 2026-02-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the rapid battery depletion of edge devices participating in decentralized federated learning, caused by the high energy consumption of local training and model exchange. To tackle this issue, the study introduces energy harvesting into the decentralized setting and, to accelerate model convergence, proposes a fully decentralized policy iteration algorithm that jointly optimizes device scheduling and power control using only local information from two-hop neighborhoods. The problem is formulated as a multi-agent Markov decision process that integrates device-to-device (D2D) communication with energy harvesting. Theoretical analysis establishes the algorithm's convergence and asymptotic optimality, and experiments on real-world datasets show that the proposed scheme improves learning efficiency and system sustainability while substantially reducing communication overhead and computational complexity.

πŸ“ Abstract
Decentralized federated learning (DFL) enables edge devices to collaboratively train models through local training and fully decentralized device-to-device (D2D) model exchanges. However, these energy-intensive operations often rapidly deplete limited device batteries, reducing their operational lifetime and degrading learning performance. To address this limitation, we apply energy harvesting techniques to DFL systems, allowing edge devices to extract ambient energy and operate sustainably. We first derive the convergence bound for wireless DFL with energy harvesting, showing that convergence is influenced by partial device participation and transmission packet drops, both of which further depend on the available energy supply. To accelerate convergence, we formulate a joint device scheduling and power control problem and model it as a multi-agent Markov decision process (MDP). Traditional MDP algorithms (e.g., value or policy iteration) require a centralized coordinator with access to all device states and exhibit exponential complexity in the number of devices, making them impractical for large-scale decentralized networks. To overcome these challenges, we propose a fully decentralized policy iteration algorithm that leverages only local state information from two-hop neighboring devices, thereby substantially reducing both communication overhead and computational complexity. We further provide a theoretical analysis showing that the proposed decentralized algorithm achieves asymptotic optimality. Finally, comprehensive numerical experiments on real-world datasets validate the theoretical results and corroborate the effectiveness of the proposed algorithm.
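To make the two-hop idea concrete, here is a minimal, self-contained sketch of decentralized policy evaluation on a line topology. It is not the paper's algorithm: the reward function, battery states, and topology are invented for illustration. The only property it demonstrates is the key structural one described in the abstract: each device bootstraps its local value estimate using only state information from devices within two hops, with no centralized coordinator.

```python
import numpy as np

# Illustrative sketch only (hypothetical reward and topology, not the
# paper's method): each device keeps a local value estimate and updates
# it using only the states of devices within two hops.

rng = np.random.default_rng(0)

N = 6  # number of edge devices
# Line topology: device i talks to i-1 and i+1 over D2D links.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}

def two_hop(i):
    """Indices of devices within two hops of device i (including i)."""
    hop1 = set(adj[i])
    hop2 = {j for k in hop1 for j in adj[k]}
    return sorted({i} | hop1 | hop2)

battery = rng.uniform(0.2, 1.0, size=N)  # normalized harvested-energy levels
value = np.zeros(N)                      # per-device local value estimates
gamma = 0.9                              # discount factor

def local_reward(i):
    # Hypothetical stand-in reward: participation is more useful when the
    # two-hop neighborhood has energy to spare, minus a fixed transmit cost.
    neigh = two_hop(i)
    return battery[neigh].mean() - 0.1

for _ in range(50):  # decentralized policy-evaluation sweeps
    new_value = value.copy()
    for i in range(N):
        neigh = two_hop(i)
        # Device i bootstraps only from its two-hop neighborhood's values;
        # no device ever reads global state.
        new_value[i] = local_reward(i) + gamma * value[neigh].mean()
    value = new_value

print(np.round(value, 3))
```

Because gamma < 1 and each update averages neighboring values, the sweeps form a contraction and the local estimates settle to a fixed point; a full policy iteration scheme would alternate such evaluation sweeps with local policy improvement over the scheduling and power-control actions.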
Problem

Research questions and friction points this paper is trying to address.

Decentralized Federated Learning
Energy Harvesting
Device Participation
Convergence
Resource Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Federated Learning
Energy Harvesting
Multi-agent MDP
Decentralized Policy Iteration
Convergence Analysis