Heterogeneous Federated Learning Systems for Time-Series Power Consumption Prediction with Multi-Head Embedding Mechanism

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data privacy and statistical heterogeneity in multi-client electricity consumption time-series forecasting, this paper proposes a novel multi-head heterogeneous federated learning framework. The method introduces a multi-head architecture in which independent head networks act as carriers for federated learning, incorporating two-dimensional head-network embedding and a similarity-based dynamic source-network selection mechanism, jointly optimized with federated knowledge distillation and heterogeneity-aware adaptation strategies to enable efficient cross-device knowledge transfer. Compared against FedAvg and state-of-the-art baselines, the proposed approach reduces prediction error by 24.9%–94.1%. Ablation studies confirm that the two-dimensional embedding and dynamic source-network selection are the primary drivers of the performance gains. This work establishes a scalable, high-accuracy federated learning paradigm tailored to privacy-sensitive, device-heterogeneous time-series forecasting tasks.

📝 Abstract
Time-series prediction is increasingly popular in a variety of applications, such as smart factories and smart transportation. Researchers have used various techniques to predict power consumption, but existing models lack discussion of collaborative learning and privacy issues among multiple clients. To address these issues, we propose Multi-Head Heterogeneous Federated Learning (MHHFL) systems that consist of multiple head networks, which independently act as carriers for federated learning. In the federated period, each head network is embedded into a 2-dimensional vector and shared with the centralized source pool. MHHFL then selects appropriate source networks and blends them with the head networks for knowledge transfer in federated learning. The experimental results show that the proposed MHHFL systems significantly outperform the benchmark and state-of-the-art systems, reducing prediction error by 24.9% to 94.1%. The ablation studies demonstrate the effectiveness of the proposed mechanisms in MHHFL (the head-network embedding and selection mechanisms), which significantly outperform traditional federated averaging and random transfer.
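The abstract describes a three-step mechanism: embed each head network into a 2-dimensional vector, select source networks from the pool by similarity, and blend the selected heads into the local head for knowledge transfer. The paper's exact embedding map, similarity metric, and blending rule are not given on this page, so the sketch below substitutes plausible stand-ins (a mean/std embedding, Euclidean distance, and a convex parameter combination); all three are assumptions for illustration only.

```python
import numpy as np

def embed_head(weights):
    """Hypothetical 2-D embedding of a head network: summarize its
    flattened parameters by their mean and standard deviation."""
    flat = np.concatenate([w.ravel() for w in weights])
    return np.array([flat.mean(), flat.std()])

def select_sources(target_emb, pool_embs, k=2):
    """Similarity-based dynamic selection: return indices of the k pool
    embeddings closest to the target head (Euclidean distance as a
    stand-in metric)."""
    dists = [np.linalg.norm(target_emb - e) for e in pool_embs]
    return np.argsort(dists)[:k]

def blend_heads(target, sources, alpha=0.5):
    """Knowledge transfer by blending: convex combination of the target
    head's parameters with the mean of the selected source heads."""
    src_mean = [np.mean([s[i] for s in sources], axis=0)
                for i in range(len(target))]
    return [alpha * t + (1 - alpha) * m for t, m in zip(target, src_mean)]

# Toy example: four single-layer "heads" with constant weights 0..3.
heads = [[np.full((3, 3), float(i))] for i in range(4)]
embs = [embed_head(h) for h in heads]
chosen = select_sources(embs[0], embs[1:], k=2)   # nearest pool heads
blended = blend_heads(heads[0], [heads[1:][i] for i in chosen])
```

With constant-weight toy heads, the embedding reduces to (mean, 0), so the two nearest pool heads to head 0 are heads 1 and 2, and the blend moves head 0's weights halfway toward their average.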
Problem

Research questions and friction points this paper is trying to address.

Privacy Preservation
Multi-Party Learning
Electricity Usage Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

MHHFL
Multi-head Embedding
Privacy-preserving Learning
Jia-Hao Syu
Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Jerry Chun-Wei Lin
Department of Distributed Systems and Informatic Devices, Silesian University of Technology, Poland
Gautam Srivastava
Brandon University
Cryptography, Security and Privacy, Blockchain Technology, Data Mining, Internet of Things
Unil Yun
Sejong University
Data Mining, Pattern Analysis, Data Analytics, Big Data Processing, Data Management