GFPL: Generative Federated Prototype Learning for Resource-Constrained and Data-Imbalanced Vision Task

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a generative federated prototype learning framework to address two key challenges in federated learning: model bias toward majority classes caused by data imbalance, and high communication overhead from transmitting high-dimensional model parameters. The approach constructs class-level feature prototypes using Gaussian mixture models and aggregates semantically similar knowledge across clients via the Bhattacharyya distance. To mitigate inter-client distributional heterogeneity, it generates synthetic features from the fused prototypes to augment local representations. Local training is optimized with a dual-classifier architecture and a hybrid loss combining Dot Regression and Cross-Entropy. Experiments show that the method improves accuracy by 3.6% under imbalanced settings while significantly reducing communication costs.
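The prototype construction and aggregation steps described above can be sketched roughly as follows. This is an illustrative approximation, not the paper's implementation: each class prototype is modeled here as a single diagonal Gaussian (the paper uses a Gaussian mixture model), and the `threshold` for fusing semantically similar prototypes is a hypothetical parameter.

```python
import numpy as np

def class_prototype(features):
    """Summarize a client's class-wise features as a diagonal Gaussian (mean, variance)."""
    return features.mean(axis=0), features.var(axis=0) + 1e-6

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal Gaussians."""
    var = 0.5 * (var1 + var2)
    mean_term = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    cov_term = 0.5 * np.sum(np.log(var / np.sqrt(var1 * var2)))
    return mean_term + cov_term

def fuse(prototypes, threshold=0.5):
    """Greedily group prototypes whose Bhattacharyya distance to a group's
    representative falls below `threshold`, then average each group."""
    groups = []
    for mu, var in prototypes:
        for g in groups:
            if bhattacharyya_diag(mu, var, g[0][0], g[0][1]) < threshold:
                g.append((mu, var))
                break
        else:
            groups.append([(mu, var)])
    return [(np.mean([m for m, _ in g], axis=0),
             np.mean([v for _, v in g], axis=0)) for g in groups]
```

In this sketch, two clients whose class-wise feature statistics nearly coincide end up sharing one fused prototype, while a distant prototype stays separate.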

📝 Abstract
Federated learning (FL) facilitates the secure utilization of decentralized images, advancing applications in medical image recognition and autonomous driving. However, conventional FL faces two critical challenges in real-world deployment: ineffective knowledge fusion caused by model updates biased toward majority-class features, and prohibitive communication overhead due to frequent transmissions of high-dimensional model parameters. Inspired by the human brain's efficiency in knowledge integration, we propose a novel Generative Federated Prototype Learning (GFPL) framework to address these issues. Within this framework, a prototype generation method based on Gaussian Mixture Model (GMM) captures the statistical information of class-wise features, while a prototype aggregation strategy using Bhattacharyya distance effectively fuses semantically similar knowledge across clients. In addition, these fused prototypes are leveraged to generate pseudo-features, thereby mitigating feature distribution imbalance across clients. To further enhance feature alignment during local training, we devise a dual-classifier architecture, optimized via a hybrid loss combining Dot Regression and Cross-Entropy. Extensive experiments on benchmarks show that GFPL improves model accuracy by 3.6% under imbalanced data settings while maintaining low communication cost.
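The dual-classifier objective from the abstract can be illustrated with a minimal NumPy sketch. The exact loss formulation is not given on this page; the Dot Regression term below follows the common form ½(w_y·h − 1)² on unit-normalized vectors, and the mixing weight `lam` and the two classifier heads `w_dr`, `w_ce` are illustrative assumptions.

```python
import numpy as np

def dot_regression_loss(feat, weights, label):
    """Dot Regression: pull the normalized feature toward the normalized
    target-class weight so their inner product reaches 1."""
    f = feat / np.linalg.norm(feat)
    w = weights[label] / np.linalg.norm(weights[label])
    return 0.5 * (np.dot(w, f) - 1.0) ** 2

def cross_entropy_loss(logits, label):
    """Standard cross-entropy with the log-sum-exp trick for stability."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def hybrid_loss(feat, w_dr, w_ce, label, lam=0.5):
    """Hybrid objective: one head trained by Dot Regression, the other by CE."""
    logits = w_ce @ feat
    return (lam * dot_regression_loss(feat, w_dr, label)
            + (1.0 - lam) * cross_entropy_loss(logits, label))
```

A feature perfectly aligned with its class weight drives the Dot Regression term to zero, while the Cross-Entropy head still shapes the decision boundaries between classes.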
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Data Imbalance
Communication Overhead
Resource-Constrained
Knowledge Fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Federated Prototype Learning
Gaussian Mixture Model
Bhattacharyya distance
Feature Imbalance
Communication Efficiency
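The synthetic-feature generation highlighted above can be sketched as sampling from a fused class prototype. This is a minimal sketch under a diagonal-Gaussian assumption; the function name and `seed` parameter are hypothetical, and the paper's actual generation procedure may differ.

```python
import numpy as np

def generate_pseudo_features(mu, var, n_samples, seed=0):
    """Draw synthetic features from a fused class prototype N(mu, diag(var)),
    e.g. to top up a minority class before local training."""
    rng = np.random.default_rng(seed)
    return rng.normal(mu, np.sqrt(var), size=(n_samples, mu.shape[0]))
```

Because only the low-dimensional prototype statistics (mean and variance per class) travel between clients and server, this kind of augmentation avoids exchanging raw features or full model parameters.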