Breaking the Prototype Bias Loop: Confidence-Aware Federated Contrastive Learning for Highly Imbalanced Clients

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of prototype bias accumulation in federated contrastive learning caused by severe class imbalance and data heterogeneity across clients. To mitigate this, the authors propose a confidence-aware federated contrastive learning framework that integrates a confidence-aware prototype aggregation mechanism based on predictive uncertainty, minority-class generative data augmentation, and geometric consistency regularization. This approach effectively reduces the variance of prototype estimation and ensures algorithmic convergence. Experimental results demonstrate that the proposed method significantly outperforms existing federated learning baselines under various non-IID and class-imbalanced settings, achieving notable improvements in both overall accuracy and client-wise fairness.

📝 Abstract
Local class imbalance and data heterogeneity across clients often trap prototype-based federated contrastive learning in a prototype bias loop: biased local prototypes induced by imbalanced data are aggregated into biased global prototypes, which are repeatedly reused as contrastive anchors, accumulating errors across communication rounds. To break this loop, we propose Confidence-Aware Federated Contrastive Learning (CAFedCL), a novel framework that improves the prototype aggregation mechanism and strengthens prototype-guided contrastive alignment. CAFedCL employs a confidence-aware aggregation mechanism that leverages predictive uncertainty to downweight high-variance local prototypes. In addition, generative augmentation for minority classes and geometric consistency regularization are integrated to stabilize the inter-class structure. From a theoretical perspective, we provide an expectation-based analysis showing that our aggregation reduces estimation variance, thereby bounding global prototype drift and ensuring convergence. Extensive experiments under varying levels of class imbalance and data heterogeneity demonstrate that CAFedCL consistently outperforms representative federated baselines in both accuracy and client fairness.
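The core aggregation idea in the abstract — downweighting high-uncertainty local prototypes when forming global ones — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exponential entropy-to-confidence mapping and the array shapes are assumptions for the sketch.

```python
import numpy as np

def confidence_weighted_prototypes(local_protos, local_entropies):
    """Aggregate per-client class prototypes on the server,
    downweighting clients whose predictions are uncertain.

    local_protos:    (n_clients, n_classes, dim) local class prototypes
    local_entropies: (n_clients, n_classes) mean predictive entropy
                     reported by each client for each class
    """
    protos = np.asarray(local_protos, dtype=float)
    ent = np.asarray(local_entropies, dtype=float)

    # Map uncertainty to confidence: lower entropy -> larger weight.
    conf = np.exp(-ent)                                    # (n_clients, n_classes)

    # Normalize weights per class across clients so they sum to 1.
    weights = conf / conf.sum(axis=0, keepdims=True)

    # Confidence-weighted average over clients for each class.
    global_protos = np.einsum("ck,ckd->kd", weights, protos)
    return global_protos
```

With equal entropies the scheme reduces to plain averaging; a client with higher entropy for a class contributes proportionally less to that class's global prototype, which is the variance-reduction effect the abstract attributes to the method.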
Problem

Research questions and friction points this paper is trying to address.

prototype bias
federated learning
class imbalance
data heterogeneity
contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Contrastive Learning
Prototype Bias
Confidence-Aware Aggregation
Class Imbalance
Geometric Consistency Regularization
Tian-Shuang Wu
Key Laboratory of Water Big Data Technology of Ministry of Water Resources, College of Computer Science and Software Engineering, Hohai University, Nanjing, China
Shen-Huan Lyu
Hohai University
Artificial Intelligence, Machine Learning, Data Mining
Ning Chen
Key Laboratory of Water Big Data Technology of Ministry of Water Resources, College of Computer Science and Software Engineering, Hohai University, Nanjing, China
Yi-Xiao He
Nanjing University of Chinese Medicine
Machine Learning, Data Mining
Bing Tang
Key Laboratory of Water Big Data Technology of Ministry of Water Resources, College of Computer Science and Software Engineering, Hohai University, Nanjing, China
Baoliu Ye
Associate Professor of Computer Science, Nanjing University, China
Wireless Network, Distributed Computing
Qingfu Zhang
Chair Professor, FIEEE, City University of Hong Kong
evolutionary computation, multiobjective optimization, computational intelligence