FederatedFactory: Generative One-Shot Learning for Extremely Non-IID Distributed Scenarios

📅 2026-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the severe performance degradation in federated learning under extreme label non-IID settings, where conventional methods suffer from conflicting local optimizations and reliance on pretrained models. The authors propose a single-round communication framework that eliminates external dependencies by redefining the federated unit as generative priors rather than discriminative parameters. Through the exchange of generative modules, the method synthesizes a class-balanced global dataset from scratch, mitigating gradient conflicts and external biases. Notably, this approach is the first demonstration of highly non-IID federated learning without any pretraining, while also enabling precise modular unlearning. Empirical results show substantial improvements: accuracy on CIFAR-10 rises from 11.36% to 90.57%, and AUROC on ISIC2019 recovers to 90.57%, matching the performance upper bound of centralized training.

๐Ÿ“ Abstract
Federated Learning (FL) enables distributed optimization without compromising data sovereignty. Yet, when local label distributions are mutually exclusive, standard weight aggregation fails due to conflicting optimization trajectories. Moreover, FL methods often rely on pretrained foundation models, introducing unrealistic assumptions. We introduce FederatedFactory, a zero-dependency framework that inverts the unit of federation from discriminative parameters to generative priors. By exchanging generative modules in a single communication round, our architecture supports ex nihilo synthesis of universally class-balanced datasets, entirely eliminating gradient conflict and external prior bias. Evaluations across diverse medical imagery benchmarks, including MedMNIST and ISIC2019, demonstrate that our approach recovers centralized upper-bound performance. Under pathological heterogeneity, it lifts baseline accuracy from a collapsed 11.36% to 90.57% on CIFAR-10 and restores ISIC2019 AUROC to 90.57%. Additionally, the framework supports exact modular unlearning through deterministic deletion of specific generative modules.
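The one-round pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual architecture: the `GenerativeModule` here is a toy per-class diagonal Gaussian standing in for whatever generative prior the authors use, and all function names (`client_update`, `server_synthesize`) are invented for the sketch. It shows the three claimed properties: clients with mutually exclusive labels upload generative modules in a single round, the server synthesizes a class-balanced dataset from them, and unlearning is the deterministic deletion of a module.

```python
# Hedged sketch of the generative one-shot federation idea; the Gaussian
# "generative module" and all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class GenerativeModule:
    """Per-class generative prior (here: a diagonal Gaussian fit to local data)."""
    def __init__(self, label, X):
        self.label = label
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-6  # avoid zero variance

    def sample(self, n, rng):
        return rng.normal(self.mean, self.std, size=(n, self.mean.size))

def client_update(client_data):
    # Single communication round: each client fits one module per local
    # class and uploads the modules instead of discriminative weights.
    return [GenerativeModule(y, X) for y, X in client_data.items()]

def server_synthesize(modules, per_class, rng):
    # Build a class-balanced global dataset "ex nihilo" from the priors,
    # sidestepping gradient conflict between clients entirely.
    X = np.vstack([m.sample(per_class, rng) for m in modules])
    y = np.repeat([m.label for m in modules], per_class)
    return X, y

# Two clients with mutually exclusive labels (pathological label skew).
client_a = {0: rng.normal(0.0, 1.0, (50, 4)), 1: rng.normal(3.0, 1.0, (50, 4))}
client_b = {2: rng.normal(-3.0, 1.0, (50, 4))}

modules = client_update(client_a) + client_update(client_b)
X, y = server_synthesize(modules, per_class=100, rng=rng)
print(np.bincount(y).tolist())   # class-balanced: [100, 100, 100]

# Modular unlearning: deterministically delete one client's modules
# and resynthesize; no retraining of other clients is needed.
retained = [m for m in modules if m.label != 2]
X2, y2 = server_synthesize(retained, per_class=100, rng=rng)
print(sorted(set(y2.tolist())))  # class 2 removed: [0, 1]
```

A downstream classifier would then be trained centrally on the synthetic `(X, y)`, which is why the approach can approach the centralized upper bound when the generative priors are faithful.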
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Non-IID
One-Shot Learning
Data Heterogeneity
Label Distribution Skew
Innovation

Methods, ideas, or system contributions that make the work stand out.

federated learning
generative one-shot learning
extremely non-IID
generative priors
modular unlearning