SCOPE: Semantic Coreset using Orthogonal Projection Embeddings for Federated Learning

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation and excessive communication overhead in federated learning caused by extreme class imbalance and data heterogeneity. It proposes the first semantic coreset framework tailored for federated learning, which leverages orthogonal projection embeddings to dynamically select high-quality samples in the local latent space based on a tripartite scoring criterion—representativeness, diversity, and boundary proximity. To ensure global consistency while minimizing communication, clients upload only scalar metrics rather than raw or embedded data. Experimental results demonstrate that the proposed method achieves competitive accuracy and robust convergence while reducing uplink bandwidth by 128×–512×, accelerating training by 7.72×, and substantially lowering local computational and GPU memory requirements.
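The scalar-only upload described above can be illustrated with a minimal sketch. The specific statistics, function names, and the pooling rule below are assumptions for illustration; the paper only states that clients share scalar metrics from which the server builds a global consensus.

```python
import numpy as np

def client_summary(scores):
    """Hypothetical client-side step: reduce local per-sample scores to a
    few scalars so no raw data or embeddings ever leave the client."""
    return {"mean": float(np.mean(scores)),
            "var": float(np.var(scores)),
            "n": int(len(scores))}

def global_consensus(summaries):
    """Hypothetical server-side step: pool the scalar summaries into a
    single global threshold (here, a sample-weighted mean)."""
    total = sum(s["n"] for s in summaries)
    return sum(s["mean"] * s["n"] for s in summaries) / total
```

Each client would then prune locally against the returned consensus value, which keeps the uplink payload to a handful of floats per round rather than per-sample data.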

📝 Abstract
Scientific discovery increasingly requires learning on federated datasets that are fed by streams from high-resolution instruments and exhibit extreme class imbalance. Current ML approaches either require impractical data aggregation or fail due to class imbalance. Existing coreset selection methods rely on local heuristics, making them unaware of the global data landscape and prone to sub-optimal, non-representative pruning. To overcome these challenges, we introduce SCOPE (Semantic Coreset using Orthogonal Projection Embeddings for Federated Learning), a coreset framework for federated data that filters anomalies and adaptively prunes redundant data to mitigate long-tail skew. By analyzing the latent space distribution, we score each data point using a representation score that measures the reliability of core class features, a diversity score that quantifies the novelty of orthogonal residuals, and a boundary proximity score that indicates similarity to competing classes. Unlike prior methods, SCOPE shares only scalar metrics with a federated server to construct a global consensus, ensuring communication efficiency. Guided by the global consensus, SCOPE dynamically filters local noise and discards redundant samples to counteract global feature skews. Extensive experiments demonstrate that SCOPE yields competitive global accuracy and robust convergence, all while achieving exceptional efficiency: a 128x to 512x reduction in uplink bandwidth, a 7.72x wall-clock acceleration, and reduced FLOP and VRAM footprints for local coreset selection.
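The tripartite scoring in the abstract can be sketched as follows. This is a minimal reading of the text, assuming unit-norm class prototypes in latent space: representativeness as the scalar projection onto the own-class axis, diversity as the norm of the orthogonal residual, and boundary proximity as the highest cosine similarity to any competing class. The function name and exact formulas are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def scope_scores(z, prototypes, y):
    """Illustrative tripartite scoring (assumed formulation).
    z: (n, d) latent embeddings; prototypes: (C, d) class prototypes;
    y: (n,) integer class labels."""
    # Unit-normalize prototypes so projections are scalar coefficients.
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    own = P[y]                                  # (n, d) own-class direction
    proj = np.sum(z * own, axis=1)              # projection onto class axis
    residual = z - proj[:, None] * own          # component orthogonal to it
    # Representativeness: how much of the embedding lies on the class axis.
    rep = proj / (np.linalg.norm(z, axis=1) + 1e-8)
    # Diversity: magnitude of the orthogonal residual (novelty).
    div = np.linalg.norm(residual, axis=1)
    # Boundary proximity: best cosine similarity to a competing class.
    cos = (z / np.linalg.norm(z, axis=1, keepdims=True)) @ P.T  # (n, C)
    cos[np.arange(len(y)), y] = -np.inf         # mask the own class
    bnd = cos.max(axis=1)
    return rep, div, bnd
```

A selection policy would then combine the three scores, e.g. keeping samples with high representativeness or high diversity while down-weighting those far past the decision boundary; how SCOPE weights the three criteria is not specified in this card.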
Problem

Research questions and friction points this paper is trying to address.

federated learning
class imbalance
coreset selection
data heterogeneity
communication efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

federated learning
coreset selection
class imbalance
orthogonal projection embeddings
communication efficiency