FedOC: Optimizing Global Prototypes with Orthogonality Constraints for Enhancing Embeddings Separation in Heterogeneous Federated Learning

πŸ“… 2025-02-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In heterogeneous federated learning (HtFL), statistical and model heterogeneity jointly impede effective global prototype formation and yield poorly discriminative embeddings. To address this, the paper proposes FedOC, which introduces orthogonality constraints into global prototype optimization. FedOC enforces pairwise orthogonality among class prototypes, provides a convergence guarantee under non-convex optimization, and jointly optimizes prototype-guided directional alignment with the cross-entropy loss, thereby enhancing intra-class embedding similarity and inter-class angular separation simultaneously. Extensive experiments across multiple heterogeneous settings show that FedOC achieves up to a 10.12% absolute accuracy improvement, outperforming seven state-of-the-art HtFL baselines. The work frames HtFL representation learning around geometrically structured prototype optimization.

πŸ“ Abstract
Federated Learning (FL) has emerged as an essential framework for distributed machine learning, especially with its potential for privacy-preserving data processing. However, existing FL frameworks struggle to address statistical and model heterogeneity, which severely impacts model performance. While Heterogeneous Federated Learning (HtFL) introduces prototype-based strategies to address these challenges, current approaches face limitations in achieving optimal separation of prototypes. This paper presents FedOC, a novel HtFL algorithm designed to improve global prototype separation through orthogonality constraints, which not only increase intra-class prototype similarity but also significantly expand inter-class angular separation. Guided by the global prototypes, each client keeps its embeddings aligned with the corresponding prototype in the feature space, promoting directional independence that integrates seamlessly with the cross-entropy (CE) loss. We provide theoretical proof of FedOC's convergence under non-convex conditions. Extensive experiments demonstrate that FedOC outperforms seven state-of-the-art baselines, achieving up to a 10.12% accuracy improvement in both statistical and model heterogeneity settings.
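The abstract describes two geometric ingredients: pairwise orthogonality among class prototypes, and aligning each client embedding with its class prototype's direction alongside the CE loss. A minimal sketch of plausible loss terms is below; this is an illustration based only on the abstract's description, not FedOC's actual formulation, and the function names and weighting scheme are assumptions.

```python
import numpy as np

def orthogonality_loss(prototypes: np.ndarray) -> float:
    """Mean squared off-diagonal cosine similarity among class prototypes.

    prototypes: (C, d) array, one row per class. Zero when all prototypes
    are pairwise orthogonal, which is the geometry FedOC encourages.
    """
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    gram = P @ P.T                      # (C, C) pairwise cosine similarities
    C = P.shape[0]
    off_diag = gram - np.eye(C)         # remove self-similarity terms
    return float((off_diag ** 2).sum() / (C * (C - 1)))

def alignment_loss(embeddings: np.ndarray, labels: np.ndarray,
                   prototypes: np.ndarray) -> float:
    """Mean (1 - cosine similarity) between each embedding and its
    class prototype, pulling embeddings toward the prototype direction."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes[labels]
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(z * p, axis=1)))

# A client objective might combine these with CE (weights are hypothetical):
# total = ce_loss + lam_align * alignment_loss(z, y, P) \
#                 + lam_orth * orthogonality_loss(P)
```

With orthonormal prototypes (e.g. the rows of an identity matrix) both terms evaluate to zero, matching the abstract's goal of maximal inter-class angular separation and tight intra-class alignment.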
Problem

Research questions and friction points this paper is trying to address.

Global prototypes in HtFL are poorly separated, limiting their usefulness as guidance.
Statistical and model heterogeneity degrade model performance in FL.
Client embeddings lack sufficient inter-class separation in the feature space.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonality constraints on global prototypes enhance inter-class separation.
Global prototypes guide client embedding alignment in the feature space.
FedOC improves accuracy by up to 10.12% in heterogeneous settings.
πŸ”Ž Similar Papers
No similar papers found.
Authors
Fucheng Guo, Shenzhen International Graduate School, Tsinghua University
Zeyu Luan, Peng Cheng Laboratory
Qing Li, Peng Cheng Laboratory, China
Dan Zhao, Peng Cheng Laboratory, China
Yong Jiang, Shenzhen International Graduate School, Tsinghua University