🤖 AI Summary
In heterogeneous federated learning (HtFL), statistical and model heterogeneity jointly impede effective global prototype formation and yield poorly discriminative embeddings. To address this, we propose FedOC, the first method to introduce orthogonality constraints into prototype optimization. FedOC enforces pairwise orthogonality among class prototypes, theoretically guarantees convergence under non-convex optimization, and jointly optimizes prototype-guided directional alignment with the cross-entropy loss, thereby enhancing intra-class embedding similarity and inter-class angular separation simultaneously. Extensive experiments across multiple heterogeneous settings demonstrate that FedOC achieves up to a 10.12% absolute accuracy improvement, outperforming seven state-of-the-art HtFL baselines. Our work establishes a novel representation learning paradigm for HtFL grounded in geometrically structured prototype optimization.
📝 Abstract
Federated Learning (FL) has emerged as an essential framework for distributed machine learning, particularly for its ability to process data in a privacy-preserving manner. However, existing FL frameworks struggle to address statistical and model heterogeneity, which severely degrades model performance. While Heterogeneous Federated Learning (HtFL) introduces prototype-based strategies to address these challenges, current approaches fall short of achieving optimal separation of prototypes. This paper presents FedOC, a novel HtFL algorithm designed to improve global prototype separation through orthogonality constraints, which not only increase intra-class prototype similarity but also significantly expand the inter-class angular separation. Guided by the global prototypes, each client keeps its embeddings aligned with the corresponding prototype in the feature space, promoting directional independence that integrates seamlessly with the cross-entropy (CE) loss. We provide theoretical proof of FedOC's convergence under non-convex conditions. Extensive experiments demonstrate that FedOC outperforms seven state-of-the-art baselines, achieving up to a 10.12% accuracy improvement under both statistical and model heterogeneity settings.
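To make the two geometric objectives concrete, the following is a minimal NumPy sketch (not the paper's implementation; function names and the exact loss forms are illustrative assumptions): an orthogonality penalty that drives pairwise cosine similarities between class prototypes toward zero, and a directional alignment term that pulls each client embedding toward its class prototype.

```python
import numpy as np

def orthogonality_penalty(prototypes: np.ndarray) -> float:
    """Sum of squared off-diagonal cosine similarities between class prototypes.

    Driving this toward zero encourages pairwise-orthogonal prototypes,
    i.e. maximal inter-class angular separation. (Illustrative form only.)
    """
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    G = P @ P.T                       # cosine-similarity Gram matrix
    off_diag = G - np.eye(len(G))     # zero out self-similarities
    return float(np.sum(off_diag ** 2))

def alignment_loss(embeddings: np.ndarray, labels: np.ndarray,
                   prototypes: np.ndarray) -> float:
    """Mean (1 - cosine similarity) between each embedding and its
    class's global prototype; added to the CE loss on each client.
    (Illustrative form only.)
    """
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    Z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos = np.sum(Z * P[labels], axis=1)  # cosine to the matching prototype
    return float(np.mean(1.0 - cos))
```

With perfectly orthogonal prototypes (e.g. an identity matrix) the penalty is zero, and embeddings lying exactly on their class prototype's direction incur zero alignment loss, regardless of magnitude.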