Mixture of Predefined Experts: Maximizing Data Usage on Vertical Federated Learning

📅 2026-02-13
📈 Citations: 0
Influential: 0

📝 Abstract
Vertical Federated Learning (VFL) has emerged as a critical paradigm for collaborative model training in privacy-sensitive domains such as finance and healthcare. However, most existing VFL frameworks rely on the idealized assumption of full sample alignment across participants, a premise that rarely holds in real-world scenarios. To bridge this gap, this work introduces Split-MoPE, a novel framework that integrates Split Learning with a specialized Mixture of Predefined Experts (MoPE) architecture. Unlike standard Mixture of Experts (MoE), where routing is learned dynamically, MoPE uses predefined experts to process specific data alignments, effectively maximizing data usage during both training and inference without requiring full sample overlap. By leveraging pretrained encoders for target data domains, Split-MoPE achieves state-of-the-art performance in a single communication round, significantly reducing the communication footprint compared to multi-round end-to-end training. Furthermore, unlike existing proposals that address sample misalignment, this novel architecture provides inherent robustness against malicious or noisy participants and offers per-sample interpretability by quantifying each collaborator's contribution to each prediction. Extensive evaluations on vision (CIFAR-10/100) and tabular (Breast Cancer Wisconsin) datasets demonstrate that Split-MoPE consistently outperforms state-of-the-art systems such as LASER and Vertical SplitNN, particularly in challenging scenarios with high data missingness.
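The core idea in the abstract — replacing a learned MoE gate with experts preassigned to specific sample-alignment patterns — can be sketched in a few lines. This is an illustrative assumption based only on the abstract's description, not the paper's actual implementation: the two-party setup, the three-expert split, and the function names are all hypothetical.

```python
import numpy as np

def predefined_gate(alignment_mask):
    """Map a binary alignment mask (which parties hold features for this
    sample) to fixed expert weights, with no learned routing.
    Experts: 0 = both parties present, 1 = only party A, 2 = only party B."""
    a, b = alignment_mask
    if a and b:
        return np.array([1.0, 0.0, 0.0])
    if a:
        return np.array([0.0, 1.0, 0.0])
    return np.array([0.0, 0.0, 1.0])

def mope_predict(expert_logits, alignment_mask):
    """Combine per-expert logits with the predefined weights.
    The gate weights double as per-sample interpretability: they state
    exactly which collaborators contributed to each prediction."""
    weights = predefined_gate(alignment_mask)
    return weights @ expert_logits  # weighted sum over experts

# Toy usage: 3 experts producing 4-class logits for one sample.
logits = np.array([[0.1, 0.9, 0.0, 0.0],   # expert for fully aligned samples
                   [0.8, 0.1, 0.1, 0.0],   # expert for party-A-only samples
                   [0.0, 0.2, 0.7, 0.1]])  # expert for party-B-only samples

print(mope_predict(logits, (1, 0)))  # party A only -> expert 1 handles it
```

Because the routing is a fixed function of the alignment pattern, misaligned samples are never discarded — each reaches the expert built for its pattern, which is how the framework maximizes data usage without full sample overlap.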
Problem

Research questions and friction points this paper is trying to address.

Vertical Federated Learning
Sample Alignment
Data Missingness
Mixture of Experts
Privacy-Preserving Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vertical Federated Learning
Mixture of Predefined Experts
Split Learning
Sample Misalignment
Communication Efficiency
Jon Irureta
Ikerlan Technology Research Center, Arrasate, Spain; University of the Basque Country UPV/EHU, Spain
Gorka Azkune
Associate Professor at University of the Basque Country (UPV/EHU)
Natural language processing, Computer vision, Deep learning, Artificial intelligence
Jon Imaz
Ikerlan Technology Research Center, Arrasate, Spain
Aizea Lojo
Ikerlan Technology Research Center, Arrasate, Spain
Javier Fernandez-Marques
Research Scientist, FlowerLabs
Federated Learning, Efficient ML, Embedded Systems