The Sherpa.ai Blind Vertical Federated Learning Paradigm to Minimize the Number of Communications

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vertical federated learning (VFL) faces critical bottlenecks—including high communication overhead, elevated privacy leakage risks, and substantial energy consumption—that hinder its deployment in privacy-sensitive domains such as healthcare and finance. To address these challenges, we propose SBVFL, a novel decentralized VFL paradigm that decouples local model updates at participating nodes from global synchronization at the central server, thereby drastically reducing node-server communication rounds. SBVFL integrates cryptographic blinding techniques with vertical feature sharding to ensure end-to-end privacy preservation while optimizing collaborative computation. Extensive experiments demonstrate that SBVFL reduces total communication volume by approximately 99% compared to standard VFL, with negligible degradation in model accuracy and robustness. This significantly alleviates the communication efficiency bottleneck of VFL, offering a practical, privacy-preserving pathway for efficient federated modeling in high-stakes applications.

📝 Abstract
Federated Learning (FL) enables collaborative decentralized training across multiple parties (nodes) while keeping raw data private. There are two main paradigms in FL: Horizontal FL (HFL), where all participant nodes share the same feature space but hold different samples, and Vertical FL (VFL), where participants hold complementary features for the same samples. While HFL is widely adopted, VFL is employed in domains where features are naturally split across parties. Still, VFL presents a significant limitation: the vast number of communications required during training. This compromises privacy and security, can lead to high energy consumption, and in some cases makes model training unfeasible. In this paper, we introduce Sherpa.ai Blind Vertical Federated Learning (SBVFL), a novel paradigm that leverages a distributed training mechanism enhanced for privacy and security. Decoupling the vast majority of node updates from the server dramatically reduces node-server communication. Experiments show that SBVFL reduces communication by ~99% compared to standard VFL while maintaining accuracy and robustness. Therefore, SBVFL enables practical, privacy-preserving VFL across sensitive domains, including healthcare, finance, manufacturing, aerospace, cybersecurity, and the defense industry.
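The HFL/VFL distinction in the abstract can be pictured as two ways of splitting one data matrix. A minimal sketch (toy data, not from the paper): HFL partitions by rows (samples), VFL by columns (features).

```python
import numpy as np

# Toy dataset: 6 samples x 4 features (illustrative only).
X = np.arange(24).reshape(6, 4)

# Horizontal FL: nodes hold DIFFERENT SAMPLES over the SAME feature space.
hfl_node_a, hfl_node_b = X[:3, :], X[3:, :]   # row-wise split

# Vertical FL: nodes hold COMPLEMENTARY FEATURES for the SAME samples.
vfl_node_a, vfl_node_b = X[:, :2], X[:, 2:]   # column-wise split

print(hfl_node_a.shape, hfl_node_b.shape)  # each has all 4 features
print(vfl_node_a.shape, vfl_node_b.shape)  # each has all 6 samples
```

Because VFL nodes each hold only part of every sample's features, every forward/backward pass ordinarily needs an exchange with the server, which is the communication bottleneck the paper targets.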
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in Vertical Federated Learning
Enhancing privacy and security in distributed model training
Maintaining accuracy while minimizing node-server communications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Blind Vertical Federated Learning reduces communications
Distributed training mechanism enhances privacy and security
Decouples node updates to minimize server communication
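The communication saving from decoupling can be sketched with simple counting. This is a hypothetical illustration, not the paper's actual SBVFL protocol: assume nodes run `local_steps` private updates between server synchronizations instead of syncing every step.

```python
# Hypothetical counting model (assumption, not the SBVFL algorithm itself):
# one node-server exchange per synchronization round.
def communications(total_steps: int, local_steps: int) -> int:
    """Node-server exchanges when syncing once every `local_steps` steps."""
    return total_steps // local_steps

baseline  = communications(10_000, 1)    # standard VFL: sync every step
decoupled = communications(10_000, 100)  # decoupled: sync every 100th step
reduction = 1 - decoupled / baseline
print(f"{reduction:.0%} fewer exchanges")  # 99% fewer exchanges
```

Under these assumed numbers, syncing every 100th step yields the ~99% reduction in communications the abstract reports; the paper's mechanism additionally blinds what is exchanged.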