🤖 AI Summary
Vertical federated learning (VFL) faces the challenge of protecting feature-level and sample-level privacy at the same time, a gap unaddressed by existing approaches.
Method: We propose the first efficient VFL framework that jointly guarantees both privacy notions. Our approach integrates secure multi-party computation (SMPC) with the Poisson Binomial Mechanism (PBM), gives the first formal definition of “feature privacy,” and establishes a theoretical trade-off among privacy budget, model convergence error, and communication cost. Our analysis further reveals a fundamental distinction between sample privacy loss in VFL and in horizontal FL (HFL).
Results: Extensive experiments demonstrate that our framework achieves high model accuracy under strong differential privacy guarantees, consistently outperforming state-of-the-art DP-VFL methods across multiple benchmark datasets. Moreover, it reduces communication overhead by 40%–65%, striking a superior balance among privacy, utility, and efficiency.
📝 Abstract
We present Poisson Binomial Mechanism Vertical Federated Learning (PBM-VFL), a communication-efficient Vertical Federated Learning algorithm with Differential Privacy guarantees. PBM-VFL combines Secure Multi-Party Computation with the recently introduced Poisson Binomial Mechanism to protect parties' private datasets during model training. We define the novel concept of feature privacy and analyze end-to-end feature and sample privacy of our algorithm. We compare sample privacy loss in VFL with privacy loss in HFL. We also provide the first theoretical characterization of the relationship between privacy budget, convergence error, and communication cost in differentially private VFL. Finally, we empirically show that our model performs well with high levels of privacy.
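To make the building block concrete, here is a minimal sketch of Poisson Binomial Mechanism encoding and decoding. This is an illustration of the general PBM idea (each party maps a clipped value to a binomial success probability, samples a count, and the aggregator unbiases the summed counts), not the paper's implementation; the parameter names `m` and `theta` and their values are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def pbm_encode(x, m=16, theta=0.25):
    """Encode a scalar x (clipped to [-1, 1]) as a binomial count.

    With theta <= 1/2, the success probability p = 1/2 + theta * x
    stays inside [0, 1]; the sampled count is what a party would send.
    """
    p = 0.5 + theta * np.clip(x, -1.0, 1.0)
    return rng.binomial(m, p)

def pbm_decode(total, n_parties, m=16, theta=0.25):
    """Unbiased estimate of the sum of the parties' inputs from the
    aggregated count (the sum of binomials is Poisson-binomial)."""
    return (total - n_parties * m / 2) / (m * theta)

xs = rng.uniform(-1, 1, size=5)          # one scalar per party
total = sum(pbm_encode(x) for x in xs)    # in the framework, summed under SMPC
est = pbm_decode(total, len(xs))          # noisy estimate of xs.sum()
```

In the framework described above, the per-party counts would be aggregated under SMPC so that the server only ever observes the sum; the differential-privacy guarantee comes from the inherent binomial randomness, and the communication cost per coordinate is governed by the bit-width of the counts (roughly log of `m`), which is the source of the privacy/convergence/communication trade-off.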