🤖 AI Summary
This work addresses key challenges in vertical federated learning, namely high communication overhead, asynchronous device participation, and slow local computation, by recasting the training problem as a saddle-point optimization within a Lagrangian dual framework. This approach systematically integrates communication compression, partial device participation, and coordinate selection strategies for the first time. By moving beyond the conventional pure-minimization formulation, the proposed method enables more efficient and flexible collaborative training. Theoretical analysis establishes convergence guarantees for several algorithmic variants, while empirical evaluations demonstrate significant improvements in both training efficiency and model performance.
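As a rough illustration of the kind of reformulation involved, consider a linear VFL model in which device $m$ holds the feature block $X_m$ and local parameters $w_m$ (our notation; the paper's exact setup may differ). Introducing an auxiliary prediction variable $z$ and a dual variable $\lambda$ turns the training problem into a saddle-point problem over the classical Lagrangian:

$$
\min_{w_1,\dots,w_M} \ell\Big(\sum_{m=1}^{M} X_m w_m\Big)
\;=\;
\min_{w,\,z}\;\max_{\lambda}\;\Big\{\, \ell(z) + \Big\langle \lambda,\ \sum_{m=1}^{M} X_m w_m - z \Big\rangle \,\Big\},
$$

where $\lambda$ enforces the consensus constraint $\sum_m X_m w_m = z$ (maximizing over $\lambda$ yields $+\infty$ unless the constraint holds, so the two problems coincide). In this form, each device only needs $\lambda$ and its own block $X_m w_m$ to update, which is what makes compressed messages and partial participation natural here.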
📝 Abstract
The objective of Vertical Federated Learning (VFL) is to collaboratively train a model using features that are held on different devices but describe the same set of users. This paper focuses on the saddle-point reformulation of the VFL problem via the classical Lagrangian function. We first demonstrate how this formulation can be solved using deterministic methods. More importantly, we explore various stochastic modifications that adapt to practical scenarios, such as employing compression techniques for efficient information transmission, enabling partial participation for asynchronous communication, and utilizing coordinate selection for faster local computation. We show that the saddle-point reformulation plays a key role here: it opens up possibilities for these extensions that appear out of reach in the standard minimization formulation. Convergence estimates are provided for each algorithm, demonstrating their effectiveness in addressing the VFL problem. Additionally, alternative reformulations are investigated, and numerical experiments are conducted to validate the performance and effectiveness of the proposed approach.
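To make the compression ingredient concrete, below is a minimal sketch of one standard unbiased compressor, rand-$k$ sparsification, which a device could apply to its uplink messages. This is our illustrative choice, not necessarily the compressor studied in the paper; the function name `rand_k` is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(v: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsification: transmit only k random coordinates,
    rescaled by d/k so that E[rand_k(v)] = v."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * v[idx]
    return out

# A device's message (e.g., its local predictions X_m @ w_m), dimension 1000.
v = rng.standard_normal(1000)
k = 100  # only 10% of the coordinates are sent per round

# Empirical check of unbiasedness: averaging many compressed copies
# recovers the original vector up to Monte Carlo noise.
est = np.mean([rand_k(v, k, rng) for _ in range(5000)], axis=0)
print("relative error:", np.linalg.norm(est - v) / np.linalg.norm(v))
```

Unbiasedness is what lets such a compressor plug into stochastic saddle-point methods: the compressed message acts as a noisy but unbiased estimate of the true one, so convergence can still be established in expectation.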