🤖 AI Summary
To address excessive computational and communication overhead in vertical federated learning (VFL) for smart-building IoT systems, this paper proposes LVFL—a lightweight VFL framework that introduces the first systematic joint light-weighting paradigm. LVFL enhances local computational efficiency via feature-model pruning and quantization, while reducing communication load through low-dimensional feature embedding mapping and error-compensated gradient compression. Theoretically, we derive a convergence upper bound that jointly accounts for both computational and communication compression ratios. Empirically, on image classification tasks, LVFL achieves over 60% reduction in both computation and communication costs, with less than 1.2% accuracy degradation, while preserving strong generalization performance.
📝 Abstract
The exploration of computational and communication efficiency within Federated Learning (FL) has emerged as a prominent and crucial field of study. While most existing efforts to enhance these efficiencies have focused on Horizontal FL, the distinct processes and model structures of Vertical FL preclude the direct application of Horizontal FL-based techniques. In response, we introduce the concept of Lightweight Vertical Federated Learning (LVFL), targeting both computational and communication efficiency. This approach involves separate lightweighting strategies: one for the feature model, to improve computational efficiency, and one for feature embedding, to enhance communication efficiency. Moreover, we establish a convergence bound for our LVFL algorithm that accounts for both communication and computational lightweighting ratios. Our evaluation of the algorithm on an image classification dataset reveals that LVFL significantly alleviates computational and communication demands while preserving robust learning performance. This work effectively addresses the gaps in communication and computational efficiency within Vertical FL.
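The error-compensated gradient compression mentioned in the summary can be illustrated with a standard error-feedback top-k sparsifier: the residual dropped in one round is carried over and added back before compressing the next round's gradient. This is a generic sketch of the technique, not LVFL's specific compressor (the paper's exact design, class names, and the `ratio` parameter here are illustrative assumptions):

```python
import numpy as np

def topk_compress(vec, ratio):
    """Keep only the largest-magnitude entries (top-k sparsification)."""
    k = max(1, int(len(vec) * ratio))
    idx = np.argsort(np.abs(vec))[-k:]
    out = np.zeros_like(vec)
    out[idx] = vec[idx]
    return out

class ErrorFeedbackCompressor:
    """Error-compensated compression: accumulate the compression error
    locally and fold it into the next gradient before compressing."""

    def __init__(self, dim, ratio):
        self.residual = np.zeros(dim)  # error carried between rounds
        self.ratio = ratio             # fraction of entries transmitted

    def compress(self, grad):
        corrected = grad + self.residual          # add back past error
        sent = topk_compress(corrected, self.ratio)
        self.residual = corrected - sent          # remember what was dropped
        return sent
```

Over many rounds, the transmitted updates sum to the true gradient signal, which is why error feedback typically preserves convergence despite aggressive compression.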