🤖 AI Summary
Traditional horizontal federated edge learning struggles to integrate complementary multi-view information from distributed edge devices. To address this, we propose Vertical Federated Edge Learning (VFEEL), the first VFL framework tailored for edge perception networks. VFEEL vertically partitions feature spaces across devices and jointly leverages wireless sensing, communication, and computation, employing over-the-air computation (AirComp) for noise-robust aggregation of feature embeddings. By pioneering the integration of vertical federated learning into edge sensing scenarios, VFEEL overcomes constraints imposed by device heterogeneity and channel impairments. We theoretically analyze its convergence under sensing noise and aggregation distortion. Experiments demonstrate that VFEEL significantly reduces training loss while improving model accuracy and robustness. This work establishes a novel paradigm for intelligent, collaborative learning in distributed edge environments.
📝 Abstract
Combining wireless sensing and edge intelligence, edge perception networks enable intelligent data collection and processing at the network edge. However, traditional sample-partition-based horizontal federated edge learning struggles to effectively fuse complementary multi-view information from distributed devices. To address this limitation, we propose a vertical federated edge learning (VFEEL) framework tailored for feature-partitioned sensing data. In this paper, we consider an integrated sensing, communication, and computation (ISCC)-enabled edge perception network, where multiple edge devices utilize wireless signals to sense environmental information for updating their local models, and the edge server aggregates feature embeddings via over-the-air computation (AirComp) for global model training. First, we analyze the convergence behavior of the ISCC-enabled VFEEL in terms of the loss-function degradation in the presence of wireless sensing noise and aggregation distortion during AirComp.
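The AirComp aggregation step described above can be sketched numerically: each device transmits its local feature embedding simultaneously, the wireless channel superposes the signals into a sum, and the server observes that sum plus additive noise. This is only a minimal illustration under assumed settings; the device count, dimensions, noise level, and function names below are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vertical partition: 3 edge devices each hold a disjoint
# feature slice of the same batch of samples and produce an embedding.
num_devices, batch, embed_dim = 3, 4, 8
embeddings = [rng.standard_normal((batch, embed_dim)) for _ in range(num_devices)]

def aircomp_aggregate(local_embeddings, noise_std=0.05):
    """Toy model of AirComp: simultaneous analog transmission lets the
    channel superpose the signals, so the receiver gets the coherent sum
    of all embeddings plus additive Gaussian receiver noise."""
    ideal_sum = np.sum(local_embeddings, axis=0)   # over-the-air superposition
    noise = noise_std * rng.standard_normal(ideal_sum.shape)
    return ideal_sum + noise                       # distorted aggregate at the server

aggregate = aircomp_aggregate(embeddings)
ideal = np.sum(embeddings, axis=0)
# Relative aggregation distortion introduced by the noisy channel
distortion = np.linalg.norm(aggregate - ideal) / np.linalg.norm(ideal)
```

The server then feeds the (distorted) aggregate into the global model; the paper's convergence analysis characterizes how this distortion, together with sensing noise, degrades the loss.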