🤖 AI Summary
To address the challenges of high mobility, resource constraints, and non-IID data distribution in vehicular edge intelligence (VEI)—which collectively lead to excessive communication overhead, slow model convergence, and inadequate privacy protection—this paper proposes an adaptive parallel split federated learning (SFL) architecture. The architecture introduces a novel dynamic model partitioning and computational load co-scheduling mechanism, enabling heterogeneous vehicular devices to perform lightweight, low-latency training under edge coordination. Integrating edge computing, distributed optimization, and privacy-preserving design, the proposed framework reduces uplink communication overhead by 47% and accelerates model convergence by 2.3× compared to baseline approaches. Extensive experiments on real-world vehicular trajectory datasets validate its effectiveness and robustness under dynamic network conditions.
📝 Abstract
To realize the ubiquitous intelligence of future vehicular networks, artificial intelligence (AI) is critical, since it can mine knowledge from vehicular data to improve the quality of many AI-driven vehicular services. By combining AI techniques with vehicular networks, Vehicular Edge Intelligence (VEI) can utilize the computing, storage, and communication resources of vehicles to train AI models. Nevertheless, when executing model training, the traditional centralized learning paradigm requires vehicles to upload their raw data to a central server, which results in significant communication overhead and the risk of privacy leakage. In this article, we first overview the system architectures, performance metrics, and design challenges of VEI. We then propose to utilize a distributed machine learning scheme, namely split federated learning (SFL), to boost the development of VEI. We present a novel adaptive and parallel SFL scheme and analyze its performance. Future research directions are highlighted to shed light on the efficient design of SFL.
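To make the SFL idea concrete, here is a minimal toy sketch of one parallel split-federated round: each vehicle (client) holds the lower layers, forwards "smashed" activations to an edge server that holds the upper layers, receives gradients back for its local update, and client-side weights are then averaged FedAvg-style. The layer sizes, learning rate, synthetic data, and class names are illustrative assumptions, not the paper's actual architecture or scheduling mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientModel:
    """Lower (client-side) layers kept on the vehicle."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(4, 8))

    def forward(self, x):
        self.x = x
        return np.tanh(x @ self.W)  # smashed data sent uplink to the edge server

    def backward(self, grad_act, lr=0.1):
        # Backpropagate through tanh using the gradient returned by the server.
        grad_pre = grad_act * (1 - np.tanh(self.x @ self.W) ** 2)
        self.W -= lr * self.x.T @ grad_pre

class ServerModel:
    """Upper (server-side) layers kept at the edge server."""
    def __init__(self):
        self.V = rng.normal(scale=0.1, size=(8, 1))

    def train_step(self, act, y, lr=0.1):
        pred = act @ self.V
        err = pred - y
        grad_act = (err @ self.V.T) / len(y)  # gradient sent back to the client
        self.V -= lr * act.T @ err / len(y)
        return float(np.mean(err ** 2)), grad_act

def fedavg(clients):
    # Federated averaging of the client-side model parts.
    mean_W = np.mean([c.W for c in clients], axis=0)
    for c in clients:
        c.W = mean_W.copy()

# Two vehicles with private local data train in parallel each round.
server = ServerModel()
clients = [ClientModel(), ClientModel()]
data = [rng.normal(size=(16, 4)) for _ in clients]
targets = [x @ rng.normal(size=(4, 1)) for x in data]

losses = []
for rnd in range(50):
    round_loss = 0.0
    for c, x, y in zip(clients, data, targets):
        act = c.forward(x)                    # client forward to the cut layer
        loss, grad_act = server.train_step(act, y)  # server finishes the pass
        c.backward(grad_act)                  # client-side backward update
        round_loss += loss
    fedavg(clients)
    losses.append(round_loss / len(clients))

print(f"mean loss: round 0 = {losses[0]:.4f}, round 49 = {losses[-1]:.4f}")
```

Note that only activations and gradients at the cut layer cross the network, never raw data, which is the communication/privacy trade-off the abstract refers to; an adaptive scheme would additionally move the cut layer per vehicle based on its compute and channel state.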