🤖 AI Summary
To address the coexisting challenges of high waiting latency, uneven resource utilization, and load congestion in vehicular edge computing (VEC) task offloading, this paper proposes a task-coordinated offloading mechanism based on parallel computation queues. Methodologically, it introduces: (i) an instantaneous edge server processing capacity prediction model integrated with discrete queue-state modeling to dynamically and accurately identify overloaded nodes; and (ii) a network-coordinated parallel queue scheduling strategy that jointly optimizes latency reduction and global load balancing. Theoretical analysis leverages queuing theory and parallel computation models, while simulations are conducted in a virtual environment driven by real-world road topology. Results demonstrate that the proposed scheme reduces average waiting latency by 21.6%–34.8% compared to state-of-the-art approaches, while remaining robust under highly dynamic vehicular traffic conditions.
📝 Abstract
This work considers a parallel task execution strategy in vehicular edge computing (VEC) networks, where edge servers deployed along the roadside process computational tasks offloaded by vehicular users. To minimize the overall waiting delay experienced by vehicular users, a novel task offloading solution is developed that uses network cooperation to balance resource under-utilization against load congestion. Combined theoretical and numerical evaluation shows that the developed solution achieves globally optimal delay reduction compared to existing methods, a result further confirmed by a feasibility test in a virtual environment built on a real road map. In-depth analysis reveals that predicting the instantaneous processing power of edge servers facilitates the identification of overloaded servers, which is critical for determining network delay. By modeling the discrete state variables of the queues, the proposed technique's precise estimation effectively addresses these combinatorial challenges and achieves optimal performance.
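The core idea of the abstract, predicting each server's instantaneous processing capacity, flagging overloaded nodes, and routing tasks so that waiting delay and load imbalance are jointly reduced, can be illustrated with a minimal toy sketch. This is an assumption-laden illustration, not the paper's actual algorithm: each edge server is modeled as a simple M/M/1 queue, "overload" is approximated by a fixed utilization threshold, and all class names, rates, and thresholds below are invented for illustration.

```python
# Toy sketch of overload-aware task offloading, assuming an M/M/1 queue
# model per edge server (an assumption, not the paper's actual model).
class EdgeServer:
    def __init__(self, name, service_rate):
        self.name = name
        self.mu = service_rate   # service rate: tasks/s the server can process
        self.lam = 0.0           # current arrival rate of offloaded load

    def utilization(self):
        # rho = lambda / mu; rho -> 1 means the queue is saturating
        return self.lam / self.mu

    def overloaded(self, threshold=0.9):
        # crude stand-in for the paper's instantaneous-capacity prediction:
        # flag the server when predicted utilization crosses a threshold
        return self.utilization() >= threshold

    def predicted_wait(self):
        # M/M/1 mean time in queue: W_q = rho / (mu - lambda)
        if self.lam >= self.mu:
            return float("inf")
        return self.utilization() / (self.mu - self.lam)


def offload(servers, task_rate):
    """Route a task stream to the non-overloaded server with the smallest
    predicted waiting delay (a simple load-balancing heuristic)."""
    candidates = [s for s in servers if not s.overloaded()]
    best = min(candidates or servers, key=lambda s: s.predicted_wait())
    best.lam += task_rate
    return best


# Hypothetical roadside units with different processing capacities.
servers = [EdgeServer("RSU-1", 10.0), EdgeServer("RSU-2", 8.0)]
for _ in range(5):
    offload(servers, task_rate=1.5)

for s in servers:
    print(s.name, round(s.utilization(), 3))
```

Because the router always picks the server with the lowest predicted waiting time, load spreads across both units roughly in proportion to their capacity, which mirrors the abstract's point that per-server delay prediction is what drives the global balancing.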