🤖 AI Summary
In vehicular edge computing (VEC), high vehicle mobility leads to short roadside unit (RSU) residence times, so offloaded tasks frequently miss their deadlines, resulting in high task drop rates and excessive latency. To address this, this paper proposes an online task offloading framework based on deep reinforcement learning (DRL). It integrates a Deep Q-Network (DQN) into dynamic vehicular environments and jointly models communication and computation delays to enable end-to-end low-latency decision-making. Compared with conventional particle swarm optimization (PSO), the proposed method reduces execution time by 99.2%, decreases the task drop rate by 2.5%, and lowers end-to-end latency by 18.6%. Experimental results demonstrate the DQN's superior real-time responsiveness, robustness to environmental dynamics, and scheduling efficiency. The framework provides a scalable, DRL-based solution for intelligent task offloading in mobile edge computing scenarios.
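The DQN-driven offloading decision summarized above can be sketched schematically. The action set, reward shape, and the source of the Q-values below are illustrative assumptions for exposition, not the paper's actual model:

```python
import random

# Schematic epsilon-greedy offloading decision, as used by DQN-style agents.
# The two-action space and reward constants are hypothetical placeholders.

ACTIONS = ["local", "rsu"]  # execute on the vehicle vs. offload to the RSU


def choose_action(q_values: dict[str, float], epsilon: float = 0.1) -> str:
    """Explore a random action with probability epsilon,
    otherwise exploit the action with the highest Q-value."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[a])


def reward(latency_s: float, deadline_s: float) -> float:
    """Negative latency when the deadline is met; a large fixed
    penalty when the task would be dropped."""
    return -latency_s if latency_s <= deadline_s else -10.0
```

In a full DQN the `q_values` would come from a neural network evaluated on the current state (channel rate, queue length, remaining RSU residence time, etc.); a single forward pass is what yields the near-instant online decisions reported above, in contrast to PSO, which must re-run a population search for every decision.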
📝 Abstract
Vehicular Mobile Edge Computing (VEC) enables low-latency, high-efficiency data processing at the edge of vehicular networks, driving innovation in areas such as autonomous driving, intelligent transportation systems, and real-time analytics. Despite its potential, VEC faces significant challenges, particularly in meeting strict task offloading deadlines, as vehicles remain within the coverage area of Roadside Units (RSUs) for only brief periods. To tackle this challenge, this paper first establishes a theoretical performance limit for task processing using Particle Swarm Optimization (PSO) in a static environment. To address more dynamic and practical scenarios, PSO, Deep Q-Network (DQN), and Proximal Policy Optimization (PPO) models are implemented in an online setting. The objective is to minimize dropped tasks and reduce end-to-end (E2E) latency, covering both communication and computation delays. Experimental results demonstrate that the DQN model considerably outperforms the dynamic PSO approach, achieving a 99.2% reduction in execution time. It also reduces dropped tasks by 2.5% relative to dynamic PSO and achieves 18.6% lower E2E latency, highlighting the effectiveness of Deep Reinforcement Learning (DRL) in enabling scalable and efficient task management for VEC systems.
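As a minimal sketch of the E2E latency objective described above, assuming the common decomposition into an upload (communication) delay and an RSU execution (computation) delay, with a task dropped if it cannot finish before its deadline or before the vehicle leaves the RSU's coverage. All function names and numeric values are illustrative, not taken from the paper:

```python
# Illustrative E2E latency model: communication delay + computation delay.
# Parameter values below are hypothetical examples.

def e2e_latency(task_bits: float, uplink_rate_bps: float,
                task_cycles: float, rsu_freq_hz: float) -> float:
    """E2E latency = upload delay + RSU execution delay."""
    comm_delay = task_bits / uplink_rate_bps   # seconds to transmit the task
    comp_delay = task_cycles / rsu_freq_hz     # seconds to execute on the RSU
    return comm_delay + comp_delay


def is_dropped(latency_s: float, deadline_s: float, residence_s: float) -> bool:
    """A task is dropped if it misses its deadline or outlasts the
    vehicle's remaining residence time under the RSU."""
    return latency_s > min(deadline_s, residence_s)


# Example: a 1 Mb task over a 10 Mbps uplink, needing 1e9 CPU cycles
# on a 5 GHz RSU server -> roughly 0.1 s + 0.2 s of latency.
lat = e2e_latency(1e6, 10e6, 1e9, 5e9)
print(lat, is_dropped(lat, deadline_s=0.5, residence_s=0.25))
```

Under this model, the offloading policy trades off the two delay terms per task: a faster RSU shrinks the computation term, but only if the residence window leaves enough time for the upload, which is exactly the coupling the online agents must learn.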