🤖 AI Summary
This work addresses the challenge of task offloading under stringent resource constraints in edge devices by proposing a decentralized deep reinforcement learning (DRL) agent that dynamically selects execution locations—local device, multi-access edge computing (MEC), or cloud—based on real-time conditions. The approach is the first to be deployed and evaluated on a real-world multi-device edge testbed integrated with live 5G communication, enabling a systematic assessment of the trade-offs between latency and energy consumption under local versus remote training paradigms. Experimental results demonstrate the feasibility of running DRL agents directly on end-user devices and quantitatively characterize the impact of different training deployment strategies on system performance, offering practical design guidelines for intelligent edge computing systems.
📝 Abstract
Allowing less capable devices to offload computational tasks to more powerful devices or servers enables new applications that may not run adequately on the device itself. Deciding where, and why, to run each of those applications is a complex task, so different approaches have been adopted to make offloading decisions. In this work, we propose a decentralized Deep Reinforcement Learning (DRL) agent that selects the computing location for each task. Unlike most existing work, we analyze it in a real testbed composed of several edge devices, each running the agent to decide where to execute each task. These devices are connected to a Multi-Access Edge Computing (MEC) server and to a Cloud server through 5G communications. We evaluate not only the agent's performance in meeting task requirements but also the implications of running such an agent locally, assessing the trade-offs of training locally versus remotely in terms of latency and energy consumption.
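To make the per-task decision loop concrete, the sketch below shows a minimal agent choosing among the three execution locations the paper considers (local device, MEC, cloud). Note the assumptions: the paper uses a *deep* RL agent on real device/network state, whereas this stand-in uses tabular Q-learning over hand-picked, discretized features (`cpu_load`, `link_latency_ms`, `battery`) and a hypothetical reward; none of these names or buckets come from the paper.

```python
import random

# Hypothetical illustration only: the paper's actual agent architecture,
# state features, and reward function are not specified here. This is a
# tabular Q-learning stand-in for the decentralized DRL decision loop.

ACTIONS = ["local", "mec", "cloud"]  # candidate execution locations

class OffloadingAgent:
    """Picks an execution location per task from coarse device/network state."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}             # (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def discretize(self, cpu_load, link_latency_ms, battery):
        # Bucket continuous readings so the Q-table stays small.
        return (int(cpu_load * 4), int(link_latency_ms // 20), int(battery * 4))

    def act(self, state):
        # Epsilon-greedy choice over the three locations.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; reward would combine task latency
        # and energy cost measured after executing at the chosen location.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )
```

In a deployment like the one evaluated here, each end-user device would run its own agent instance, and the trade-off the paper studies is whether `update` (training) runs on the device itself or on a remote server.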