🤖 AI Summary
Existing deep reinforcement learning (DRL) methods struggle with computational resource constraints and Q-value overestimation bias in edge-computing scenarios involving resource-limited, privacy-sensitive continuous control tasks. To address these challenges, we propose EdgeD3, a lightweight, edge-aware DRL algorithm built upon the DDPG framework. EdgeD3 integrates three key components: delayed target network updates, twin Q-function ensemble evaluation, and an edge-aware parameter freezing mechanism, making the algorithm practical for hardware-constrained edge deployment. This design mitigates overestimation bias and improves policy stability. Experimental results across multiple continuous control benchmarks show that EdgeD3 reduces GPU time by 25% relative to DDPG and uses 30% less memory than state-of-the-art baselines, while matching or surpassing their performance. These outcomes validate EdgeD3's computational efficiency, robustness, and practical deployability on edge devices.
📝 Abstract
Deep Reinforcement Learning is gaining increasing attention thanks to its capability to learn complex policies in high-dimensional settings. Recent advancements utilize a dual-network architecture to learn optimal policies through the Q-learning algorithm. However, this approach has notable drawbacks, such as an overestimation bias that can disrupt the learning process and degrade the performance of the resulting policy. To address this, novel algorithms have been developed that mitigate overestimation bias by employing multiple Q-functions. Edge scenarios, which prioritize privacy, have recently gained prominence. In these settings, limited computational resources pose a significant challenge for complex Machine Learning approaches, making the efficiency of algorithms crucial to their performance. In this work, we introduce a novel Reinforcement Learning algorithm tailored for edge scenarios, called Edge Delayed Deep Deterministic Policy Gradient (EdgeD3). EdgeD3 enhances the Deep Deterministic Policy Gradient (DDPG) algorithm, achieving significantly improved performance with 25% less Graphics Processing Unit (GPU) time while maintaining the same memory usage. Additionally, EdgeD3 consistently matches or surpasses the performance of state-of-the-art methods across various benchmarks, all while using 30% fewer computational resources and requiring 30% less memory.
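The two core mechanisms named above, twin Q-functions to counter overestimation and delayed target network updates, can be sketched generically as follows. This is a minimal illustration of clipped double-Q target computation and Polyak-averaged target updates (the style popularized by TD3), not the authors' exact EdgeD3 implementation; all function names, parameter values, and the example transition are hypothetical.

```python
def td_target(r, done, q1_next, q2_next, gamma=0.99):
    """Clipped twin-Q target: take the minimum of the two target
    Q-estimates so a single overestimating critic cannot inflate
    the bootstrap target."""
    return r + gamma * (1.0 - done) * min(q1_next, q2_next)

def soft_update(target_params, online_params, tau=0.005):
    """Delayed (Polyak-averaged) target update, applied only every
    few gradient steps: target <- tau * online + (1 - tau) * target."""
    return [tau * w + (1.0 - tau) * tw
            for tw, w in zip(target_params, online_params)]

# Example transition: reward 1.0, non-terminal,
# twin target critics estimate 2.0 and 1.5 for the next state-action.
y = td_target(r=1.0, done=0.0, q1_next=2.0, q2_next=1.5)
print(y)  # 1.0 + 0.99 * min(2.0, 1.5) ≈ 2.485
```

Because the target tracks the online network only slowly (small `tau`) and updates are applied at a reduced frequency, the bootstrap targets change gradually, which is what stabilizes learning in this family of methods.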