Edge Delayed Deep Deterministic Policy Gradient: efficient continuous control for edge scenarios

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep reinforcement learning (DRL) methods struggle with Q-value overestimation bias and with the computational constraints of edge-computing scenarios, where continuous control tasks must run on resource-limited, privacy-sensitive hardware. To address these challenges, we propose EdgeD3—a lightweight, edge-oriented DRL algorithm built on the DDPG framework. EdgeD3 combines three key ideas: delayed target-network updates, twin Q-function evaluation, and an edge-aware parameter-freezing mechanism—mitigating overestimation bias and improving policy stability while keeping the model light enough for edge deployment. Across multiple continuous control benchmarks, EdgeD3 uses 25% less GPU time than DDPG at the same memory footprint, and matches or surpasses state-of-the-art methods while using 30% fewer computational resources and 30% less memory. These results support EdgeD3's computational efficiency, robustness, and practical deployability on edge devices.
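The two bias-control ingredients named above—twin Q-functions and delayed (Polyak-averaged) target updates—are the standard DDPG-family recipe. A minimal sketch of both, assuming illustrative function names (not EdgeD3's actual implementation):

```python
import numpy as np

def twin_q_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Bootstrap target using the minimum of two Q-estimates.

    Taking min(Q1, Q2) biases the target pessimistically, which
    counteracts the overestimation introduced by bootstrapping on
    a single, noisy Q-function.
    """
    q_next = np.minimum(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * q_next

def soft_update(target_params, online_params, tau=0.005):
    """Polyak-averaged target update: target networks trail the
    online networks slowly, which stabilizes the bootstrap targets."""
    return [(1 - tau) * t + tau * o for t, o in zip(target_params, online_params)]
```

With `tau` near zero the targets change slowly between updates, trading faster tracking of the online networks for lower variance in the learning signal.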

📝 Abstract
Deep Reinforcement Learning is gaining increasing attention thanks to its capability to learn complex policies in high-dimensional settings. Recent advancements utilize a dual-network architecture to learn optimal policies through the Q-learning algorithm. However, this approach has notable drawbacks, such as an overestimation bias that can disrupt the learning process and degrade the performance of the resulting policy. To address this, novel algorithms have been developed that mitigate overestimation bias by employing multiple Q-functions. Edge scenarios, which prioritize privacy, have recently gained prominence. In these settings, limited computational resources pose a significant challenge for complex Machine Learning approaches, making the efficiency of algorithms crucial for their performance. In this work, we introduce a novel Reinforcement Learning algorithm tailored for edge scenarios, called Edge Delayed Deep Deterministic Policy Gradient (EdgeD3). EdgeD3 enhances the Deep Deterministic Policy Gradient (DDPG) algorithm, achieving significantly improved performance with 25% less Graphics Processing Unit (GPU) time while maintaining the same memory usage. Additionally, EdgeD3 consistently matches or surpasses the performance of state-of-the-art methods across various benchmarks, all while using 30% fewer computational resources and requiring 30% less memory.
Problem

Research questions and friction points this paper is trying to address.

Mitigates overestimation bias in deep reinforcement learning
Optimizes continuous control for edge computing scenarios
Reduces computational resource usage while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

EdgeD3 enhances DDPG with 25% less GPU time at the same memory usage
Matches or surpasses state-of-the-art methods on standard benchmarks
Uses 30% fewer computational resources and 30% less memory than competing methods
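The "Delayed" in the algorithm's name refers to the update schedule DDPG-family methods use: critics update every step, while the actor and target networks update only every few steps. A schematic skeleton of that schedule, with assumed names (not EdgeD3's actual API):

```python
def train_step(step, policy_delay=2):
    """Return which components update at a given training step.

    Critics update every step; the actor and target networks
    update only every `policy_delay` steps, so the policy is
    trained against lower-variance, more settled Q-estimates.
    """
    updates = ["critic"]                 # twin critics update each step
    if step % policy_delay == 0:
        updates += ["actor", "targets"]  # delayed actor/target updates
    return updates
```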
Alberto Sinigaglia
PhD student
Deep Reinforcement Learning · Deep Learning
Niccolò Turcato
Department of Information Engineering, University of Padova, Via Gradenigo 6/B, Padova, 35131, Italy
Ruggero Carli
Associate Professor at University of Padova
Control Theory
Gian Antonio Susto
Human-Inspired Technology Research Center, University of Padova, Via Luzzatti, 4, Padova, 35121, Italy; Department of Information Engineering, University of Padova, Via Gradenigo 6/B, Padova, 35131, Italy