🤖 AI Summary
Existing deep policy gradient methods rely on large replay buffers, batch updates, or target networks, hindering efficient online incremental learning on resource-constrained real-world robots. To address this, we propose Action Value Gradient (AVG), the first purely incremental deep policy gradient algorithm: it eliminates both replay buffers and target networks, updating parameters from single-step transitions alone. We further introduce online normalization and scaling techniques to mitigate gradient instability and training collapse in this minimal-buffering regime. On simulation benchmarks, AVG matches the performance of leading batch-based methods. Crucially, we present the first empirical validation of stable, efficient deep reinforcement learning with single-step incremental updates on real robotic platforms, including a robotic manipulator and a mobile robot, demonstrating robust online adaptive control under stringent computational and memory constraints. This work establishes a new paradigm for resource-efficient, truly online deep RL in embedded robotic systems.
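To make the "single-step transitions, no replay buffer, no target network" idea concrete, here is a minimal sketch of an incremental TD(0) update on a linear value function. This is an illustrative toy, not the paper's AVG algorithm: each update consumes only the most recent transition and discards it immediately.

```python
import numpy as np

def incremental_td_update(w, s, r, s_next, done, alpha=0.1, gamma=0.99):
    """One incremental TD(0) update on a linear value function V(s) = w @ s.

    Uses only the single most recent transition (s, r, s_next, done):
    no replay buffer, no batch update, no target network.
    Returns the updated weights and the TD error.
    """
    v_s = w @ s
    v_next = 0.0 if done else w @ s_next
    td_error = r + gamma * v_next - v_s
    w = w + alpha * td_error * s  # stochastic semi-gradient step
    return w, td_error

# Usage: learn the value of a one-step task with terminal reward 1.
w = np.zeros(1)
s = np.array([1.0])
for _ in range(200):
    w, delta = incremental_td_update(w, s, r=1.0, s_next=s, done=True)
```

With `done=True` the target is simply the reward, so `w` converges toward 1; the same per-transition loop is what "incremental learning" means in the summary above, in contrast to sampling minibatches from a stored buffer.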
📝 Abstract
Modern deep policy gradient methods achieve effective performance on simulated robotic tasks, but they all require large replay buffers, expensive batch updates, or both, making them incompatible with real systems that have resource-limited computers. We show that these methods fail catastrophically when limited to small replay buffers or during incremental learning, where updates use only the most recent sample, without batch updates or a replay buffer. We propose a novel incremental deep policy gradient method, Action Value Gradient (AVG), along with a set of normalization and scaling techniques that address the instability of incremental learning. On robotic simulation benchmarks, we show that AVG is the only incremental method that learns effectively, often achieving final performance comparable to batch policy gradient methods. This advance enabled us to demonstrate, for the first time, effective deep reinforcement learning with real robots using only incremental updates, on both a robotic manipulator and a mobile robot.
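The abstract credits normalization for stabilizing incremental learning. One standard way to normalize observations online, without storing past samples, is a running mean and variance via Welford's algorithm; the sketch below is a generic illustration under that assumption, not the paper's exact technique.

```python
import numpy as np

class OnlineNormalizer:
    """Streaming per-dimension normalizer using Welford's algorithm.

    Maintains a running mean and variance in O(dim) memory, so each
    incoming observation can be standardized without a replay buffer.
    """

    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)   # sum of squared deviations
        self.count = 0

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        var = self.m2 / max(self.count - 1, 1)
        return (x - self.mean) / np.sqrt(var + 1e-8)
```

In an incremental agent, `update` and `normalize` would be called once per environment step, keeping observation statistics current while memory use stays constant.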