TimeRL: Efficient Deep Reinforcement Learning with Polyhedral Dependence Graphs

📅 2025-01-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep reinforcement learning (DRL) systems handle dynamic tensor shapes, runtime data dependencies, and memory scheduling poorly, leading to redundant computation, low execution efficiency, and excessive GPU memory consumption. To address these challenges, this paper introduces a declarative programming model based on recurrent tensors and proposes the first dynamic, symbolic polyhedral dependence graph (PDG), which unifies the representation of dynamic control flow and data dependencies for whole-program optimization. The approach combines recurrent-tensor programming, automatic vectorization, incrementalization, operator fusion, and buffer donation to enable fine-grained memory scheduling and computational reuse. Experimental evaluation shows that, compared to state-of-the-art DRL systems, TimeRL trains up to 47× faster while using 16× less peak GPU memory, significantly improving expressiveness, optimization efficacy, and hardware utilization for dynamic DRL programs.

📝 Abstract
Modern deep learning (DL) workloads increasingly use complex deep reinforcement learning (DRL) algorithms that generate training data within the learning loop. This results in programs with several nested loops and dynamic data dependencies between tensors. While DL systems with eager execution support such dynamism, they lack the optimizations and smart scheduling of graph-based execution. Graph-based execution, however, cannot express dynamic tensor shapes, instead requiring the use of multiple static subgraphs. Either execution model for DRL thus leads to redundant computation, reduced parallelism, and less efficient memory management. We describe TimeRL, a system for executing dynamic DRL programs that combines the dynamism of eager execution with the whole-program optimizations and scheduling of graph-based execution. TimeRL achieves this by introducing the declarative programming model of recurrent tensors, which allows users to define dynamic dependencies as intuitive recurrence equations. TimeRL translates recurrent tensors into a polyhedral dependence graph (PDG) with dynamic dependencies as symbolic expressions. Through simple PDG transformations, TimeRL applies whole-program optimizations, such as automatic vectorization, incrementalization, and operator fusion. The PDG also allows for the computation of an efficient program-wide execution schedule, which decides on buffer deallocations, buffer donations, and GPU/CPU memory swapping. We show that TimeRL executes current DRL algorithms up to 47× faster than existing DRL systems, while using 16× less GPU peak memory.
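To make the abstract's notion of "dynamic dependencies as intuitive recurrence equations" concrete, here is a minimal sketch of the kind of recurrence that recurrent tensors are described as expressing declaratively: the discounted-return computation common in DRL, G[t] = r[t] + γ·G[t+1]. This is an illustrative plain-NumPy evaluation, not the TimeRL API; a system like TimeRL would capture the symbolic dependence of G[t] on G[t+1] in a polyhedral dependence graph rather than executing the loop eagerly.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Evaluate the recurrence G[t] = r[t] + gamma * G[t+1], with G[T] = 0.

    The backward loop makes the inter-step data dependency explicit:
    each G[t] depends on the not-yet-computed-at-trace-time G[t+1].
    """
    T = len(rewards)
    G = np.zeros(T + 1)  # G[T] = 0 is the base case of the recurrence
    for t in reversed(range(T)):
        G[t] = rewards[t] + gamma * G[t + 1]
    return G[:T]

# Three steps of reward 1.0 with gamma = 0.5:
# G[2] = 1.0, G[1] = 1 + 0.5*1.0 = 1.5, G[0] = 1 + 0.5*1.5 = 1.75
returns = discounted_returns(np.array([1.0, 1.0, 1.0]), gamma=0.5)
```

Because the dependence pattern (t depends on t+1) is affine and known symbolically, a polyhedral representation can reason about the whole loop at once, e.g. to schedule buffer reuse or vectorize across independent trajectories.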
Problem

Research questions and friction points this paper is trying to address.

Deep Reinforcement Learning
Dynamic Tensor Shapes
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

TimeRL System
Recurrent Tensors
Polyhedral Dependence Graphs