🤖 AI Summary
Neural network depth in real-time reinforcement learning induces observation latency, compromising timely action execution. Method: We propose a theoretically motivated lightweight architecture integrating temporal skip connections and history-augmented observation encoding, jointly optimizing for low inference latency and high representational capacity under strict action-frequency constraints. Our approach combines historical state encoding, parallel neuron computation, and multi-granularity temporal modeling. Contribution/Results: Evaluated on MuJoCo and MinAtar benchmarks across diverse RL algorithms, latency configurations, and tasks, the method demonstrates strong robustness, and parallel neuron computation achieves a 6–350% inference speedup over baseline models. It provides a scalable, theoretically grounded solution for deploying real-time RL in resource-constrained environments.
📝 Abstract
Real-time reinforcement learning (RL) introduces several challenges. First, policies are constrained to a fixed number of actions per second due to hardware limitations. Second, the environment may change while the network is still computing an action, leading to observational delay. The first issue can partly be addressed with pipelining, yielding higher throughput and potentially better policies. However, the second issue remains: if each neuron operates in parallel with an execution time of $\tau$, an $N$-layer feed-forward network experiences an observation delay of $\tau N$. Reducing the number of layers decreases this delay, but at the cost of the network's expressivity. In this work, we explore the trade-off between minimizing delay and preserving the network's expressivity. We present a theoretically motivated solution that leverages temporal skip connections combined with history-augmented observations. We evaluate several architectures and show that those incorporating temporal skip connections achieve strong performance across various neuron execution times, reinforcement learning algorithms, and environments, including four MuJoCo tasks and all MinAtar games. Moreover, we demonstrate that parallel neuron computation can accelerate inference by 6–350% on standard hardware. Our investigation into temporal skip connections and parallel computation paves the way for more efficient RL agents in real-time settings.
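To make the $\tau N$ delay concrete, the following is a minimal toy sketch (not the paper's implementation; all class and variable names are illustrative). Each pipeline stage models one layer taking one tick of wall-clock time $\tau$, so a deep, fully pipelined network emits an output based on the observation from $N$ ticks ago, while a temporal skip connection forwards the current observation directly to the output stage with minimal delay:

```python
from collections import deque


class PipelinedNet:
    """Toy model of an N-layer network where every layer takes one tick.

    An observation fed in at tick t emerges from the deep pathway at
    tick t + n_layers, i.e. the deep features lag by n_layers ticks.
    """

    def __init__(self, n_layers: int):
        # One buffer slot per layer; initially empty (warm-up phase).
        self.pipeline = deque([None] * n_layers)

    def step(self, obs):
        deep = self.pipeline.popleft()  # feature computed n_layers ticks ago
        self.pipeline.append(obs)       # current obs enters the pipeline
        return deep


class SkipNet(PipelinedNet):
    """Adds a temporal skip connection: the output stage sees both the
    delayed deep feature and the fresh, undelayed observation."""

    def step(self, obs):
        deep = super().step(obs)
        return deep, obs  # skip path carries the current observation


net = SkipNet(n_layers=4)
outputs = [net.step(t) for t in range(6)]
deep, fresh = outputs[5]
# Deep path lags by n_layers = 4 ticks (obs from tick 1), while the
# skip path delivers the current observation (tick 5).
```

In the actual architecture the deep path would carry learned representations rather than raw observations, but the timing structure is the same: the skip connection bounds the freshest information's delay independently of network depth.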