AI Summary
This work investigates the compute-optimal scaling of value-based deep reinforcement learning under fixed computational budgets, focusing on how resource allocation between model capacity and the update-to-data (UTD) ratio affects sample efficiency. Through systematic ablation studies, we identify, for the first time, a phenomenon we term *TD overfitting*: increasing batch size markedly degrades Q-function accuracy in small models, while larger models remain robust. This reveals a strong coupling among model scale, batch size, and UTD. Building on this insight, we propose a compute-optimal scaling principle for temporal-difference learning: scaling up model capacity enables higher UTD ratios and larger batch sizes, thereby substantially improving performance per unit of compute. Empirical evaluation across diverse tasks demonstrates 2–3× gains in sample efficiency, providing both reproducible empirical guidance and theoretical grounding for large-scale RL training.
Abstract
As models grow larger and training them becomes more expensive, it becomes increasingly important to scale training recipes not just to larger models and more data, but to do so in a compute-optimal manner that extracts maximal performance per unit of compute. While such scaling has been well studied for language modeling, reinforcement learning (RL) has received less attention in this regard. In this paper, we investigate compute scaling for online, value-based deep RL. These methods present two primary axes for compute allocation: model capacity and the update-to-data (UTD) ratio. Given a fixed compute budget, we ask: how should resources be partitioned across these axes to maximize sample efficiency? Our analysis reveals a nuanced interplay between model size, batch size, and UTD. In particular, we identify a phenomenon we call TD-overfitting: increasing the batch size quickly harms Q-function accuracy for small models, but this effect is absent in large models, enabling effective use of large batch sizes at scale. We provide a mental model for understanding this phenomenon and build guidelines for choosing batch size and UTD to optimize compute usage. Our findings provide a grounded starting point for compute-optimal scaling in deep RL, mirroring studies in supervised learning but adapted to TD learning.
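To make the UTD ratio and batch size knobs concrete, here is a minimal, hypothetical sketch (not the paper's actual training recipe) of a tabular TD-learning loop with a replay buffer, where `utd_ratio` counts the number of TD updates performed per environment step and `batch_size` sets the minibatch sampled from the buffer for each update. The toy environment, reward, and hyperparameters are illustrative assumptions only.

```python
import numpy as np

def td_training_loop(num_env_steps=200, utd_ratio=2, batch_size=32,
                     n_states=8, n_actions=2, lr=0.1, gamma=0.99, seed=0):
    """Toy tabular Q-learning loop illustrating the two compute axes:
    each environment step collects one transition, then performs
    `utd_ratio` TD updates on minibatches of size `batch_size`
    sampled from the replay buffer. Total TD updates scale as
    num_env_steps * utd_ratio * batch_size."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    buffer = []  # replay buffer of (s, a, r, s') transitions
    s = 0
    for _ in range(num_env_steps):
        a = int(rng.integers(n_actions))      # random exploration policy
        r = float(s == n_states - 1)          # reward only in the last state
        s_next = (s + a) % n_states           # toy deterministic dynamics
        buffer.append((s, a, r, s_next))
        s = s_next
        # UTD ratio: number of TD updates per environment step
        for _ in range(utd_ratio):
            idx = rng.integers(len(buffer), size=batch_size)
            for s_b, a_b, r_b, sn_b in (buffer[i] for i in idx):
                target = r_b + gamma * Q[sn_b].max()   # bootstrapped TD target
                Q[s_b, a_b] += lr * (target - Q[s_b, a_b])
    return Q
```

Under a fixed compute budget, raising `utd_ratio` or `batch_size` spends more updates per collected transition; the paper's point is that how much of this extra compute actually helps depends on model capacity, which in this tabular sketch corresponds loosely to the size of the Q-table.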