Compute-Optimal Scaling for Value-Based Deep RL

πŸ“… 2025-08-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work investigates compute-optimal scaling for value-based deep reinforcement learning under a fixed computational budget, focusing on how allocating resources between model capacity and the update-to-data (UTD) ratio affects sample efficiency. Through systematic ablation studies, we identify, for the first time, a phenomenon we term *TD overfitting*: increasing the batch size markedly degrades Q-function accuracy in small models, while larger models remain robust. This reveals a strong coupling among model scale, batch size, and UTD. Building on this insight, we propose a compute-optimal scaling principle for temporal-difference learning: scaling up model capacity enables higher UTD ratios and larger batch sizes, thereby substantially improving performance per unit of compute. Empirical evaluation across diverse tasks demonstrates 2–3× gains in sample efficiency, providing reproducible empirical guidance and practical guidelines for large-scale RL training.

πŸ“ Abstract
As models grow larger and training them becomes expensive, it becomes increasingly important to scale training recipes not just to larger models and more data, but to do so in a compute-optimal manner that extracts maximal performance per unit of compute. While such scaling has been well studied for language modeling, reinforcement learning (RL) has received less attention in this regard. In this paper, we investigate compute scaling for online, value-based deep RL. These methods present two primary axes for compute allocation: model capacity and the update-to-data (UTD) ratio. Given a fixed compute budget, we ask: how should resources be partitioned across these axes to maximize sample efficiency? Our analysis reveals a nuanced interplay between model size, batch size, and UTD. In particular, we identify a phenomenon we call TD-overfitting: increasing the batch size quickly harms Q-function accuracy for small models, but this effect is absent in large models, enabling effective use of large batch sizes at scale. We provide a mental model for understanding this phenomenon and build guidelines for choosing batch size and UTD to optimize compute usage. Our findings provide a grounded starting point for compute-optimal scaling in deep RL, mirroring studies in supervised learning but adapted to TD learning.
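The UTD ratio named in the abstract is the number of gradient updates performed per environment step. A minimal sketch of how it enters an online value-based training loop follows; the function name, the stubbed transitions, and the commented-out Q-update are illustrative assumptions, not the paper's actual code:

```python
# Toy illustration of the update-to-data (UTD) ratio in online value-based RL.
# The environment interaction and TD update are stubbed out for brevity.
import random

def train(total_env_steps, batch_size, utd_ratio):
    replay_buffer = []  # transitions collected online
    q_updates = 0
    for step in range(total_env_steps):
        # 1) Collect one transition from the environment (stubbed here).
        transition = (step, random.random())
        replay_buffer.append(transition)
        # 2) Perform `utd_ratio` gradient updates per environment step,
        #    each on a freshly sampled mini-batch.
        for _ in range(utd_ratio):
            if len(replay_buffer) >= batch_size:
                batch = random.sample(replay_buffer, batch_size)
                # q_network.update(batch)  # TD update would go here
                q_updates += 1
    return q_updates

# With UTD = 4, four Q-updates happen per environment step once the
# buffer holds at least one batch (steps 7..99 here): 93 * 4 = 372.
print(train(total_env_steps=100, batch_size=8, utd_ratio=4))
```

Raising `utd_ratio` or `batch_size` spends more compute per collected transition, which is exactly the allocation trade-off the paper studies against model capacity.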
Problem

Research questions and friction points this paper is trying to address.

Optimizing compute allocation between model capacity and update-to-data ratio
Investigating compute-optimal scaling for value-based deep reinforcement learning
Addressing TD-overfitting phenomenon in batch size and model size interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compute-optimal scaling for value-based RL
Balancing model capacity and update-to-data ratio
Large models enable effective large batch usage
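The trade-off in these bullets can be sketched with a back-of-envelope cost model. The accounting below (compute per environment step proportional to parameters × batch size × UTD, with ~6 FLOPs per parameter per example for a forward and backward pass) is a common simplification assumed here for illustration, not the paper's exact formula:

```python
# Hedged back-of-envelope model of compute allocation in value-based RL.
# Assumption: per-step training compute ~ flops_per_param * params * batch * UTD.

def compute_per_env_step(params, batch_size, utd_ratio, flops_per_param=6):
    """FLOPs spent on gradient updates per environment step."""
    return flops_per_param * params * batch_size * utd_ratio

def max_utd(budget_per_step, params, batch_size, flops_per_param=6):
    """Largest UTD ratio that fits a fixed per-step compute budget."""
    return int(budget_per_step // (flops_per_param * params * batch_size))

budget = 6e12          # hypothetical FLOPs available per environment step
small, large = 1e6, 1e8  # toy "small" and "large" Q-network sizes

# At a fixed budget and batch size, a 100x larger model supports ~100x
# fewer updates per step, making the allocation a genuine trade-off.
print(max_utd(budget, small, batch_size=256))  # 3906
print(max_utd(budget, large, batch_size=256))  # 39
```

The paper's finding that large models tolerate large batches and high UTD says which corner of this trade-off space is compute-optimal; the toy model only shows why the axes compete for the same budget.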