🤖 AI Summary
This work addresses the inefficiency of autoregressive decoding in large language models during reinforcement learning training, where slow rollouts can consume up to 70% of total training time. To mitigate this bottleneck, the authors propose a quantized Actor approach that combines INT8/FP8 quantization with Adaptive Clipping Range (ACR) and invariant scaling, evaluated on the DeepScaleR and DAPO setups. The ACR mechanism prevents training collapse over long horizons, while invariant scaling reduces quantization noise so that the small weight changes between RL steps are not lost to quantization. Experimental results show that the proposed method accelerates rollouts by 20%–80%, substantially improving overall training efficiency without compromising stability or the efficacy of policy updates.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has become a trending paradigm for training reasoning large language models (LLMs). However, due to the autoregressive decoding nature of LLMs, the rollout process becomes the efficiency bottleneck of RL training, accounting for up to 70% of the total training time. In this work, we propose Quantized Reinforcement Learning (QuRL), which uses a quantized actor to accelerate the rollout. We address two challenges in QuRL. First, we propose Adaptive Clipping Range (ACR), which dynamically adjusts the clipping ratio based on the policy ratio between the full-precision actor and the quantized actor; this is essential for mitigating long-term training collapse. Second, we identify the weight update problem: weight changes between RL steps are extremely small, making it difficult for the quantization operation to capture them effectively. We mitigate this problem with an invariant scaling technique that reduces quantization noise and amplifies weight updates. We evaluate our method with INT8 and FP8 quantization experiments on DeepScaleR and DAPO, achieving 20% to 80% faster rollout during training.
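The abstract does not give the exact ACR formula, but the idea of coupling the PPO-style clipping range to the discrepancy between the full-precision and quantized actors can be sketched as follows. This is a minimal illustration under assumed conventions: the function names (`adaptive_clip_range`, `ppo_clip_loss`) and the specific rule of tightening the clip range as the full-precision/quantized policy ratio drifts from 1 are hypothetical, not the paper's actual method.

```python
import numpy as np

def adaptive_clip_range(logp_full, logp_quant, base_eps=0.2):
    """Hypothetical ACR sketch: shrink the PPO clip range as the
    full-precision and quantized actors diverge on the rollout tokens."""
    # Policy ratio between the full-precision and quantized actor.
    ratio_fq = np.exp(logp_full - logp_quant)
    # Average drift of that ratio away from 1 measures quantization impact.
    drift = np.abs(ratio_fq - 1.0).mean()
    # Larger drift -> tighter clipping, limiting unstable updates.
    return base_eps / (1.0 + drift)

def ppo_clip_loss(logp_new, logp_old, advantages, eps):
    """Standard PPO clipped surrogate loss with a given clip range eps."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -np.minimum(unclipped, clipped).mean()
```

When the two actors agree exactly, the clip range stays at its base value; any quantization-induced disagreement tightens it, which matches the abstract's claim that ACR stabilizes long-horizon training with a quantized rollout actor.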