🤖 AI Summary
This work addresses the instability in reinforcement learning caused by the discrepancy between low-precision rollouts and full-precision training, which leads to optimization difficulties and degraded generation quality. To mitigate this issue, the authors propose QaRL, a quantization-aware reinforcement learning framework that aligns the training-side forward computation with the quantized rollouts to reduce the training-inference gap. Additionally, they introduce TBPO, a sequence-level optimization algorithm featuring a dual-clipping trust-band mechanism to suppress erroneous token generation. The proposed approach effectively alleviates repetition and garbled text in long-form generation, achieving a 5.5-point improvement over the baseline on mathematical reasoning tasks with the Qwen3-30B-A3B MoE model, while maintaining training stability and retaining the throughput benefits of low-bit inference.
📝 Abstract
Large language model (LLM) reinforcement learning (RL) pipelines are often bottlenecked by rollout generation, making end-to-end training slow. Recent work mitigates this by running rollouts with quantization to accelerate decoding, the most expensive stage of the RL loop. However, these setups destabilize optimization by amplifying the training-inference gap: rollouts run at low precision, while learning updates are computed at full precision. To address this challenge, we propose QaRL (Rollout Alignment Quantization-Aware RL), which aligns the training-side forward pass with the quantized rollout to minimize the mismatch. We further identify a failure mode in quantized rollouts: long-form responses tend to degenerate into repetitive, garbled tokens (error tokens). To mitigate these problems, we introduce TBPO (Trust-Band Policy Optimization), a sequence-level objective with dual clipping for negative samples that keeps updates within the trust region. On the Qwen3-30B-A3B MoE model for math problems, QaRL outperforms quantized-rollout training by +5.5 points while improving stability and preserving the throughput benefits of low-bit inference.
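To make the dual-clipping idea concrete, here is a minimal sketch of a sequence-level clipped surrogate with a second clip on negative-advantage samples. This is an illustrative reconstruction in the spirit of dual-clip PPO, not the paper's exact TBPO objective: the function name, the band parameters `eps` and `c`, and the use of summed sequence log-probabilities for the importance ratio are all assumptions.

```python
import math

def dual_clip_seq_objective(logp_new, logp_old, advantage, eps=0.2, c=3.0):
    """Hypothetical sequence-level dual-clip surrogate (illustrative only).

    logp_new / logp_old: summed log-probs of the whole response under the
    current and rollout policies; advantage: sequence-level advantage.
    """
    # Sequence-level importance ratio between current and rollout policies.
    ratio = math.exp(logp_new - logp_old)
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Standard PPO-style clipped surrogate (to be maximized).
    surrogate = min(ratio * advantage, clipped_ratio * advantage)
    if advantage < 0:
        # Second clip: for negative samples, bound how negative the
        # objective can become, so a blown-up ratio (e.g. on garbled
        # error tokens) cannot drag the update far outside the trust band.
        surrogate = max(surrogate, c * advantage)
    return surrogate
```

For example, a negative-advantage sequence whose ratio has exploded is capped at `c * advantage` instead of contributing an arbitrarily large penalty, which is one way to read "keeping updates within the trust region."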