QaRL: Rollout-Aligned Quantization-Aware RL for Fast and Stable Training under Training--Inference Mismatch

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability in reinforcement learning caused by the discrepancy between low-precision rollouts and full-precision training, which leads to optimization difficulties and degraded generation quality. To mitigate this issue, the authors propose QaRL, a quantization-aware reinforcement learning framework that aligns forward computation with quantized rollouts to reduce the training-inference gap. Additionally, they introduce TBPO, a sequence-level optimization algorithm featuring a dual-clipping trust band mechanism to suppress erroneous token generation. The proposed approach effectively alleviates repetition and garbled text in long-form generation, achieving a 5.5-point improvement over the baseline on mathematical reasoning tasks using the Qwen3-30B-A3B MoE model while maintaining training stability and enabling efficient low-bit inference throughput.
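The alignment idea described above can be illustrated with per-tensor fake quantization: the training-side forward applies a quantize-dequantize step so it sees the same rounded weight values the low-bit rollout engine uses, shrinking the training-inference gap. This is a minimal sketch only; the bit width, per-tensor granularity, and symmetric rounding scheme below are assumptions, not the paper's exact recipe.

```python
def fake_quantize(weights, num_bits=8):
    """Symmetric per-tensor fake quantization (quantize-dequantize).

    Running training-side weights through this before the forward pass
    makes the full-precision trainer "see" the same discretized values
    as a low-bit rollout engine would. Sketch under assumed settings.
    """
    qmax = 2 ** (num_bits - 1) - 1                # e.g. 127 for int8
    amax = max((abs(w) for w in weights), default=0.0)
    scale = amax / qmax if amax > 0 else 1.0      # avoid divide-by-zero
    # round to the integer grid, then map back to float
    return [round(w / scale) * scale for w in weights]
```

With matched quantization on both sides, the log-probabilities used for the policy-gradient update come from (approximately) the same distribution that generated the rollout tokens.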
📝 Abstract
Large language model (LLM) reinforcement learning (RL) pipelines are often bottlenecked by rollout generation, making end-to-end training slow. Recent work mitigates this by running rollouts with quantization to accelerate decoding, the most expensive stage of the RL loop. However, these setups destabilize optimization by amplifying the training-inference gap: rollouts operate at low precision, while learning updates are computed at full precision. To address this challenge, we propose QaRL (Rollout-Aligned Quantization-Aware RL), which aligns the training-side forward pass with the quantized rollout to minimize the mismatch. We further identify a failure mode in quantized rollouts: long-form responses tend to produce repetitive, garbled tokens (error tokens). To mitigate this, we introduce TBPO (Trust-Band Policy Optimization), a sequence-level objective with dual clipping for negative samples, aimed at keeping updates within the trust region. On Qwen3-30B-A3B MoE for math problems, QaRL outperforms quantized-rollout training by +5.5 points while improving stability and preserving low-bit throughput benefits.
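The trust-band mechanism in TBPO can be sketched in the spirit of a PPO-style clipped surrogate with a second clip for negative-advantage sequences, which bounds how strongly a large importance ratio can amplify a penalty. The paper's exact objective is not reproduced here; the function name, the sequence-level ratio, and the `eps_low`/`eps_high`/`dual_clip` parameters are assumptions for illustration.

```python
import math

def tbpo_loss(logp_new, logp_old, advantage,
              eps_low=0.2, eps_high=0.2, dual_clip=3.0):
    """Sequence-level dual-clipped surrogate loss (sketch).

    logp_new / logp_old: summed token log-probabilities of one
    response, so the importance ratio is computed per sequence
    rather than per token.
    """
    ratio = math.exp(logp_new - logp_old)
    # standard clip band around 1
    clipped = max(min(ratio, 1.0 + eps_high), 1.0 - eps_low)
    surrogate = min(ratio * advantage, clipped * advantage)
    if advantage < 0:
        # second clip: cap the magnitude of updates driven by
        # negative samples (error tokens) to stay in the trust band
        surrogate = max(surrogate, dual_clip * advantage)
    return -surrogate  # loss = negative surrogate
```

The lower clip only activates for negative advantages, which matches the stated goal of suppressing error tokens without letting a single bad sequence dominate the gradient.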
Problem

Research questions and friction points this paper is trying to address.

training-inference mismatch
quantized rollout
optimization instability
error tokens
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantization-Aware Reinforcement Learning
Rollout Alignment
Training-Inference Mismatch
Trust-Band Policy Optimization
Low-Bit LLM Training
Hao Gu
Sun Yat-Sen University
Planetary aeronomy, Atmospheric escape, Space physics

Hao Wang
City University of Hong Kong
Deep Reinforcement Learning, Mobile Crowdsourcing

Jiacheng Liu
HKUST

Lujun Li
HKUST
Efficient Machine Learning, Large Language Models

Qiyuan Zhu
The Hong Kong University of Science and Technology

Bei Liu
Postdoc at HKUST
Speech Processing, Large Language Models, Efficient AI, Model Compression

Binxing Xu
Zhejiang University

Lei Wang
The Hong Kong University of Science and Technology

Xintong Yang
The Hong Kong University of Science and Technology

Sida Lin
The Hong Kong University of Science and Technology

Sirui Han
The Hong Kong University of Science and Technology
Large Language Model, Interdisciplinary Artificial Intelligence

Yike Guo
The Hong Kong University of Science and Technology