Each Prompt Matters: Scaling Reinforcement Learning Without Wasting Rollouts on Hundred-Billion-Scale MoE

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address four critical bottlenecks in RL training of hundred-billion-parameter Mixture-of-Experts (MoE) models (zero-variance prompt waste, instability in long-horizon importance sampling, advantage reversal, and inference throughput limits), this work proposes the "Every Prompt Matters" paradigm. Methodologically, we design a multi-stage zero-variance elimination mechanism; introduce entropy-adaptive ESPO with router replay; develop a reward model correction module to mitigate advantage reversal; and build a high-throughput RL system featuring FP8 inference, overlapped reward computation, and length-aware scheduling, integrated with GRPO and dynamic token-/sequence-level importance sampling. Instantiated in CompassMax-V3-Thinking, our approach significantly improves training stability and sample efficiency, and achieves state-of-the-art performance on both internal and external reasoning and generation benchmarks, demonstrating effectiveness, robustness, and scalability.
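
To make the zero-variance problem concrete, here is a minimal sketch of why group-based methods such as GRPO get no gradient signal from prompts whose rollouts all receive the same reward, and how a simple filter recovers that compute. The function names and the exact filtering rule are illustrative assumptions, not the paper's multi-stage mechanism.

```python
# Hedged sketch: zero-variance prompt filtering for group-based RL (e.g., GRPO).
# Names and the filtering rule are illustrative, not the paper's implementation.
import numpy as np

def group_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """GRPO-style advantage: normalize rewards within a rollout group."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def filter_zero_variance(prompt_groups: dict) -> dict:
    """Drop prompts whose rollouts all score the same.

    For such groups every advantage is ~0, so the policy gradient vanishes
    and the rollouts are wasted compute.
    """
    return {p: r for p, r in prompt_groups.items() if r.std() > 0.0}

# Example: the "all correct" and "all wrong" prompts are filtered out.
groups = {
    "easy":   np.array([1.0, 1.0, 1.0, 1.0]),   # solved by every rollout
    "hard":   np.array([0.0, 0.0, 0.0, 0.0]),   # solved by no rollout
    "useful": np.array([1.0, 0.0, 1.0, 0.0]),   # informative gradient signal
}
kept = filter_zero_variance(groups)
assert list(kept) == ["useful"]
print({p: group_advantages(r).round(2).tolist() for p, r in kept.items()})
```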

📝 Abstract
We present CompassMax-V3-Thinking, a hundred-billion-scale MoE reasoning model trained with a new RL framework built on one principle: each prompt must matter. Scaling RL to this size exposes critical inefficiencies: zero-variance prompts that waste rollouts, unstable importance sampling over long horizons, advantage inversion from standard reward models, and systemic bottlenecks in rollout processing. To overcome these challenges, we introduce several unified innovations: (1) Multi-Stage Zero-Variance Elimination, which filters out non-informative prompts and stabilizes group-based policy optimization (e.g., GRPO) by removing wasted rollouts; (2) ESPO, an entropy-adaptive optimization method that balances token-level and sequence-level importance sampling to maintain stable learning dynamics; (3) a Router Replay strategy that aligns training-time MoE router decisions with inference-time behavior to mitigate train-infer discrepancies, coupled with a reward model adjustment to prevent advantage inversion; (4) a high-throughput RL system with FP8-precision rollouts, overlapped reward computation, and length-aware scheduling to eliminate performance bottlenecks. Together, these contributions form a cohesive pipeline that makes RL on hundred-billion-scale MoE models stable and efficient. The resulting model delivers strong performance across both internal and public evaluations.
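
To make the token-/sequence-level trade-off in (2) concrete, below is a minimal, hedged sketch of one plausible entropy-adaptive blend of importance ratios. The gating rule, threshold, and the name espo_ratios are my assumptions; the abstract does not spell out ESPO's exact form.

```python
# Hedged sketch: entropy-adaptive mix of token- and sequence-level importance
# ratios. The blend below is an assumption, not the paper's ESPO definition.
import torch

def espo_ratios(logp_new: torch.Tensor,   # [B, T] token log-probs, new policy
                logp_old: torch.Tensor,   # [B, T] token log-probs, behavior policy
                entropy: torch.Tensor,    # [B, T] per-token entropy, new policy
                mask: torch.Tensor,       # [B, T] 1 for response tokens
                ent_threshold: float = 2.0) -> torch.Tensor:
    """Blend token-level and sequence-level IS ratios by token entropy."""
    log_ratio = (logp_new - logp_old) * mask
    token_ratio = log_ratio.exp()                                     # [B, T]
    # Length-normalized sequence ratio keeps the product of per-token
    # ratios from exploding over long horizons.
    lengths = mask.sum(dim=1, keepdim=True).clamp(min=1)
    seq_ratio = (log_ratio.sum(dim=1, keepdim=True) / lengths).exp()  # [B, 1]
    # Assumed gating: high-entropy tokens keep their own (noisier but
    # informative) token ratio; low-entropy tokens fall back to the
    # smoother sequence-level ratio.
    alpha = (entropy / ent_threshold).clamp(0.0, 1.0)
    return alpha * token_ratio + (1.0 - alpha) * seq_ratio
```
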
Problem

Research questions and friction points this paper is trying to address.

Eliminates wasted rollouts from zero-variance prompts in large-scale RL
Stabilizes importance sampling and prevents advantage inversion in training
Resolves systemic bottlenecks in rollout processing for efficient scaling (see the scheduling sketch after this list)
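
One such bottleneck is stragglers in batched rollout generation. Below is a hedged sketch of length-aware scheduling: bucketing prompts by a predicted response length so each inference batch holds sequences of similar length, cutting padding and tail latency. The predictor, bucket edges, and function names are illustrative assumptions, not the paper's system.

```python
# Hedged sketch: length-aware rollout scheduling by predicted response length.
from collections import defaultdict

def schedule_by_length(prompts, predict_len, bucket_edges=(512, 2048, 8192),
                       batch_size=8):
    """Yield batches of prompts grouped into length buckets."""
    buckets = defaultdict(list)
    for p in prompts:
        est = predict_len(p)
        # First bucket whose upper edge covers the estimate; overflow bucket last.
        idx = next((i for i, e in enumerate(bucket_edges) if est <= e),
                   len(bucket_edges))
        buckets[idx].append(p)
    for idx in sorted(buckets):
        group = buckets[idx]
        for i in range(0, len(group), batch_size):
            yield group[i:i + batch_size]

# Toy predictor: assume longer prompts yield longer responses.
prompts = [f"q{i}" * (i + 1) for i in range(20)]
for batch in schedule_by_length(prompts, predict_len=lambda p: 64 * len(p)):
    print(len(batch), [len(p) for p in batch])
```
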
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Stage Zero-Variance Elimination filters non-informative prompts to stabilize policy optimization
ESPO balances token and sequence importance sampling for stable learning dynamics
Router Replay aligns training and inference MoE router decisions to prevent discrepancies (sketched after this list)
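
The sketch below shows one way router replay could work for a MoE layer: record the top-k expert indices chosen during (e.g., FP8) inference rollouts and reuse them in the training forward pass, so gradients flow through the same experts the sampler actually used. The module and parameter names are assumptions, not the paper's code.

```python
# Hedged sketch: a MoE router that can replay rollout-time expert choices.
from typing import Optional

import torch
import torch.nn as nn

class ReplayableRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor,
                replay_idx: Optional[torch.Tensor] = None):
        logits = self.gate(x)                               # [N, E]
        if replay_idx is None:                              # rollout: record
            topk_idx = logits.topk(self.k, dim=-1).indices  # [N, k]
        else:                                               # training: replay
            topk_idx = replay_idx
        # Gate weights are recomputed from the *current* parameters;
        # only the expert selection is pinned to the rollout's choice.
        gates = torch.gather(logits, -1, topk_idx).softmax(dim=-1)
        return topk_idx, gates

router = ReplayableRouter(d_model=16, n_experts=8)
x = torch.randn(4, 16)
idx, _ = router(x)                     # inference-time: record routing
_, gates = router(x, replay_idx=idx)   # training-time: replay routing
```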
👥 Authors
Anxiang Zeng (Shopee LLM Team)
Haibo Zhang (Shopee LLM Team)
Hailing Zhang (Shopee LLM Team)
Kaixiang Mo (Shopee LLM Team)
Liang Yao (Shopee LLM Team)
Ling Hu (Shopee LLM Team)
Long Zhang (Shopee LLM Team)
Shuman Liu (Shopee LLM Team)
Shuyi Xie (Shopee LLM Team)
Yanshi Li (Shopee LLM Team)
Yizhang Chen (Shopee LLM Team)
Yuepeng Sheng (Shopee LLM Team)
Yuwei Huang (Shopee LLM Team)
Zhaochen Xu (Shopee LLM Team)
Zhiqiang Zhou (Beijing Institute of Technology)
Ziqin Liew (Shopee LLM Team)