🤖 AI Summary
To address four critical bottlenecks in RL training of hundred-billion-parameter Mixture-of-Experts (MoE) models—zero-variance prompts that waste rollouts, instability in long-horizon importance sampling, advantage reversal, and inference throughput limits—this work proposes the "Every Prompt Matters" paradigm. Methodologically, the authors design a multi-stage zero-variance elimination mechanism; introduce entropy-adaptive ESPO with router replay; develop a reward model correction module to mitigate advantage reversal; and build a high-throughput RL system featuring FP8 inference, overlapped reward computation, and length-aware scheduling, integrated with GRPO and dynamic token-/sequence-level importance sampling. The resulting model, CompassMax-V3-Thinking, shows significantly improved training stability and sample efficiency, achieving state-of-the-art performance on both internal and external reasoning and generation benchmarks and demonstrating the approach's effectiveness, robustness, and scalability.
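The zero-variance problem is easy to see in group-based methods like GRPO: if every rollout for a prompt receives the same reward, the group-normalized advantages are all zero and the prompt contributes no gradient signal. The paper's multi-stage mechanism is not specified here; the following is a minimal single-stage sketch of the underlying filtering idea (function names and the `eps` stabilizer are illustrative, not from the paper):

```python
def group_advantages(rewards, eps=1e-6):
    """GRPO-style group-relative advantages: (r_i - mean) / std over the group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

def keep_informative(groups):
    """Drop zero-variance prompts: when all rollout rewards in a group tie,
    every advantage is 0, so the whole group is wasted compute."""
    return [g for g in groups if max(g["rewards"]) > min(g["rewards"])]
```

A real pipeline would apply such filtering before (and possibly during) rollout to avoid spending inference budget on prompts the current policy always solves or always fails.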
📝 Abstract
We present CompassMax-V3-Thinking, a hundred-billion-scale MoE reasoning model trained with a new RL framework built on one principle: every prompt must matter. Scaling RL to this size exposes critical inefficiencies: zero-variance prompts that waste rollouts, unstable importance sampling over long horizons, advantage inversion from standard reward models, and systemic bottlenecks in rollout processing. To overcome these challenges, we introduce several unified innovations: (1) Multi-Stage Zero-Variance Elimination, which filters out non-informative prompts and stabilizes group-based policy optimization (e.g., GRPO) by removing wasted rollouts; (2) ESPO, an entropy-adaptive optimization method that balances token-level and sequence-level importance sampling to maintain stable learning dynamics; (3) a Router Replay strategy that aligns training-time MoE router decisions with inference-time behavior to mitigate train-infer discrepancies, coupled with a reward model adjustment that prevents advantage inversion; (4) a high-throughput RL system with FP8-precision rollouts, overlapped reward computation, and length-aware scheduling to eliminate performance bottlenecks. Together, these contributions form a cohesive pipeline that makes RL on hundred-billion-scale MoE models stable and efficient, and the resulting model delivers strong performance across both internal and public evaluations.
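The abstract does not define ESPO's blending rule, but the stated idea, balancing token-level against sequence-level importance sampling as a function of entropy, can be illustrated with a hypothetical interpolation. Everything below (the geometric-mean sequence ratio, the linear entropy gate, the `h_lo`/`h_hi` thresholds) is an assumed sketch, not the paper's actual formulation:

```python
import math

def blended_ratios(logp_new, logp_old, entropy, h_lo=0.5, h_hi=2.0):
    """Hypothetical entropy-adaptive blend of importance-sampling ratios.

    Per-token ratios exp(logp_new - logp_old) are high-variance over long
    horizons; a length-normalized sequence-level ratio (geometric mean of the
    token ratios) is smoother. We weight toward the sequence-level ratio as
    policy entropy rises, via a linear gate clipped to [0, 1].
    """
    n = len(logp_new)
    token_ratios = [math.exp(a - b) for a, b in zip(logp_new, logp_old)]
    seq_ratio = math.exp(sum(a - b for a, b in zip(logp_new, logp_old)) / n)
    w = min(1.0, max(0.0, (entropy - h_lo) / (h_hi - h_lo)))
    return [(1.0 - w) * r + w * seq_ratio for r in token_ratios]
```

At low entropy the gate gives pure token-level ratios (fine-grained credit assignment); at high entropy it falls back to the more stable sequence-level ratio.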