Asymmetric Proximal Policy Optimization: mini-critics boost LLM reasoning

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In RL4LLM, conventional critics suffer from unstable value estimation under sparse rewards and long reasoning chains, and are expensive to train at LLM scale. To address these challenges, this paper proposes AsyPPO, a lightweight PPO variant built on an asymmetric architecture of mini-critics. Its core innovations include: (i) a set of lightweight mini-critics, each trained on disjoint prompt shards, which encourages diversity while preserving calibration and reduces value-estimation bias; (ii) an advantage-masking mechanism that suppresses updates in states where the critics agree and gradients therefore add little learning signal; and (iii) entropy regularization that filters out high-divergence states flagged by inter-critic disagreement, curbing spurious exploration. After training on only 5,000 open-source samples, AsyPPO outperforms classic PPO by more than 6% on Qwen3-4B-Base and about 3% on Qwen3-8B/14B-Base, surpasses strong baselines such as GRPO, and substantially improves training stability and convergence robustness.

📝 Abstract
Most recent RL for LLMs (RL4LLM) methods avoid explicit critics, replacing them with average advantage baselines. This shift is largely pragmatic: conventional value functions are computationally expensive to train at LLM scale and often fail under sparse rewards and long reasoning horizons. We revisit this bottleneck from an architectural perspective and introduce Asymmetric Proximal Policy Optimization (AsyPPO), a simple and scalable framework that restores the critic's role while remaining efficient in large-model settings. AsyPPO employs a set of lightweight mini-critics, each trained on disjoint prompt shards. This design encourages diversity while preserving calibration, reducing value-estimation bias. Beyond robust estimation, AsyPPO leverages inter-critic uncertainty to refine the policy update: (i) masking advantages in states where critics agree and gradients add little learning signal, and (ii) filtering high-divergence states from entropy regularization, suppressing spurious exploration. After training on open-source data with only 5,000 samples, AsyPPO consistently improves learning stability and performance across multiple benchmarks over strong baselines, such as GRPO, achieving performance gains of more than six percent on Qwen3-4b-Base and about three percent on Qwen3-8b-Base and Qwen3-14b-Base over classic PPO, without additional tricks. These results highlight the importance of architectural innovations for scalable, efficient algorithms.
Problem

Research questions and friction points this paper is trying to address.

Reducing value-estimation bias in large language model reinforcement learning
Improving learning stability under sparse rewards and long reasoning horizons
Enhancing policy updates through lightweight mini-critics and uncertainty utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses lightweight mini-critics on disjoint prompt shards
Leverages inter-critic uncertainty to refine policy updates
Filters high-divergence states from entropy regularization
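The three innovation bullets above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the disagreement metric (per-state standard deviation across mini-critics) and the threshold names `agree_eps` and `diverge_eps` are assumptions for exposition.

```python
import numpy as np

def asyppo_signals(values, returns, agree_eps=0.05, diverge_eps=0.5):
    """Sketch of AsyPPO's uncertainty-guided policy-update signals.

    values:  (K, T) array, per-state value estimates from K mini-critics
             (each hypothetically trained on a disjoint prompt shard)
    returns: (T,) array, empirical returns for the same T states
    """
    v_mean = values.mean(axis=0)        # ensemble value estimate
    disagreement = values.std(axis=0)   # inter-critic uncertainty per state
    advantages = returns - v_mean
    # (i) advantage masking: zero out states where critics agree,
    #     since their gradients add little learning signal
    adv_mask = disagreement > agree_eps
    # (ii) entropy filtering: drop high-divergence states from the
    #      entropy bonus to suppress spurious exploration
    entropy_mask = disagreement < diverge_eps
    return advantages * adv_mask, entropy_mask
```

In a full PPO loop, the masked advantages would replace the raw ones in the clipped surrogate objective, and the entropy bonus would be summed only over states where `entropy_mask` is true.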