Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

📅 2025-08-11
🤖 AI Summary
To address the reproducibility challenges in training large language models (LLMs) for long-chain reasoning, which stem from insufficient transparency in data and methodology, this paper proposes an end-to-end training framework. First, it constructs a high-quality, small-scale chain-of-thought (CoT) supervised fine-tuning (SFT) dataset. Second, it introduces Gradient-Preserving clipping Policy Optimization (GPPO), a novel policy gradient algorithm whose gradient-preserving clipping mitigates the exploration suppression and the neglect of suboptimal trajectories inherent in conventional clipping, thereby improving negative-sample learning efficiency and policy exploration. Third, it integrates SFT with GPPO-based reinforcement learning for stable and efficient training. Empirically, the method achieves state-of-the-art performance on mathematical and programming reasoning benchmarks: 90.5% and 83.2% accuracy on AIME 2024 and 2025, respectively, and 66.0% and 58.1% on LiveCodeBench V5 and V6.

📝 Abstract
We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving, achieving outstanding performance across multiple benchmarks. Although the community has already produced many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because training details are often incompletely disclosed. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), along with detailed ablation studies for each experimental component. For SFT data, our experiments show that a small number of high-quality data sources is more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy filtering. In addition, we investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5 and 58.1% on LiveCodeBench V6.
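The gradient-preserving clipping idea from the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's exact objective: the function name `gppo_surrogate`, the clipping thresholds, and the detach-based construction are assumptions made for the sketch. Where standard PPO-style clipping zeroes the gradient for tokens whose importance ratio falls outside the clip range, this variant keeps the clipped value in the forward pass while letting a scaled-down gradient flow through the ratio, so clipped tokens still contribute a learning signal.

```python
import torch

def gppo_surrogate(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    """Per-token surrogate loss with gradient-preserving clipping (sketch).

    Standard PPO clipping stops all gradient flow for tokens whose
    importance ratio lies outside [1 - eps_low, 1 + eps_high]. Here the
    forward value is still the clipped ratio, but the backward pass
    receives a gradient scaled by (clipped / ratio), so clipped tokens
    are "gently" backpropagated rather than silenced entirely.
    """
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # (clipped / ratio) is detached: the forward value equals `clipped`,
    # while the gradient flows through `ratio`, scaled by that factor.
    gp_ratio = (clipped / ratio).detach() * ratio
    return -(gp_ratio * advantages)  # negated: minimized as a loss
```

Inside the clip range `clipped == ratio`, so the scale factor is 1 and the sketch reduces to the ordinary unclipped surrogate; outside it, the gradient is attenuated but never zeroed, which is one way to realize the "gently backpropagates gradients from clipped tokens" behavior described above.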
Problem

Research questions and friction points this paper is trying to address.

Improves reasoning models via Gradient-Preserving Clipping Policy Optimization
Addresses incomplete training details in high-performance inference models
Enhances exploration and learning from negative samples in RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Long Chain-of-Thought supervised fine-tuning
Gradient-Preserving clipping Policy Optimization
High-quality data over diverse sources
Zhenpeng Su
Klear Team, Kuaishou Technology

Leiyu Pan
Tianjin University
Natural Language Processing, Multilingual, Machine Translation

Xue Bai
Klear Team, Kuaishou Technology

Dening Liu
Klear Team, Kuaishou Technology

Guanting Dong
Renmin University of China
LLM Reasoning & Alignment, Deep Search Agent, Agentic RL

Jiaming Huang
Klear Team, Kuaishou Technology

Wenping Hu
Klear Team, Kuaishou Technology

Guorui Zhou
Unknown affiliation
Recommender System, Advertising, Artificial Intelligence, Machine Learning, NLP