🤖 AI Summary
This work proposes a contrastive learning framework that explicitly integrates Noise Contrastive Estimation (NCE) into online reinforcement learning for large language models, avoiding the complex heuristics—such as asymmetric clipping and zero-variance filtering—commonly required by existing GRPO-based methods. By partitioning each group of sampled outcomes into positive and negative sets and directly maximizing the likelihood of the positives, the approach dispenses with conventional advantage estimation and its associated post-processing steps. The method achieves competitive or superior performance against strong baselines—including DAPO and online DPO—on multiple challenging mathematical reasoning benchmarks, while keeping training simple and efficient.
📝 Abstract
GRPO is a standard approach to endowing pretrained LLMs with reasoning capabilities. It estimates the advantage of an outcome from a group of $K$ outcomes, and promotes those with positive advantages inside a trust region. Since GRPO discriminates between good and bad outcomes softly, it benefits from additional refinements such as asymmetric clipping and zero-variance data filtering. While effective, these refinements require significant empirical insight and can be challenging to identify. We instead propose an explicit contrastive learning approach. Instead of estimating advantages, we bifurcate $K$ outcomes into positive and negative sets, then maximize the likelihood of positive outcomes. Our approach can be viewed as an online instantiation of (multi-label) noise contrastive estimation for LLM reasoning. We validate our method by demonstrating competitive performance on a suite of challenging math benchmarks against strong baselines such as DAPO and online DPO.
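To make the core idea concrete, here is a minimal sketch of a multi-label NCE objective over a group of $K$ outcomes: each positive outcome is contrasted against all negatives via a softmax, and the loss is the mean negative log-likelihood of the positives. The function name, inputs, and exact normalization are illustrative assumptions, not the paper's implementation.

```python
import math

def multilabel_nce_loss(logps, labels):
    """Illustrative multi-label NCE over one group of K outcomes.

    logps:  per-outcome sequence log-probabilities under the policy
    labels: 1 for a positive (e.g. correct) outcome, 0 for a negative
    """
    pos = [lp for lp, y in zip(logps, labels) if y == 1]
    neg = [lp for lp, y in zip(logps, labels) if y == 0]
    if not pos or not neg:
        return 0.0  # degenerate group: nothing to contrast against

    losses = []
    for p in pos:
        # Softmax of this positive against all negatives,
        # computed with a max-shift (log-sum-exp) for numerical stability.
        m = max([p] + neg)
        denom = math.exp(p - m) + sum(math.exp(n - m) for n in neg)
        losses.append(-((p - m) - math.log(denom)))
    return sum(losses) / len(losses)
```

With one positive and one negative at equal log-probability the loss is $\log 2$, and it shrinks toward zero as the policy assigns more mass to the positives—the gradient thus pushes probability from negative outcomes onto positive ones without any explicit advantage estimate.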