🤖 AI Summary
This work addresses the susceptibility of large language models to superficial perturbations in reasoning tasks and the challenges of diversity collapse and vanishing gradients in existing GRPO methods. To mitigate training instability, the authors propose TA-GRPO, a transformation-augmented framework that generates semantically equivalent variants for each problem—such as paraphrases or variable renamings—and aggregates rewards across these variants to compute the advantage function. This approach theoretically reduces the distributional shift between training and testing, thereby enhancing generalization. Experimental results demonstrate significant performance gains on mathematical reasoning benchmarks, with improvements of up to 9.84 and 5.05 Pass@k points on AMC12/AIME24 and GPQA-Diamond, respectively.
📝 Abstract
Large language models trained via next-token prediction are fundamentally pattern-matchers: sensitive to superficial phrasing variations even when the underlying problem is identical. Group Relative Policy Optimization (GRPO) was designed to improve reasoning, but in fact it worsens this situation through two failure modes: diversity collapse, where training amplifies a single solution strategy while ignoring alternatives, and vanishing gradients, where a large portion of questions yield zero gradient signal because all rollouts receive identical rewards. We propose TA-GRPO (Transform-Augmented GRPO), which generates semantically equivalent transformed variants of each question (via paraphrasing, variable renaming, and format changes) and computes advantages by pooling rewards across the entire group. This pooled computation ensures mixed rewards even when the original question is too easy or too hard, while training on diverse phrasings promotes multiple solution strategies. We provide theoretical justification showing that TA-GRPO reduces the probability of zero gradients and improves generalization via reduced train-test distribution shift. Experiments on mathematical reasoning benchmarks show consistent Pass@k improvements, with gains of up to 9.84 points on competition math (AMC12, AIME24) and 5.05 points on out-of-distribution scientific reasoning (GPQA-Diamond).
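The pooled-advantage idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`grpo_advantages`, `ta_grpo_advantages`) and the exact normalization (group mean/std with a small epsilon, a common GRPO convention) are assumptions. The sketch shows how a question whose rollouts are all correct yields zero advantages under per-question normalization, while pooling rewards with a transformed variant restores a nonzero signal.

```python
def _normalize(rewards, eps=1e-8):
    """Mean-center and scale a reward group (standard GRPO-style normalization)."""
    mu = sum(rewards) / len(rewards)
    std = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (std + eps) for r in rewards]

def grpo_advantages(rollout_rewards):
    """Standard GRPO: advantages are normalized within one question's rollout group.
    If all rollouts get the same reward, every advantage is zero (no gradient)."""
    return _normalize(rollout_rewards)

def ta_grpo_advantages(rewards_per_variant):
    """TA-GRPO-style pooling (sketch): rewards from the original question and its
    semantically equivalent variants form one group before normalization, so even
    if one variant's rollouts all succeed or all fail, the pooled group can still
    yield nonzero advantages. Returns advantages split back per variant."""
    pooled = [r for group in rewards_per_variant for r in group]
    adv = _normalize(pooled)
    out, i = [], 0
    for group in rewards_per_variant:
        out.append(adv[i:i + len(group)])
        i += len(group)
    return out

# Original question too easy: all rollouts correct -> zero advantages under GRPO.
easy = [1.0, 1.0, 1.0, 1.0]
hard_variant = [0.0, 1.0, 0.0, 0.0]  # a hypothetical paraphrase that proves harder
print(grpo_advantages(easy))                      # all zeros: no gradient signal
print(ta_grpo_advantages([easy, hard_variant]))   # mixed rewards: nonzero advantages
```

This also makes the zero-gradient argument concrete: pooling only fails to produce signal when *every* variant's rollouts receive identical rewards, which is strictly less likely than a single question doing so.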