🤖 AI Summary
Large language models (LLMs) trained via reinforcement learning with verifiable rewards often inflate response length to squeeze out accuracy gains, degrading reasoning efficiency. To address this, the authors propose Group Filtered Policy Optimization (GFPO), which samples larger groups of responses per problem during training and filters the responses used for policy updates by one of two metrics: response length or token efficiency (reward per token). They further introduce Adaptive Difficulty GFPO, which allocates more training samples to harder problems based on real-time difficulty estimates, improving the trade-off between training compute and accuracy. On the Phi-4-reasoning model, across STEM and coding benchmarks (AIME 2024/2025, GPQA, Omni-MATH, LiveCodeBench), GFPO cuts GRPO's length inflation by 46-71% while maintaining accuracy; optimizing for token efficiency raises the reduction to 71-85%. The work shows that strategically spending more compute at training time can directly reduce inference-time cost without sacrificing performance.
📝 Abstract
Large language models trained with reinforcement learning with verifiable rewards tend to trade length for accuracy--inflating response lengths to achieve gains in accuracy. While longer answers may be warranted for harder problems, many tokens are merely "filler": repetitive, verbose text that makes no real progress. We introduce GFPO (Group Filtered Policy Optimization), which curbs this length explosion by sampling larger groups per problem during training and filtering the responses to train on using two key metrics: (1) response length and (2) token efficiency, the reward-per-token ratio. By sampling more at training time, we teach models to think less at inference time. On the Phi-4-reasoning model, GFPO cuts GRPO's length inflation by 46-71% across challenging STEM and coding benchmarks (AIME 24/25, GPQA, Omni-MATH, LiveCodeBench) while maintaining accuracy. Optimizing for reward per token further increases reductions in length inflation to 71-85%. We also propose Adaptive Difficulty GFPO, which dynamically allocates more training resources to harder problems based on real-time difficulty estimates, improving the balance between computational efficiency and accuracy, especially on difficult questions. GFPO demonstrates that increased training-time compute directly translates to reduced test-time compute--a simple yet effective trade-off for efficient reasoning.
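The core filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, signature, and tie-breaking behavior are assumptions; only the two selection criteria (shortest responses, or highest reward per token) come from the abstract.

```python
def gfpo_filter(responses, rewards, k, metric="length"):
    """Select the k responses (out of a sampled group) to train on.

    responses : list of token sequences, one per sampled completion
    rewards   : verifiable reward for each response
    metric    : "length" keeps the k shortest responses;
                "token_efficiency" keeps the k with the highest
                reward-per-token ratio.
    Returns the sorted indices of the retained responses; the policy
    gradient would then be computed only on this filtered subset.
    (Illustrative sketch -- names and details are not from the paper.)
    """
    if metric == "length":
        scores = [-len(r) for r in responses]  # shorter is better
    elif metric == "token_efficiency":
        scores = [rw / max(len(r), 1) for r, rw in zip(responses, rewards)]
    else:
        raise ValueError(f"unknown metric: {metric}")
    # Indices of the k best responses under the chosen score
    keep = sorted(range(len(responses)),
                  key=lambda i: scores[i], reverse=True)[:k]
    return sorted(keep)
```

Sampling a larger group and keeping only the top-k under one of these metrics is what shifts compute from inference to training: the model sees more candidates per problem during training, but is only reinforced on concise or efficient ones.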