🤖 AI Summary
This work addresses a data-utilization inefficiency in late-stage reinforcement learning (RL) training: trivially solvable samples (pass rate of 1) come to dominate the training pool. To mitigate this, the authors propose Composition-RL, the first automated composition strategy designed specifically for high-pass-rate prompts. The method generates verifiable composite prompts and applies curriculum learning to progressively increase composition depth. By also enabling cross-domain prompt fusion, Composition-RL improves the reasoning capability and generalization of large language models ranging from 4B to 30B parameters, while raising the utilization efficiency of verifiable data in RL training.
📝 Abstract
Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, shrinking the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach that targets pass-rate-1 prompts to better utilize limited verifiable data. Specifically, Composition-RL automatically composes multiple problems into a single new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Code, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
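The core composition idea can be sketched in a few lines. This is a minimal illustration under assumptions: `VerifiablePrompt`, the concatenation template, the substring-matching reward, and the linear depth schedule are all hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class VerifiablePrompt:
    question: str
    answer: str  # gold answer checked by a rule-based verifier

def compose(prompts):
    """Fuse several verifiable problems into one composite prompt.

    The composite stays verifiable: its gold answer is the ordered
    list of sub-answers, so a binary reward can still be computed.
    """
    body = "\n".join(
        f"Problem {i + 1}: {p.question}" for i, p in enumerate(prompts)
    )
    question = (
        "Solve all problems below and report the answers in order, "
        "separated by semicolons.\n" + body
    )
    answer = "; ".join(p.answer for p in prompts)
    return VerifiablePrompt(question=question, answer=answer)

def reward(response, composite):
    """Binary reward: 1.0 only if every sub-answer appears in order."""
    parts = [a.strip() for a in composite.answer.split(";")]
    pos = 0
    for part in parts:
        pos = response.find(part, pos)
        if pos < 0:
            return 0.0
        pos += len(part)
    return 1.0

def depth_at(step, total_steps, max_depth=3):
    """Curriculum sketch: composition depth grows linearly from 1 to max_depth."""
    return 1 + (max_depth - 1) * step // max(total_steps - 1, 1)

# Two pass-rate-1 (trivial) prompts fused into one harder composite.
easy = [
    VerifiablePrompt("What is 2 + 3?", "5"),
    VerifiablePrompt("What is 4 * 6?", "24"),
]
combo = compose(easy)
print(reward("The answers are 5; 24", combo))  # 1.0
print(reward("Only got 24", combo))            # 0.0
```

In this sketch, composing prompts that the model solves individually yields a sample whose pass rate drops below 1 again, restoring a learning signal; the curriculum variant would use `depth_at` to pass progressively more sub-problems to `compose` as training proceeds.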